If you thought training AI models was hard, try building enterprise apps with them

Despite the billions of dollars spent each year training large language models (LLMs), there remains a sizable gap between building a model and actually integrating it into an application in a way that's useful. In principle, fine-tuning and retrieval-augmented generation (RAG) are well-understood methods for expanding the …

  1. HuBo Silver badge

    May the Schwarz Digits¹ Instincts² be with you³

    A tokenizer-free HATs-off⁴ to Aleph's Hierarchical Autoregressive Transformer for plus-sized language model⁵ fairness, sovereignty, and freedom from rigid pre-fitted vocabularies⁶ and girdles!

    Let's zerschlagen the vorgefertigte!⁷

    (Notes: ¹-IT and Digital Division of Schwarz Group, ²-AMD MI300X as in linked Aleph Alpha benchmark figure, ³-homage to Mel Brooks, ⁴-HAT=Hierarchical Autoregressive Transformer, Aleph's tokenizer-free tech, ⁵-LLM, ⁶-per Aleph's tech blog, ⁷-smash the prefab!)

    1. Jou (Mxyzptlk) Silver badge

      Re: May the Schwarz Digits¹ Instincts² be with you³

      Hats off, no AI could have summarized bullshit this good.

  2. uv

    "Specific knowledge should always be documented...

    ...and not in the parameters of the LLM," Andrulis said.

    This.

    Yet, an awful lot of "other people's money" is wasted by speculators on the premise that statistical 'prediction' of the next word in a given sequence is the magic answer to _everything_. Well, some may even know that it's bollocks, but they hope to cash in before the crash.

  3. deadlockvictim

    An AI might be able to summarize a text quite well, but can it summarise it quite well too?

    Given how racist & sexist tendencies are carried through, I'm not so sure.
