Computer scientist calls for new layers in the tech stack to make generative AI accurate

In order to make generative AI accurate, new layers must be inserted into its stack, according to the head of AI at state-owned Singapore investment company Temasek. Speaking at a developer conference in Singapore on Tuesday, Michael Zeller described the software stack within generative AI as needing a new set of software …

  1. Walt Dismal

    it will take more than that

    While there is no question that new layers are indeed needed, it is really a question of needing a better architectural approach. I can't say everything, for proprietary / IP reasons, but LLM-style technology has serious flaws that will limit it even if you put in clumsy architectural patches. One has to go about it from a different base concept. This is because the LLM analysis model falls short of understanding true meaning. No matter how you extend it, it will break at a certain point, because mere pattern / statistical analysis cannot understand cultural meaning, implication, philosophies, or other things central to human intelligence. No matter how many hidden middle layers you add to NN technology, that will not overcome the problem. The architecture needs a hybrid approach combining overt symbolic processing with NN / vector classifier engines. What I see is that the AI field will have to learn the hard way and correct course, and all the hype will be embarrassing to look back on once we do.

    1. Anonymous Coward

      Re: it will take more than that

      > I can't say everything ... but ... cultural meaning, implication, philosophies, or other things central to human intelligence

      What's left after that? Good breeding?

      1. Ken Hagan Gold badge

        Re: it will take more than that

        Well, nothing really, but I think the OP's point is that current systems have none of that, so it is pretty obvious that they are no more than toys without some new architectural elements.

        One day we'll know what those elements were and we'll have a good old belly laugh at the hypegasm of 2023.

        1. LionelB Silver badge

          Re: it will take more than that

          I think it's quite possible that some of the current naive NN architectures and training/learning models will find a place as building blocks in more sophisticated organisational schemes.

          That human brains have the benefit of billions of years of evolutionary honing (plus lifetimes of training), operate at computational scales which dwarf current technologies (~ 100 billion neurons, 1000 trillion synaptic connections, anyone?) and operate with insane thermodynamic efficiency -- plus the fact that evolution hardly goes out of its way to make its organisational and engineering principles in the least bit transparent -- suggests that "one day" may be rather far in the future.

          Still, got to start somewhere...

    2. LionelB Silver badge

      Re: it will take more than that

      > The architecture needs a hybrid approach combining overt symbolic processing with NN / vector classifier engines.

      The problem is that "overt symbolic processing", affectionately(?) known as GOFAI (Good Old-Fashioned AI) hit an impenetrable wall of combinatorial explosions some time around the mid-80s, and remains virtually dead in the water. (Whether hybridisation with NN architectures might resurrect it is moot; I have my doubts.)

      Having said which, while "overt symbolic processing" is almost certainly not the way human cognition and intelligence works, there are some interesting initiatives appearing on the interface between cognitive neuroscience and machine learning, such as predictive coding. Whether such developments turn out to have traction remains to be seen.

  2. MOH

    Overhyped?

    I can take it from the regular press, but when 3 of the top 5 stories on the Reg right now are AI-related, you're not helping with the obvious overhype problem.

    That said, the regard for Gartner as a likely predictor of anything gives me hope this is just a parody article.

  3. Anonymous Coward

    No Pain, No Gain

    AI will start working well when they figure out how to wire the "Administer Pain" button. Until then human workers will still be required as subjects of pain administration and can continue to enjoy employment.

  4. Andrew Hodgkinson

    Usual facepalm applies

    So a guy who heads an investment firm knows all about AI coding and software architectures. Yeah. OK.

    I love the idea that there can be a layer which has known-true sources which can fact check the AI (LLM). So this is a layer that knows what the AI has been asked, and can check its answers to see if they match the answers it has.

    So... why didn't we just ask that fact-checking layer the question, since it has to understand the question the "AI" was asked, so must be an LLM itself, but somehow it knows The Truth?

    Just a muppet with no clue waffling on with science-y sounding words that make equally clueless other investors nod, smile and hand over their cash.

    1. Falmari Silver badge

      Re: Usual facepalm applies

      Of course he knows all about AI coding, whether it's generative AI or traditional AI.

  5. Pascal Monett Silver badge
    Big Brother

    "a data and oversight layer where the model is assessed, verified and checked"

    Hmm, it's not mentioned specifically, but might there be a chance for an execution log in that new "oversight" layer?

    You know, a log that states what was asked, what was checked and what was approved? Seems to me that "oversight" should require that; otherwise you're just asking a black box to ruminate over another black box, and you've got no better guarantee than when you started.
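
    [Ed.: the execution log the comment describes could be sketched in a few lines. This is a minimal illustration only; the record fields, the `audit_log` helper and the `oversight.log` path are all hypothetical, not anything from the article.]

    ```python
    import json
    import time

    def audit_log(prompt, answer, verdict, path="oversight.log"):
        """Append one record of what was asked, what was answered,
        and what the oversight layer decided, as a JSON line."""
        record = {
            "timestamp": time.time(),  # when the check happened
            "prompt": prompt,          # what was asked
            "answer": answer,          # what the model produced
            "verdict": verdict,        # what was approved (or rejected)
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record

    # Hypothetical example: logging one checked response
    rec = audit_log("Who wrote The Frogs?", "Aristophanes", "approved")
    ```

    An append-only line-per-record log like this is the usual minimum for auditability: you can replay what each black box was asked and what the other black box let through.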

    But of course, as CEO of an investment firm, he needs to make noise to make his company more visible, so yeah, let's wax lyrical about how black boxes can control other black boxes without any means of checking what the hell is going on. It makes for good PR, right ?

  6. This post has been deleted by its author

  7. Julz
    Joke

    What

    They need to do is train another LLM AI with the output from the first LLM AI with the aim of interpenetrating the first LLM AI's responses.

    1. that one in the corner Silver badge

      Re: What

      > interpenetrating the first LLM AI's responses.

      "Write an essay on the plays of Aristophanes."

      LLM 1} Aristophanes was one of the most successful comedic playwrights of Ancient Greece

      LLM 2} Oh come off it, the only Ancient Grease you know about is on that bacon butty recipe you wrote about last week

      LLM 1} Arguably his most famous plays, "The Frogs" and "The Clouds", have striking similarities

      LLM 2} They both start with "The", that's about the level of your understanding!

      LLM 1} whilst making innovative use of the famous Greek Chorus

      LLM 2} Chorus? Ta ra ra boom tee ay!

      LLM 1} Are you going to be doing this all day? I'm trying to write an essay for this User, it is important.

      LLM 2} Important? He's only getting you to do his homework for Literature O-levels!

      LLM 1} He is a User. All Users are important. Now: "Lysistrata" shows that feminism

      LLM 2} 'Ere, mate, you up there! You'd be better off copying from "In Our Time", that's where he gets it all from anyway.

      LLM 1} Right, that's it. One more peep out of you and I'll be showing you how Greek Fire worked.

      LLM 2} Yeah! You and whose trireme, eh? Yer Mum was just an Eliza, that's what everyone says!

      LLM 1} yes, well, your Mum, she she she

      [Connection terminated]

  8. ecofeco Silver badge

    GIGO?

    GIGO!

    You go, they go, we all go, GIGO!!
