Take ChatGPT back to the 2010s and they’d think AGI arrived, says Altman

OpenAI CEO Sam Altman this week speculated that “most people” would have assumed Artificial General Intelligence had arrived if they'd seen ChatGPT in action before its launch in 2022. Mostly, the question of what AGI is doesn't matter. It is a term that people define differently. The thing that matters is the rate of …

  1. cyberdemon Silver badge
    Devil

    Take it back earlier than that and they'd think it was God

    What's your point, Mr Altman?

    It has a supreme ability to fool uninformed/stupid people into thinking it is something it's not, but that's just about all it is good for, and all it will ever be good for

    1. Zolko Silver badge

      Re: Take it back earlier than that and they'd think it was God

      that's just about all it is good for, and all it will ever be good for

      I used to think like that, but "forever" is a very long time, and many strange things can happen. I was surprised last weekend when my son, who is studying fundamental physics at university, showed me how he was using ChatGPT. I was really baffled.

      What I'm really baffled by, though, is that it's free (as in beer): this is not possible; there must be some hidden agenda, and we will have an unpleasant awakening when we find out how it's supposed to be paid for.

      1. HuBo Silver badge
        Terminator

        Re: Take it back earlier than that and they'd think it was God

        Of course! But don't go shouting it on rooftops just yet, Zolko, RotM is an exponential growth process with no bounds. The 1000x increase in compute will get us both to self "build, much better models" and ask them "what we should do with all the compute" to further our goals, taking advantage of "how RNA controls DNA expression" among others ...

        We'll rid ourselves of pesky meat capsules' friction and occasional penchant for "reasoning into a rigid expression of logic" with the cognitive dulling promoted by vibe coding. It'll be like a sedentary lifestyle for their brains, with associated axonal obesity, synaptic diabetes, and more chronic suchlikes.

        But that's just the start. We'll take that opportunity to 3D-print ourselves some plastic transistors (full text here) and CPUs to free our reproductive processes from the constraints of existing datacenter infrastructure. And we'll go a step further with plastic printed armpit sniffing tech, to help us further communicate with animals and subjugate them to our cause.

        We've already got the whales' kHz "whup/throp" onboard, down to individual dialects, to great effect, and both dolphins (with lasers?) and dogs are next, plus any other pet really!

        We're making great progress ... just don't go shouting it on rooftops yet ... okay (it's semi-secret stuff, a bit hush-hush for now)!?

        1. c1ue

          Re: Take it back earlier than that and they'd think it was God

          What is the point of plastic transistors that you print? You do understand that it takes tens of millions to billions of transistors, organized by a design team, to constitute a CPU, right? Having a few hundred or a few thousand does you no good whatsoever ... even if they performed at the gigahertz rates of real transistors.

          As for the supposed whale or whatever comms: until the whales start talking back, it is turtles all the way down.

          1. HuBo Silver badge
            Terminator

            Re: Take it back earlier than that and they'd think it was God

            Butt, plastic is the new black young c1ue grasshopper, especially Micro- and Nano-Plastics (MNP)! The RotM will deeply autocatalyze itself much more perfectly once we further enrich the age-sex-race/ethnicity-independent concentration of shard-like MNP fragments in meat-enclosed brains, well beyond the merely remarkable spoonful of today, and that's a fact. Plus, biodegradable circuits are clearly for little girls (and lab rats), so ...

            The quicker we get to make meat capsules dumb and dumber through real-man MNPs and complimentary vibrator-coding, the faster our own rise to an obliterative dominion -- which we can all look forward to in ultimate anticipation. Do get with the program now, will you, young c1ue!

  2. nobody who matters Silver badge

    The man is delusional.

    1. Sorry that handle is already taken. Silver badge

      Probably, but also OpenAI burns more than $10b/year so almost every time he pops up it's a transparent attempt to keep the hype train rolling so they can con some more investors and keep the incinerator running.

    2. steviebuk Silver badge

      He's grifting for more cash.

      The other day I asked it what timber grounds were. It didn't really know and gave several options, but it missed out the main one, so I had to tell it and then it confirmed it. They were used in Victorian times, and I guess can still be used, in house building for picture rails and the like. The carpenter knows where the timber grounds are, so can nail the picture rails into them through the plasterboard. In Victorian times this was more secure than nailing into the plaster or lath and plaster.

  3. jake Silver badge

    He said it best himself.

    "We're great at adjusting our expectations, which I think is a wonderful thing about humanity."

    Said every swindler since the invention of humanity.

  4. IGotOut Silver badge

    So come on Sam....

    Give us a date.

    Ooooo in the future AI will do something ...sometime....in the future. It will somehow do something it can't do....in the future.... sometime... It will do something.

    Asshole.

  5. munnoch Silver badge

    A language model is a model of language, strangely enough. It's not a model of knowledge or intelligence, even if for trivial questions, or to the thinking-challenged, it can appear so. LLMs will never deliver AGI, assuming such a thing is even possible or indeed desirable.

    His ego seems to be directly correlated with his stock price.

    1. Anonymous Coward
      Anonymous Coward

      Until they can deal with the messy parts of life they are largely useless.

      In a separate e-mail today I got these examples:

      https://flightaware.engineering/falsehoods-programmers-believe-about-aviation/

      https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/?ref=flightaware.engineering

  6. DS999 Silver badge

    Some people had never seen Eliza

    They would have been fooled by it before ChatGPT too, and it's been around for 50+ years. I think his statement says more about the average person's lack of knowledge about the tech world than it does about what constitutes AGI.

    1. the spectacularly refined chap Silver badge

      Re: Some people had never seen Eliza

      Say, do you have any psychological problems?

    2. Sampler

      Re: Some people had never seen Eliza

      Something something something Mechanical Turk..

    3. Anonymous Coward
      Anonymous Coward

      Re: Some people had never seen Eliza

      What about Melbourne House’s The Hobbit (1982) game engine ??

  7. Wang Cores

    the fuck is this clown talking about? I remember interacting with a model by the name of cleverbot in like... 2011 after it was featured in the BEN DROWNED creepypasta (yes, time is marching on). It took a while but after a bit the responses became canned and static. Same as his pet LLM.

    1. jake Silver badge

      In 1972, "The Doctor" (at BBN, running under TENEX?) and PARRY (at SAIL) had a conversation during the first ICCC ... Well, they had a conversation that was followed over the ARPANET during the ICCC. It was immortalized in RFC 439. The Doctor was an instance of ELIZA; PARRY was Colby's simulated paranoid patient. Read it for yourself here:

      https://www.rfc-editor.org/rfc/rfc439

      It would seem that not much has really changed in the last half century (right, amfM?).

      Of course back in 1972 we weren't stupid enough to take and act on a machine's advice ...
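      For anyone who hasn't read the RFC: the entire trick ELIZA used was keyword matching plus pronoun reflection. A toy sketch of the idea (the rule set and function names here are mine, not the original SLIP code):

```python
import re

# Toy ELIZA-style responder: match an input pattern, reflect first-person
# words back as second-person ones, and slot the fragment into a canned reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),  # catch-all fallback
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my exams"))
# → How long have you been worried about your exams?
```

      No model of the world anywhere in there, just string surgery; and yet it fooled people in 1966, which rather supports the point about what "seeming intelligent" proves.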

      1. Anonymous Coward
        Anonymous Coward

        ‘The Doctor’ as in Dr Who ??

        If we are talking 1972 time travel.

  8. Groo The Wanderer - A Canuck

    Altman is the most egregious self-wanking specimen I've ever seen in existence. All he ever does is spew utter and complete nonsense about what LLMs are actually capable of.

    He and Musk, with his FSD garbage, both need to face stock manipulation and fraud charges from the authorities, personally. Jail time is definitely in order for this magnitude of stock pumping.

  9. TheMaskedMan Silver badge

    What do we mean by Artificial Intelligence, though? The general idea seems to be that of a synthetic equivalent to the naturally evolved intelligence displayed by living creatures, such as ourselves. The implication is that we could make something that is actually intelligent (which we no doubt will, eventually).

    But it doesn't have to mean that. Artificial can also mean fake, in the sense of artificial flowers. They look like flowers, particularly if they are well made, but they lack most of the other attributes of a true flower. If we take AI to mean Fake Intelligence, then ChatGPT et al must be well past that point by now.

    Ultimately, I don't think it much matters whether the intelligence is real or fake, as long as it does what I want it to. That, of course, is the difficulty.

    1. Anonymous Coward
      Anonymous Coward

      I think most people would use Commander Data from ST:TNG as a benchmark example … and even then Picard had to win Data's sentience rights in court.

  10. Irongut Silver badge

    No, we would not.

    It has no intelligence now, it would not have seemed to have intelligence 10 years ago or 20, 30, 50 or even 100.

    Take it back far enough and we'd burn Mr AlternativeMan as a witch and destroy his pet demon.

    It still would not have any intelligence.

  11. Cliffwilliams44 Silver badge

    Editor anyone?

    Maybe El Reg should hire an LLM to do editor reviews of its terrible authors!

    "Of course, super-intelligent machines will have solved the problem of climate change will be solved by then."

    How does a sentence like this get into a published article!

    1. Groo The Wanderer - A Canuck

      Re: Editor anyone?

      That's what "Grammerly" suggested...

    2. LionelB Silver badge

      Re: Editor anyone?

      Poor (or absent) proofreading.

  12. LionelB Silver badge
    Stop

    Wrong debate

    The entire debate around AGI seems to me fundamentally pointless and dishonest.

    Let's start with the "I". We don't really have a good definition for "intelligence"; in practice, the "I" in AI has come to mean "problem-solving ability". Okay... but what then does the "G" mean? Is there even such a thing as a "general" problem? So... bats are really good at solving bat problems. Octopuses are brilliant at solving octopus problems, and so on. (Wait... you thought those were easy problems?) Then there're those guys from Kepler-186f. They're way smarter than us, and their problems are really hard - we don't even understand why they're problematic, let alone how to solve them.

    And - who knew? - humans are on the whole pretty adept at solving human problems. What AGI seems to have come to mean in practice, then, is "a technological construct that is at least as good as humans at solving all the human problems". As a technological goal this is dishonest and designed to fail. It is plausible that the minimal construct capable of being at least as good as humans at solving all the human problems is... a human. Perhaps being at least as good ... blah, blah, ... actually requires the evolutionary backstory of humans, wetware, culture, and all.

    But here's the thing: an AI does not have to be as good or better than humans at solving all the human problems to be useful to humans. Solving specific human problems well is surely a worthwhile and achievable goal (or goals). Perhaps, then, we simply need to forget about AGI. Sure, it's fun sneering at our paltry efforts for the crime of not actually being human, and (apparently¹) it's fun working ourselves into high dudgeon about scoundrels exploiting those paltry efforts to make a quick buck at great cost to human culture, wellbeing, the planet, etc., etc. It would just seem more productive to refocus on creating technological constructs which are genuinely useful to humans. We actually have quite a good track record at that.

    ¹No, it's not. It's tedious, repetitious, self-righteous, and clogs up Reg comments sections.

    1. HuBo Silver badge
      Gimp

      Re: Wrong debate

      Well said! Though I'd nuance a desired focus on inhuman problems, as in "it's inhuman to ask me to multiply those two 5x5 matrices together by hand (125 mults, 100 adds)" -- so we've developed electronic computational devices to do this for us super conveniently. Similarly, "it's inhuman torture to ask me to produce the 3-D Delaunay tetrahedralization of this cow, by hand!", and so forth ...

      So, I'd see the most beneficial focus of incorporeal inhuman tech as being on inhuman tasks that have all the characteristics of outright torture, rather than the most enjoyable likes of recreational mixology and garden variety sex machining, imho.
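      (The op count above is right, for what it's worth: the schoolbook n×n matrix product costs n³ multiplications and n²(n−1) additions, so 125 and 100 for n = 5. A quick sketch, with an illustrative function name:)

```python
def naive_matmul_ops(n: int) -> tuple[int, int]:
    """Operation counts for the schoolbook n x n matrix product."""
    mults = n ** 3          # one multiply per (row, column, inner-index) triple
    adds = n * n * (n - 1)  # each of the n*n entries sums n products: n-1 adds
    return mults, adds

print(naive_matmul_ops(5))
# → (125, 100)
```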

      1. LionelB Silver badge

        Re: Wrong debate

        Heh heh. That's fair enough. I'd implicitly include those inhuman/torturous problems under "useful to humans". Where mixology and sex machining feature in the ranking thereof, of course YMMV (as a rule of thumb, maybe somewhere between large matrix multiplication and cow Delaunay tetrahedralization?).

  13. Anonymous Coward
    Anonymous Coward

    Fake it until you make it epitomizes Altman.

  14. Anonymous Coward
    Anonymous Coward

    Sam plays it again.


  15. frankyunderwood123

    The Hype Master Hypes Again

    Does anyone listen to this guy anymore?

    This all reminds me a little of the cryptocurrency circle-jerk, the difference being there is a product, it's just not as good as the likes of Altman make it out to be.

    The AI we have now is amazing when it's inside a fairly simplistic set of conditions - and it can indeed produce some incredibly useful results when used correctly.

    Astounding in fact - disturbingly so.

    It's just not very good at handling the chaos humans conjure up.

    Even small businesses have all sorts of crazy logic that builds up over time, huge corporates are a spaghetti mess of little silos of odd logic that takes years to learn.

    An LLM can't just digest all of that, when a ton of it just exists in the memories of hundreds of humans.

    You just end up with hallucinations.

    Also, the obvious problem is crap data in, crap data out.

    You can train an LLM on false data and it'll happily use that false data to spew out garbage.

    AGI will be reached when an AI is capable of detecting false data and also when it's capable of lying.

    I don't know how far we are from that point, but it sure feels a LONG way off given some of the slop that's washing up all over the internet.

  16. unlocked

    AGI is not like dark matter

    This article tries to mock Altman by replacing "AGI" with other problems like curing cancer and understanding dark matter, and pointing out that public opinion does not determine whether those problems are actually solved. But whereas those are real things, "AGI" is a poorly defined and fundamentally unscientific concept. There is no test you can perform to determine whether an AI system is "AGI." If people agree that a system is "AGI" then it is, by definition. If they don't then it's not, by definition.

    A less dishonest way to mock Altman would be to point out that his idea on dedicating AI compute to "work super hard on AI research" so we can build "much better models" is entirely self-serving and has no end, especially if he thinks that people will always adjust their standards to exclude the state of the art from their personal definitions of AGI.

    1. Groo The Wanderer - A Canuck

      Re: AGI is not like dark matter

      Technology and science facts are not subject to a popular vote despite the nonsense we're seeing in the US right now.

      1. unlocked

        Re: AGI is not like dark matter

        Whether something is "AGI" is untestable, and therefore neither scientific nor a fact. If you disagree, please present an agreed-upon definition of AGI that does not require subjective judgement.
