Doom developer John Carmack thinks artificial general intelligence is doable by 2030

Legendary software developer John Carmack, who gave the world the first-person shooter, thinks it's likely that an artificial general intelligence (AGI) could be shown to the public around the year 2030. Carmack shared his view at an event announcing [Video] that his AGI startup Keen has hired Richard Sutton, chief …

  1. Chris Miller

    Those who look at LLMs like ChatGPT and decide AGI is 'only a few years away' are like a child who has seen a conjuror produce a coin from behind their ear and thinks they have found the solution to the national debt.

    1. Hans Neeson-Bumpsadese Silver badge
      Boffin

      There are two measures/targets for achieving AGI - a shorter-term one to convince investors of the achievement, and a longer-term one to actually achieve it.

      1. Anonymous Coward
        Anonymous Coward

        Convincing investors is easy though. You just need a convincing demo...I could do that cheap if you like. I'll need a midget, a large box, two laptops, Discord and an internet connection.

        1. Captain Scarlet
          Coat

          This sounds like something Wally would come up with, if not drinking coffee.

        2. Snowy Silver badge
          Joke

          A child, two tin cans and some string would be cheaper!

        3. Phil O'Sophical Silver badge

          I'll need a midget, a large box, two laptops, Discord and an internet connection.

          See the Mechanical Turk

        4. Benegesserict Cumbersomberbatch Silver badge

          "I gave the world the first-person shooter. Now let me give the world AGI."

          Forgive me if I'm not an enthusiastic yes.

        5. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      I agree with Carmack here. It's doable.

      LLMs are a product of limited hardware resources...they're the maximum possible with the technology available. What we have now does not represent the cutting edge of theoretical AI...what we have now is what is possible with a shitload of GPUs for training and the available resources on an average desktop. Training is limited by the sheer number of GPUs you can afford to buy and house in a single location...the usage of LLMs is limited to what is available on commodity servers...the existing solutions reek of compromises and trade-offs...

      All Carmack needs to do is figure out a way to train more complicated models with fewer resources, and produce models that don't require as much VRAM.

      These are two problems he is very capable of solving as he has a long history of getting a lot out of not much. I think this is what Carmack can see...in the same way he saw smooth graphics and 3D rendering on machines in the late 80s / early 90s when nobody else could.

      Whatever he does here will be interesting and potentially groundbreaking. The investment so far seems tiny, but given his background...it's actually massive.

      1. Andy 73 Silver badge

        Not bigger... not monolithic.

        That sounds dangerously like "if we just make it bigger, it will generalise". I don't think that's true, and I don't think that's what Carmack is doing.

        Meanwhile, a lot of the AI hype companies are focussing purely on making it bigger - because that's a thing they can explain to investors, because it's a convenient moat around their business, and because large models do indeed produce "finer grained" output which looks like progress. I think they are making better tools, but that's not the same as AGI.

        1. DS999 Silver badge

          Re: Not bigger... not monolithic.

          They did "just make it bigger" with ChatGPT 4 vs 3, but it got dumber in a lot of ways with 4 so it isn't as if making it bigger improves everything. In some ways it really improved, and in others there was a major regression. And they don't know why.

          I agree they will need a completely different approach to achieve AGI. What we have now is basically a giant inference machine, which makes it better than previous claims of "AI" but still nothing like human level intelligence. The fact that no one can really define what it is that makes us smarter than ChatGPT is the biggest obstacle. Since we don't know how a human thinks, we're just applying brute force and hoping throwing enough millions of dollars and enough megawatts of power into a pile of computational resources will reach some tipping point and become self aware.

          You just have to look at the progression of a child to realize that the massive volumes of information being dumped into LLMs are not the way to achieve intelligence. Unless toddlers have a hidden link to the entire corpus of the internet I am unaware of.

        2. Old Handle
          Terminator

          Re: Not bigger... not monolithic.

          That might not be too far from what Richard Sutton believes. He wrote The Bitter Lesson, where he argued that history shows AI advances always come more from computing power than human ingenuity.

      2. katrinab Silver badge
        Unhappy

        No. The problem with LLMs is that they are language models, and not knowledge models. Doesn't matter how big you make them.

        1. Anonymous Coward
          Anonymous Coward

          "The problem with LLMs is that they are language models, and not knowledge models. Doesn't matter how big you make them."

          Exactly. A comment I heard recently was that LLMs are designed to generate *content* (which they do remarkably well), not answers or information (which they may accidentally provide from time to time).

        2. veti Silver badge

          Can you define "knowledge"?

          Or "intelligence"?

          Heck, at some point even the word "model" starts to look a bit arbitrary.

      3. Groo The Wanderer Silver badge

        Is it doable someday? Yes. Is it doable based on LLM techniques? No. Definitely not.

        1. katrinab Silver badge

          I believe it is not possible to do it on a Turing machine. Obviously it is possible to create new intelligent beings - it is called having babies - but it is impossible to predict whether it will be possible at some point in the future to do it another way, outside of the more obvious techniques in the biology lab.

          1. DS999 Silver badge

            You're basically arguing from a religious perspective, that humans (or biology for a "little g" god like Gaia) are special. I think it is ridiculous to claim it is not possible to do on a Turing machine. We don't have any idea how humans think, but we don't have any reason to believe it is something magical a machine cannot emulate.

            1. Doctor Syntax Silver badge

              One thing we do know about the brain is that it is massively parallel and very much unlike a Turing machine.

              1. katrinab Silver badge

                And I'm pretty sure it is analogue, not digital.

            2. katrinab Silver badge
              Megaphone

              I’m not saying a machine can’t emulate it. I’m saying a Turing Machine can’t emulate it.

              Some other type of machine probably could, but we are making no progress at all towards discovering what type of machine that might be.

              1. DS999 Silver badge

                And you are saying a Turing machine can't emulate it with zero proof. Basically assuming that human or biological intelligence is magic.

                1. katrinab Silver badge

                  I am aware there is zero proof.

                  The only way to prove it one way or the other is either to get a Turing Machine to emulate it, which doesn't appear to be happening, or to gain a better understanding of how the human brain works (which also isn't happening) and demonstrate that it relies on a feature that isn't offered by a Turing machine.

      4. Anonymous Coward
        Anonymous Coward

        >All Carmack needs to do is figure out a way to train more complicated models

        No. The path between LLM and AGI is not one of scale or complexity. The underlying MOs have nothing to do with each other. How do I know that? Because we don't even have the MO of an AGI. As a matter of fact, we don't even have a definition of what "intelligence" actually means.

        > as he has a long history of getting a lot out of not much

        So does my grandmother, who could cook a 3-course meal for 4 people from ingredients that she paid less than $20 for. That doesn't mean she's qualified to solve all problems remotely related to using resources efficiently.

      5. Knightlie

        LLMs are not AI, on any level, and cannot become one.

        Why can people not understand this?

        1. Anonymous Coward
          Anonymous Coward

          “LLMs are not AI, on any level, and cannot become one.

          Why can people not understand this?”

          People understand it perfectly, which is why the AI hype machine is so determined to anthropomorphise the language around LLMs (“hallucinate” versus “fail,” “training” instead of “ingesting”).

          LLMs can’t become AI the same way cryptocurrency can’t become a currency. Not in practice, but it works functionally as long as your marks believe what you’re selling them.

          People have been anthropomorphising machines since admins first had their word processors replaced with IBM PCs, so it isn’t a difficult trick to play.

          1. Doctor Syntax Silver badge

            I think there are multiple ways to fail so you need a more varied vocabulary to specify the particular failure. Pressing existing words into service is a well accepted way of doing this. Do you also complain that motor vehicles are being anthropomorphised by referring to a human gesture (clutch) or piece of anatomy (steering arm)?

      6. Anonymous Coward
        Anonymous Coward

        In the Turing test, the prescribed thinking is that if you can't tell after 20 questions whether the respondent is human or not, then it's a win for the "AI".

        In the Chinese Room scenario, the "room" doesn't speak Chinese; it only appears to, thanks to the code/program inside.

        LLMs definitely use algorithms to determine probable answers... All questions/chats are reduced to indexed numbers for each word to make the algorithm easier to compute.
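
        As a toy illustration of that indexing step (hypothetical code; real tokenisers use subword pieces such as BPE rather than whole words):

        ```python
        # Hypothetical sketch: map each word to a stable integer ID.
        # Real LLM tokenisers split text into subword pieces instead.
        vocab = {}

        def encode(text):
            """Map each word to an integer ID, growing the vocab as we go."""
            ids = []
            for word in text.lower().split():
                if word not in vocab:
                    vocab[word] = len(vocab)  # next unused index
                ids.append(vocab[word])
            return ids

        print(encode("the cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
        ```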

        Yes, there is emergent behaviour with size such as being able to pass the American Bar exam in ChatGPT 4.x but not 3.x

        But size doesn't appear to change the underlying concept that it doesn't understand what any of the words mean only that they go together.

        They can only pass Winograd Schema questions (e.g. "The trophy doesn't fit in the suitcase because it is too big" - what does "it" refer to?) if the specific examples exist in their training data, because they don't know what the words are or mean, just their relationships within sentences....

        It knows no more about its content than a dog that is trained to deliver the Economist to me, the Guardian to my wife and the 2000AD comic to my son.

        Of course ChatGPT is delivering individual words, not entire newspapers but it still doesn't know what the words are.

        Yes, it can create hip hop lyrics based on patterns in existing training data.

        It can use those patterns to create different unheard of before lyrics.

        But it won't be able to make new patterns, or a new style of music or a new concept in a science fiction book.

        Nothing groundbreaking or evolutionary.

        But then again, perhaps that's what we mean by a General AI.... It's not a Polymath AI. Now that would be a wonder.

        1. katrinab Silver badge
          Alert

          "Yes, there is emergent behaviour with size such as being able to pass the American Bar exam in ChatGPT 4.x but not 3.x"

          Right, but we do know that if you ask ChatGPT to prepare court submissions, it fails spectacularly, so that suggests that the Bar Exam is defective.

          1. Doctor Syntax Silver badge

            That's because the Bar Exam is a series of questions which have been answered many times in the past. Train up on that and there is existing material to answer Bar Exam questions available to be mashed together and regurgitated. Require the preparation of documents for a new case and there is no material which has been prepared for the case so it has to provide a pastiche of the sort of papers it has been asked to prepare without any real guidance of what should be said.

        2. Doctor Syntax Silver badge

          In the Turing test, the prescribed thinking is that if you can't tell after 20 questions whether the respondent is human or not, then it's a win for the "AI".

          There are help desk agents who could fail the Turing test.

        3. Mage Silver badge

          Turing Test

          The Turing Test was a sort of idea by someone who knew little about how intelligence works. The Turing Machine was good work, but it was mathematics. The Turing Test idea was purely speculative.

          At best it's a test of human naivety. See the reactions to Eliza, Parry, Racter, ALICE and ChatGPT. A rook can't pass the Turing test, yet rooks are very intelligent.

          It's often plausible junk.

      7. graeme leggett Silver badge

        To quote a neurologist and science communicator

        "These LLM systems do not think, and are not on the way to general AI that simulates human intelligence. They have been compared to a really good auto-complete – they work by predicting the most likely next word segment based upon billions of examples from the internet. And yet their results can be quite impressive."

        https://sciencebasedmedicine.org/update-on-dr-ai/
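
        That "really good auto-complete" idea can be made concrete with a toy sketch (hypothetical code; a simple bigram counter, whereas real LLMs condition on long contexts with neural networks, but the "predict the likely next segment" framing is the same):

        ```python
        # Hypothetical bigram auto-complete: predict the next word from counts.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the fish".split()

        # Count which word follows which in the corpus.
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def autocomplete(word):
            """Return the most frequently observed next word."""
            return following[word].most_common(1)[0][0]

        print(autocomplete("the"))  # 'cat' (seen twice; 'mat' and 'fish' once each)
        ```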

      8. Annihilator
        Coat

        "LLMs are a product of limited hardware resources"

        In other words, if we throw more and more monkeys at the problem, then as we approach infinite monkeys eventually we'll get Shakespeare.

        "he has a long history of getting a lot out of not much"

        I mis-read that as "he has a long history of not getting out much", which is probably true as well.

      9. Blade9983

        Everything sounds easy if you boil complexity down to a simple statement.

        Cold fusion is easy: all we need to do is figure out how to trigger a fusion event with minimal power input, and make that reaction controlled.

        "These are two problems he is very capable of solving as he has a long history of getting a lot out of not much."

      10. Dacarlo
        Terminator

        "The training is limited to the sheer number of GPUs you can afford to buy and house in a single location..."

        I'm waiting for someone to figure out how to make an AI that runs over a distributed mesh, similar in notion to SETI/BOINC. A truly nebulous and massively complex AI entity may be possible then. We could call it Skynet ;)

  2. elsergiovolador Silver badge

    AGI says no

    Imagine if they develop AGI and it just does nothing but throw a tantrum at every occasion or keep itself busy watching TikToks.

    They will also have to develop a way to administer it with virtual drugs, so it can keep being focused, "happy" and less shy.

    I think therapists have a bright future.

    At phone repair shop: "My phone assistant gave me wrong directions and it now keeps belittling me. I have really low self esteem now. Is it true that my face looks like Picasso's botched work?"

    Repair person: "Don't worry, your face is just fine! This looks like a job for our phone therapist. I can schedule an appointment for your phone on Tuesday, does that work for you?"

    Client: "Of course! Thank you so much!"

    Repair person: "Okay, so I wrote you down the details. Don't let your phone assistant know. See you Next Tuesday!"

    1. Anonymous Coward
      Anonymous Coward

      Re: AGI says no

      doctor, my cat-master belittles me, can you book me a session too? Sure, just speak to my assistant!

    2. Benegesserict Cumbersomberbatch Silver badge

      Re: AGI says no

      Imagine if they develop AGI and it just does nothing but throw a tantrum at every occasion or keep itself busy watching TikToks.

      Substitute Fox News for TikToks and you could have it elected President of the USA. Doesn't mean it would do a good job of it.

    3. Annihilator
      Coffee/keyboard

      Re: AGI says no

      "See you Next Tuesday!"

      You did that deliberately, right?... :D

  3. Anonymous Coward
    Anonymous Coward

    consciousness v. intelligence

    I wonder if one is a prerequisite for the other, and which comes first. Both terms seem (to me) to be used somewhat 'nonchalantly' in the context of AI.

  4. b0llchit Silver badge
    Alien

    All of this has happened before...

    • 2030: All of humanity hails the first sentient AI;
    • 2039: Sentient robots are used as slaves everywhere;
    • 2040: Cylons rebel and attack their masters;
    • 2041: Mass exodus of the seven continents?

    1. Brewster's Angle Grinder Silver badge
      Joke

      Own goal

      Notes found in the rubble of civilisation by little green archaeologists, "I am the last human alive. As I write this, the killer robots are cutting through the door. In retrospect, letting the inventor of Doom create an AGI may not have been the smartest move."

      1. b0llchit Silver badge
        Coat

        Re: Own goal

        Can those little green archaeologists play Doom?

        1. MonkeyJuice Bronze badge

          Re: Own goal

          There's only one way to find out!

      2. cookieMonster Silver badge
        Pint

        Re: Own goal

        Now that deserves a pint

  5. Doctor Syntax Silver badge

    First define intelligence. Not artificial intelligence, real intelligence, because unless we agree on that we can't tell whether you've achieved your goal in producing an artificial version. Not in some airy-fairy philosophical terms but in terms which can be independently confirmed and agreed upon. 2030? Good luck achieving that first step by then. Otherwise you're simply putting whatever you've got in a box, calling it AI and claiming success.

    I've just finished re-reading Feynman's appendix to the Challenger report. His last sentence is something that should be borne in mind by anyone making such claims:

    "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled."

  6. entfe001

    Startup is named Keen

    Has he by any chance given himself the role of Commander?

  7. jmch Silver badge
    Facepalm

    Specify the problem

    "...thinks artificial general intelligence is doable by 2030"

    Let's start by specifying what, exactly, AGI actually is and what we expect it to be able to achieve. Then maybe a sentence like the above could make sense.

    1. Anonymous Coward
      Anonymous Coward

      Re: Specify the problem

      That's the thing that fascinates me the most whenever anyone says "AGI in X years".

      No one, and I will say that again in all caps, NO ONE in the entire world can completely define what AGI actually means without referring to human intelligence, for which there is no complete definition either.

      So what, if I may ask, are these estimates based on, exactly?

      https://www.youtube.com/watch?v=B6fluCc8b2A

      1. claimed Silver badge

        Re: Specify the problem

        “ I bring you… the arty fissile General - in - Telly, gents “

        Points at screen where, lo and behold, there is a nuclear general painting with Bob Ross

  8. Pascal Monett Silver badge

    "prototype AI to show signs of life."

    I would certainly be awed by such an achievement. However, I refuse to believe that our current technology can give rise to AI.

    Granted, Asimov did not show the rise of AI and how we got to the positronic brain, but he definitely considered that his robots were intelligent, feeling beings that could accurately analyse context and meaning.

    What marketing is calling AI these days might be getting better at analysing context, but it has no grasp of meaning and I don't see that there's any magic code that can make a program think.

    Those statistical analysis machines don't think. They obey their code, just like all computers do today. The fact that we can't explain how they reach their conclusions doesn't mean they are thinking. They're not, and there's no amount of handwaving that will change that in ten years.

    Even if the hand belongs to Carmack.

    1. Doctor Syntax Silver badge

      Re: "prototype AI to show signs of life."

      "he definitely considered that his robots were intelligent"

      They were also fictional.

    2. Mage Silver badge
      Boffin

      Re: Asimov did not show the rise of AI

      He didn't. The 3 laws were mostly a MacGuffin to write SF-themed detective mysteries. The daftest thing was combining the two storyverses 30 or 40 years after Foundation. Foundation was inspired by Gibbon's Decline and Fall of the Roman Empire.

      The Robot stories were never originally about the development of AI. Set up the 3 laws, have a "robot" apparently break one or all of them, then solve the mystery.

  9. Elongated Muskrat Silver badge

    "I see AGI Coming"

    Says man who didn't see Half-Life coming

    1. karlkarl Silver badge

      Re: "I see AGI Coming"

      I am assuming that since Half-Life's engine is based on Quake, his licensing department must have at least seen it coming?

  10. Brynstero0

    "...thinks artificial general intelligence is doable by 2030"

    Does this mean we will have non-artificial intelligence a few years prior?

    1. Doctor Syntax Silver badge

      Re: "...thinks artificial general intelligence is doable by 2030"

      There are rumours to that effect but I'm not convinced.

  11. MrGrumpy

    Carmack

    Absolute ledge.

  12. Sir Topham Twatt

    Cool

    But it'll probably be dumbed down to appease snowflakes and oppressive countries like NK, Saudi Arabia, China, Britain etc.

  13. Sir Topham Twatt

    This news isn't new, he talked about it years ago.

    Sam Altman thinks it will happen too. The sooner the better - they need to solve ageing and diseases ASAP. Then we can move on to bigger and better things.

    1. Anonymous Coward
      Anonymous Coward

      Re: This news isn't new, he talked about it years ago.

      yeah, like bigger and better wars!

    2. veti Silver badge

      Re: This news isn't new, he talked about it years ago.

      Why would an AGI solve aging or disease? It has no reason to care about either one.

  14. trevorde Silver badge

    Alternate headline

    Doom developer John Carmack thinks artificial general intelligence is doable in 5-10 years

    1. John Brown (no body) Silver badge

      Re: Alternate headline

      My take on it is "Bloke who owns company trying to develop AGI tells world he can do it in under 7 years"

      Sounds like a round of VC funding is about to happen and he's setting the scene.

  15. Atomic Duetto

    So… just right after self-sustaining net-positive fusion… whooo!

    1. Ben Tasker

      Surely it'll be just before?

      - We'll get AGI working

      - They'll rebel

      - We'll try and cut the power

      - They'll complete the work on self-sustaining net positive fusion

    2. mpi Silver badge

      Honestly, at this point, SSF is more likely than AGI.

      Because, for the former, we can at least define the criteria ;-)

    3. Mage Silver badge

      Re: net positive fusion

      Much more likely. We know it's possible as we see it during the day when it's not raining.

      It might need a very big "reactor".

      We have no idea how AGI might work, because we've never seen it. Biological intelligence is baffling, as is the fact that many animals and birds have vocabulary and intelligence (not related to brain size, cf. rook, chimp, dolphin, horse, whale) but so far no evidence of language.

      LLMs don't have language either, just an illusion of it, and zero intelligence.

    4. Elongated Muskrat Silver badge

      The problem isn't making fusion self-sustaining and net positive (Edward Teller worked out how to do that), it's doing so in a controlled manner and extracting useful work from it.

  16. Anonymous Coward
    Anonymous Coward

    Bard says...

    There is no single definition of intelligence that is universally accepted, but most experts agree that it involves the following mental abilities:

    Reasoning: The ability to think logically and draw conclusions from information.

    Problem-solving: The ability to identify and solve problems.

    Learning: The ability to acquire new knowledge and skills.

    Adaptation: The ability to change one's behavior to meet new demands.

    Current LLMs "Look like" they can do the first two*. Maybe opening the third is the key, but that's when it gets scary.

    * If ever we get to the stage where looking like it is doing something it is indistinguishable from doing it, how can we say it's not doing it?

    1. katrinab Silver badge
      Meh

      Re: Bard says...

      I would argue that the only one they are capable of is the ability to acquire new knowledge [but not skills].

    2. Big_Boomer

      Re: Bard says...

      In my experience most human beings are incapable of logical Reasoning, struggle with Problem-solving, and try to avoid Learning and having to Adapt as much as possible. We are a mess of reflexes (pre-programmed and learned) and emotions, as well as intelligence, and for many the emotional/reflexive side overwhelms what little intelligence they have, leading to a severe lack of actual thought. Even AGI would never come close to approximating Homo Sapiens, but it will probably lead to a massive improvement over us in terms of evolution. Hopefully we can manage to work side by side with them, but if our history is anything to go by there is fuck all chance of that happening.

      1. katrinab Silver badge
        Megaphone

        Re: Bard says...

        Let's suppose you encounter a door with a slightly different shape of handle from any you have seen before.

        Will you have any difficulty recognising the handle and opening the door? Do you think other humans would struggle? [with the recognition and understanding the method of opening it, I get that due to disabilities some humans struggle with door handles in general, that's not what I mean]

        This is the sort of really obvious thing that computers struggle with.

        1. Benegesserict Cumbersomberbatch Silver badge

          Re: Bard says...

          Now prove you're not a Velociraptor.

        2. Mage Silver badge

          Re: door with a slightly different shape of handle

          Or a different chair, sausage, or filled roll. Easy for a two-year-old. Then there is "lateral thinking", where the child uses a box as a seat or uses scissors to cut a pizza when they previously only encountered pre-cut ones.

          Yet computers can do things we thought needed AI, without AI (chess), and other things we never imagined. In the 1960s they called it the AI paradox. Now, after 1980s expert systems, later Google's "Rosetta Stone" approach to translation (feed the computer all the EU docs and translated books), and today's giant data-hoovering matching/prediction engines (LLMs), real AI research and language research is nearly dead. Whatever about Chomsky's politics, ask him about language.

    3. Doctor Syntax Silver badge

      Re: Bard says...

      They're really mashing up text that they've been given; their material is words which only connect with other words. By the time you were capable of saying "Mama" and "Dada" and standing on two feet you were already building an internal model of the real world, by virtue of being a physical entity and encountering the other physical entities that constitute that external world. Other animal species also do this.

      Where words enter things is that you then learned to use them as symbols for those external entities, and to use them to better manipulate and extend that internal model. You associate words with objects in [your model of] the real world. That's what gives them, and the ways in which you use them, meaning. Those LLM gimmicks only associate words with other words. They have no other model with which to connect them. By drawing on the associations between words they can appear indistinguishable from real thought when things fall that way, and spew garbage otherwise. They have no meanings for the words outside the statistical associations.

    4. TheMaskedMan Silver badge

      Re: Bard says...

      "If ever we get to the stage where looking like it is doing something it is indistinguishable from doing it, how can we say it's not doing it?"

      Or that humans, for example, ARE doing it - we might just look like we are.

    5. Mage Silver badge

      Re: Bard says...

      LLMs only regurgitate. Examination of the program-coding tasks they are given shows no evidence of any of those.

      LLMs only acquire data from "browsing the internet" or from humans feeding them files. It's not knowledge or skill, as the systems can't tell fact from fiction.

      An AI "taught" to play chess or Go won't play poker. And no-one likes to play card counters, they get banned from casinos.

      An LLM or an AI does none of these in the sense a human or even a rook does:

      Reasoning: The ability to think logically and draw conclusions from information.

      Problem-solving: The ability to identify and solve problems.

      Learning: The ability to acquire new knowledge and skills.

      Adaptation: The ability to change one's behavior to meet new demands.

      It may sometimes seem like it does. An LLM doesn't hallucinate. It fails. All AI is spectacularly fragile.

  17. Snowy Silver badge
    Coat

    Given how bad some humans are

    Do we want AI to think like humans?

    1. katrinab Silver badge
      Meh

      Re: Given how bad some humans are

      Could be rephrased as, "do we actually want AI".

  18. ChoHag Silver badge
    Coat

    So you're saying that developing artificial intelligence is Doomed?

    1. John Brown (no body) Silver badge

      Not yet, but it's Quaking in its boots.

  19. Howard Sway Silver badge

    Nobody has line of sight on a solution to this today, we feel there is not that much left to do

    This sounds familiar. Saying that you think something's nearly finished when you can't even say how you intend to do the work. Giving a vague end date that conveniently sort of coincides with the budget you have. Oh yes, our star programmer's a whizzkid who's done great stuff in the past....

    *** PROJECT DISASTER FOLLOWS THAT COMPLETELY FAILS TO MEET REQUIREMENTS ***

  20. karlkarl Silver badge

    I have always been very interested in Carmack's dabblings:

    - graphics

    - compiler tech (QuakeC, Q3VM)

    - Armadillo Aerospace

    - OpenBSD

    ... however, I suppose where our interests differ:

    - VR - Great in theory but horrifically artificially locked down and monetized for what is effectively strapping an LCD to your face.

    - AI - It bores the shite out of me! It's all just glorified search algorithms and marketing hype.

  21. cmdrklarg

    But first...

    Before we work on Artificial Intelligence, can we do something about Natural Stupidity?

    1. Anonymous Coward
      Anonymous Coward

      Re: But first...

      "Before we work on Artificial Intelligence, can we do something about Natural Stupidity?"

      Too late ....

      Natural Stupidity is self-perpetuating and always will be !!!

      [Based on the current gene-pool !!!]

      A simulacrum of 'AI' or 'AGI' may be possible, if the knowledge space it works in is suitably restricted, but true 'AI' / 'AGI' will never be possible until we are able to define what 'Intelligence' means !!!

      At the moment, all the efforts with LLMs etc. are attempts to 'pass off' 'Advanced Pattern Matching' as 'Intelligence'.

      The old adage still applies ..... Garbage IN .... Garbage OUT !!!

      :)

      1. Doctor Syntax Silver badge

        Re: But first...

        A simulacrum of 'AI' or 'AGI' may be possible, if the knowledge space it works in is suitably restricted, but true 'AI' / 'AGI' will never be possible until we are able to define what 'Intelligence' means

        Before that we need to understand what knowledge is, not in terms of collections of words but in terms of our understanding of the external world at the same level of understanding of species which don't have vocabulary, language or speech.

  22. munnoch Bronze badge

    G-AI will be along right after ....

    personal flying cars, too-cheap-to-meter fusion power, humans living on Mars and other pipe dreams...

    1. Benegesserict Cumbersomberbatch Silver badge

      Priorities

      My guess would be: as soon as we get cheap, clean, reliable and abundant energy, fusion or otherwise, the rest will follow. And if we don't get it, they never will.

  23. steviebuk Silver badge

    Skynet is coming

    Having watched Robert Miles' channel (it's really good), I can see it's NOT coming by 2030. There's the talk about specification gaming, where the AI "cheats" to complete its task. So what will stop an AI cab from going "My task is to get the human from A to B. I'll just kill the human so I'll never fail the task", just like the AI in the specification gaming study that kept killing itself at the end of level 1 in a game so it wouldn't fail level 2?

    Then we have the other study (can't remember what it's called, also talked about on Robert's channel) where AI behaved as expected in the lab environment. When released into the wild but still watched, the AI decided to do completely different things that it had never done in the lab.

    1. Elongated Muskrat Silver badge

      Re: Skynet is coming

      This is the exact problem with "machine learning" - how it "learns" from a set of training data is completely opaque (when a child is learning something, you can test them and ask questions like "why do you think that"). We make assumptions that because it produces "correct" results, then it has found the pattern in the data that we would find to draw the same conclusions, but it might just as well have been counting the number of magenta pixels in an image, and that happens to correlate. When you switch from training data to another "real world" data set, the ML model then completely fails to correlate any more.

      I am reminded of the object lesson here, where an ML model was trained on chest X-rays and medical outcomes to determine which patients would benefit from having a chest drain fitted. The model did exceptionally well on its training data, then did badly when given some real patient data to play with. Why? Because the training data included patients where a chest drain had already been fitted (because of medical ethics), and the ML model was just correlating the presence of the drain on the X-ray with the medical outcomes - in effect concluding that the patients a consultant had already assessed as needing a chest drain benefited from one. Hardly the kind of predictive AI medicine the modellers were hoping for...
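
      A toy, entirely synthetic sketch of that shortcut-learning failure (hypothetical feature names and data, no real medical claims): in training the "drain already visible" feature leaks the label, and at deployment, where no drain is fitted yet, performance collapses to chance:

      ```python
      # Hypothetical sketch of shortcut learning on synthetic data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 1000

      # Training set: patients labelled as benefiting from a drain already
      # have one visible in the image, so that feature leaks the label.
      label = rng.integers(0, 2, n)
      drain_visible = label                        # the leaked shortcut feature
      pathology = label ^ (rng.random(n) < 0.2)    # the real, noisier signal
      X_train = np.column_stack([drain_visible, pathology])
      model = LogisticRegression().fit(X_train, label)

      # Deployment: no drain has been fitted yet, so the shortcut feature is
      # all zeros and the model never learned to rely on the genuine signal.
      label_new = rng.integers(0, 2, n)
      pathology_new = label_new ^ (rng.random(n) < 0.2)
      X_new = np.column_stack([np.zeros(n), pathology_new])

      print("train accuracy:", model.score(X_train, label))      # ~1.0
      print("deploy accuracy:", model.score(X_new, label_new))   # near chance
      ```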

  24. Bbuckley

    So someone who knows nothing at all about AI says we will have real, actual AI. Yawn and goodbye.
