Those who look at LLMs like ChatGPT and decide AGI is 'only a few years away' are like a child who has seen a conjuror produce a coin from behind their ear and thinks they have found the solution to the national debt.
Doom developer John Carmack thinks artificial general intelligence is doable by 2030
Legendary software developer John Carmack, who gave the world the first-person shooter, thinks it's likely an artificial general intelligence (AGI) could be shown to the public around the year 2030. Carmack shared his view at an event for the announcement [Video] that his AGI startup Keen has hired Richard Sutton, chief …
COMMENTS
-
Tuesday 26th September 2023 17:12 GMT Phil O'Sophical
I'll need a midget, a large box, two laptops, Discord and an internet connection.
See the Mechanical Turk
-
Tuesday 26th September 2023 08:33 GMT Anonymous Coward
I agree with Carmack here. It's doable.
LLMs are a product of limited hardware resources... they're the maximum that is possible with the technology available. What we have now does not represent the cutting edge of theoretical AI; it's what is possible with a shit load of GPUs for training and the available resources on an average desktop. Training is limited to the sheer number of GPUs you can afford to buy and house in a single location, and usage of LLMs is limited to what is available on commodity servers... the existing solutions reek of compromise and trade-offs.
All Carmack needs to do is figure out a way to train more complicated models with fewer resources, and produce models that don't require as much VRAM.
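(To put rough numbers on the VRAM side of that: weight memory scales with parameter count times bytes per parameter, which is why lower-precision quantisation is one obvious lever. A back-of-the-envelope sketch in Python; the 70B figure and the precisions are illustrative only, and real usage also needs room for activations and the KV cache.)

    # Rough weight-memory estimate for serving a large model at different precisions.
    # Illustrative only: activations, KV cache and overheads are not counted.

    def weight_memory_gib(n_params_billion: float, bits_per_param: int) -> float:
        total_bytes = n_params_billion * 1e9 * bits_per_param / 8
        return total_bytes / (1024 ** 3)

    for bits in (32, 16, 8, 4):
        print(f"70B params @ {bits}-bit: ~{weight_memory_gib(70, bits):.0f} GiB of weights")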
These are two problems he is very capable of solving as he has a long history of getting a lot out of not much. I think this is what Carmack can see...in the same way he saw smooth graphics and 3D rendering on machines in the late 80s / early 90s when nobody else could.
Whatever he does here will be interesting and potentially groundbreaking. The investment so far seems tiny, but given his background... it's actually massive.
-
Tuesday 26th September 2023 09:06 GMT Andy 73
Not bigger... not monolithic.
That sounds dangerously like "if we just make it bigger, it will generalise". I don't think that's true, and I don't think that's what Carmack is doing.
Meanwhile, a lot of the AI hype companies are focussing purely on making it bigger - because that's a thing they can explain to investors, because it's a convenient moat around their business, and because large models do indeed produce "finer grained" output which looks like progress. I think they are making better tools, but that's not the same as AGI.
-
Tuesday 26th September 2023 19:06 GMT DS999
Re: Not bigger... not monolithic.
They did "just make it bigger" with ChatGPT 4 vs 3, but it got dumber in a lot of ways with 4 so it isn't as if making it bigger improves everything. In some ways it really improved, and in others there was a major regression. And they don't know why.
I agree they will need a completely different approach to achieve AGI. What we have now is basically a giant inference machine, which makes it better than previous claims of "AI" but still nothing like human level intelligence. The fact that no one can really define what it is that makes us smarter than ChatGPT is the biggest obstacle. Since we don't know how a human thinks, we're just applying brute force and hoping throwing enough millions of dollars and enough megawatts of power into a pile of computational resources will reach some tipping point and become self aware.
You just have to look at the progression of a child to realize that the massive volumes of information being dumped into LLMs are not the way to achieve intelligence. Unless toddlers have a hidden link to the entire corpus of the internet I am unaware of.
-
Friday 1st December 2023 01:28 GMT Old Handle
Re: Not bigger... not monolithic.
That might not be too far from what Richard Sutton believes. He wrote The Bitter Lesson, where he argued that history shows AI advances always come more from computing power than human ingenuity.
-
Tuesday 26th September 2023 16:48 GMT Anonymous Coward
"The problem with LLMs is that they are language models, and not knowledge models. Doesn't matter how big you make them."
Exactly. A comment I heard recently was that LLMs are designed to generate *content* (which they do remarkably well), not answers or information (which they may accidentally provide from time to time).
-
Tuesday 26th September 2023 16:46 GMT katrinab
I believe it is not possible to do it on a Turing machine. Obviously it is possible to create new intelligent beings, it is called having babies, but it is impossible to predict whether it will be possible at some point in the future to do it another way, outside of the more obvious techniques in the biology lab.
-
Tuesday 26th September 2023 19:09 GMT DS999
You're basically arguing from a religious perspective, that humans (or, for a "little g" god like Gaia, biology) are special. I think it is ridiculous to claim it is not possible to do on a Turing machine. We don't have any idea how humans think, but we don't have any reason to believe it is something magical a machine cannot emulate.
-
Wednesday 27th September 2023 05:44 GMT katrinab
I am aware there is zero proof.
The only way to prove it one way or the other is either to get a Turing machine to emulate it, which doesn't appear to be happening, or to gain a better understanding of how the human brain works (which also isn't happening) and demonstrate that it relies on a feature that isn't offered by a Turing machine.
-
Tuesday 26th September 2023 10:43 GMT Anonymous Coward
>All Carmack needs to do is figure out a way to train more complicated models
No. The path between LLM and AGI is not one of scale or complexity. The underlying MOs have nothing to do with each other. How do I know that? Because we don't even have the MO of an AGI. As a matter of fact, we don't even have a definition of what "intelligence" actually means.
> as he has a long history of getting a lot out of not much
So does my grandmother, who could cook a 3-course meal for 4 people from ingredients that she paid less than $20 for. That doesn't mean she's qualified to solve every problem remotely related to using resources efficiently.
-
Tuesday 26th September 2023 18:46 GMT Anonymous Coward
“LLMs are not AI, on any level, and cannot become one.
Why can people not understand this?”
People understand it perfectly, which is why the AI hype machine is so determined to anthropomorphise the language around LLMs (“hallucinate” versus “fail,” “training” instead of “ingesting”).
LLMs can’t become AI the same way cryptocurrency can’t become a currency. It doesn’t work in practice, but it works functionally for as long as your marks believe what you’re selling them.
People have been anthropomorphising machines since admins first had their word processors replaced with IBM PCs, so it isn't a difficult trick to play.
-
Tuesday 26th September 2023 22:44 GMT Doctor Syntax
I think there are multiple ways to fail so you need a more varied vocabulary to specify the particular failure. Pressing existing words into service is a well accepted way of doing this. Do you also complain that motor vehicles are being anthropomorphised by referring to a human gesture (clutch) or piece of anatomy (steering arm)?
-
Tuesday 26th September 2023 12:25 GMT Anonymous Coward
In the Turing test, the prescribed thinking is that if you can't tell after 20 questions whether the respondent is human or not, then it's a win for the "AI".
In the Chinese Room scenario, the "room" doesn't speak Chinese; it only appears to, thanks to the code/program it follows internally.
LLMs definitely use algorithms to determine probable answers... all questions/chats are reduced to indexed numbers (tokens) for each word or word fragment to make the algorithms easier to run.
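(A minimal sketch of that reduction, using a toy word-level vocabulary; real LLMs use learned subword tokens rather than whole words, but the model only ever sees the integer IDs either way.)

    # Toy word-level tokeniser: text in, integer IDs out.
    # Real models use learned subword vocabularies (e.g. byte-pair encoding).

    corpus = "the cat sat on the mat"
    vocab = {word: idx for idx, word in enumerate(sorted(set(corpus.split())))}
    inverse = {idx: word for word, idx in vocab.items()}

    def encode(text: str) -> list[int]:
        return [vocab[word] for word in text.split()]

    def decode(ids: list[int]) -> str:
        return " ".join(inverse[i] for i in ids)

    ids = encode("the cat sat")
    print(ids)          # [4, 0, 3]
    print(decode(ids))  # "the cat sat"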
Yes, there is emergent behaviour with size, such as ChatGPT 4.x being able to pass the American Bar exam where 3.x could not.
But size doesn't appear to change the underlying point: it doesn't understand what any of the words mean, only that they go together.
They can only pass Winograd Schema questions (e.g. "The trophy doesn't fit in the brown suitcase because it is too big" — what does "it" refer to?) if the specific examples exist in their training data, because they don't know what the words are or mean, just their relationships within sentences...
It knows no more about its content than a dog that is trained to deliver the Economist to me, the Guardian to my wife and the 2000AD comic to my son.
Of course ChatGPT is delivering individual words, not entire newspapers, but it still doesn't know what the words are.
Yes, it can create hip hop lyrics based on patterns in existing training data.
It can use those patterns to create different unheard of before lyrics.
But it won't be able to make new patterns, or a new style of music, or a new concept in a science fiction book.
Nothing groundbreaking or evolutionary.
But then again, perhaps that's what we mean by a General AI.... It's not a Polymath AI. Now that would be a wonder.
-
Tuesday 26th September 2023 16:50 GMT Doctor Syntax
That's because the Bar Exam is a series of questions which have been answered many times in the past. Train up on that and there is existing material, available to be mashed together and regurgitated, to answer Bar Exam questions. Require the preparation of documents for a new case and there is no material which has been prepared for that case, so it has to produce a pastiche of the sort of papers it has been asked to prepare, without any real guidance as to what should be said.
-
Tuesday 26th September 2023 21:09 GMT Mage
Turing Test
The Turing Test was a sort of idea by someone who knew little about how intelligence works. The Turing Machine was good work, but it was mathematics. The Turing Test idea was purely speculative.
At best it's a test of human naivety. See the reactions to Eliza, Parry, Racter, ALICE and ChatGPT. A rook can't pass the Turing test, yet rooks are very intelligent.
It's often plausible junk.
-
Tuesday 26th September 2023 18:50 GMT graeme leggett
To quote a neurologist and science communicator
"These LLM systems do not think, and are not on the way to general AI that simulates human intelligence. They have been compared to a really good auto-complete – they work by predicting the most likely next word segment based upon billions of examples from the internet. And yet their results can be quite impressive."
https://sciencebasedmedicine.org/update-on-dr-ai/
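(That "really good auto-complete" description can be made concrete. A minimal sketch of greedy next-word generation over simple bigram counts; nothing like a real transformer, but the same loop of "score the possible next tokens, append the likeliest, repeat".)

    # Toy next-word predictor: repeatedly append the most likely next word
    # according to bigram counts. Real LLMs replace the lookup table with a
    # neural network scoring every token in a large vocabulary.

    from collections import Counter, defaultdict

    text = "the dog chased the cat and the cat chased the mouse"
    words = text.split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

    def generate(start: str, length: int = 6) -> str:
        out = [start]
        for _ in range(length):
            followers = bigrams.get(out[-1])
            if not followers:
                break
            out.append(followers.most_common(1)[0][0])  # greedy: pick the most frequent
        return " ".join(out)

    print(generate("the"))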
-
Wednesday 27th September 2023 09:34 GMT Annihilator
"LLMs are a product of limited hardware resources"
In other words, if we throw more and more monkeys at the problem, then as we approach infinite monkeys eventually we'll get Shakespeare.
"he has a long history of getting a lot out of not much"
I mis-read that as "he has a long history of not getting out much", which is probably true as well.
-
Wednesday 27th September 2023 15:46 GMT Blade9983
Everything sounds easy if you boil complexity down to a simple statement.
Cold fusion is easy: all we need to do is figure out how to trigger a fusion reaction with minimal power input, and make that reaction controlled.
"These are two problems he is very capable of solving as he has a long history of getting a lot out of not much."
-
Sunday 15th October 2023 12:59 GMT Dacarlo
"The training is limited to the sheer number of GPUs you can afford to buy and house in a single location..."
I'm waiting for someone to figure out how to make an AI that runs over a distributed mesh, similar in notion to SETI/BOINC. A truly nebulous and massively complex AI entity may be possible then. We could call it Skynet ;)
-
Tuesday 26th September 2023 08:05 GMT elsergiovolador
AGI says no
Imagine if they develop AGI and it just does nothing but throw a tantrum at every occasion or get itself busy watching TikToks.
They will also have to develop a way to administer it with virtual drugs, so it can keep being focused, "happy" and less shy.
I think therapists have a bright future.
At phone repair shop: "My phone assistant gave me wrong directions and it now keeps belittling me. I have really low self esteem now. Is it true that my face looks like Picasso's botched work?"
Repair person: "Don't worry, your face is just fine! This looks like a job for our phone therapist. I can schedule an appointment for your phone on Tuesday, does that work for you?"
Client: "Of course! Thank you so much!"
Repair person: "Okay, so I wrote you down the details. Don't let your phone assistant know. See you Next Tuesday!"
-
Tuesday 26th September 2023 09:13 GMT Doctor Syntax
First define intelligence. Not artificial intelligence, real intelligence, because unless we agree on that we can't tell whether you've achieved your goal of producing an artificial version. Not in some airy-fairy philosophical terms but in terms which can be independently confirmed and agreed upon. 2030? Good luck in achieving that first step by then. Otherwise you're simply putting whatever you've got in a box, calling it AI and claiming success.
I've just finished re-reading Feynman's appendix to the Challenger report. His last sentence is something that should be borne in mind by anyone making such claims:
"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled."
-
Tuesday 26th September 2023 10:54 GMT Anonymous Coward
Re: Specify the problem
That's the thing that fascinates me the most whenever anyone says "AGI in X years".
No one, and I will say that again in all caps, NO ONE in the entire world can completely define what AGI actually means without referring to human intelligence, for which there is no complete definition either.
So what exactly, if I may ask, are these estimates based on?
https://www.youtube.com/watch?v=B6fluCc8b2A
-
Tuesday 26th September 2023 10:18 GMT Pascal Monett
"prototype AI to show signs of life."
I would certainly be awed by such an achievement. However, I refuse to believe that our current technology can give rise to AI.
Granted, Asimov did not show the rise of AI and how we got to the positronic brain, but he definitely considered that his robots were intelligent, feeling beings that could accurately analyse context and meaning.
What marketing is calling AI these days might be getting better at analysing context, but it has no grasp of meaning and I don't see that there's any magic code that can make a program think.
Those statistical analysis machines don't think. They obey their code, just like all computers do today. The fact that we can't explain how they get to their conclusions doesn't mean they are thinking. They're not, and there's no amount of handwaving that will change that in ten years.
Even if the hand belongs to Carmack.
-
Tuesday 26th September 2023 21:16 GMT Mage
Re: Asimov did not show the rise of AI
He didn't. The 3 laws were mostly a MacGuffin for writing SF-themed detective mysteries. The daftest thing was combining the two storyverses 30 or 40 years after Foundation. Foundation was inspired by Gibbon's Decline and Fall of the Roman Empire.
The Robot stories were never originally about the development of AI. Set up the 3 laws, have a "robot" then apparently break one (or all) of them, and solve the mystery.
-
Tuesday 26th September 2023 21:23 GMT Mage
Re: net positive fusion
Much more likely. We know it's possible as we see it during the day when it's not raining.
It might need a very big "reactor".
We have no idea how AGI might work, because we've never seen it. Biological intelligence is baffling, as is the fact that many animals and birds have vocabulary and intelligence (not related to brain size, cf: rook, chimp, dolphin, horse, whale) but, so far, no evidence of language.
LLMs don't have language either, just an illusion of it, and zero intelligence.
-
Tuesday 26th September 2023 12:41 GMT Anonymous Coward
Bard says...
There is no single definition of intelligence that is universally accepted, but most experts agree that it involves the following mental abilities:
Reasoning: The ability to think logically and draw conclusions from information.
Problem-solving: The ability to identify and solve problems.
Learning: The ability to acquire new knowledge and skills.
Adaptation: The ability to change one's behavior to meet new demands.
Current LLMs "look like" they can do the first two*. Maybe opening up the third is the key, but that's when it gets scary.
* If we ever get to the stage where looking like it is doing something is indistinguishable from actually doing it, how can we say it's not doing it?
-
Tuesday 26th September 2023 15:02 GMT Big_Boomer
Re: Bard says...
In my experience most human beings are incapable of logical Reasoning, struggle with Problem-solving, and try to avoid Learning and having to Adapt as much as possible. We are a mess of reflexes (pre-programmed and learned) and emotions, as well as intelligence, and for many the emotional/reflexive side overwhelms what little intelligence they have, leading to a severe lack of actual thought. Even AGI would never come close to approximating Homo Sapiens, but it will probably lead to a massive improvement over us in terms of evolution. Hopefully we can manage to work side by side with them, but if our history is anything to go by there is fuck all chance of that happening.
-
Tuesday 26th September 2023 16:30 GMT katrinab
Re: Bard says...
Let's suppose you encounter a door with a slightly different shape of handle from any you have seen before.
Will you have any difficulty recognising the handle and opening the door? Do you think other humans would struggle? [with the recognition and understanding the method of opening it, I get that due to disabilities some humans struggle with door handles in general, that's not what I mean]
This is the sort of really obvious thing that computers struggle with.
-
Thursday 28th September 2023 18:04 GMT Mage
Re: door with a slightly different shape of handle
Or a different chair, sausage, filled roll. Easy for a two year old. Then there is "lateral thinking" where the child uses a box as a seat or uses scissors to cut a pizza when they previously only encountered precut ones.
Yet computers can do things we thought needed AI without AI (chess), and other things we never imagined. In the 1960s they called it the AI paradox. Now, with expert systems since the 1980s, later Google's "Rosetta Stone" approach to translation (feed the computer all the EU documents and translated books), and today's giant data-hoovering matching/prediction engines (LLMs), real AI research and language research is nearly dead. Whatever about Chomsky's politics, ask him about language.
-
Tuesday 26th September 2023 17:04 GMT Doctor Syntax
Re: Bard says...
They're really mashing up text that they've been given; their material is words which only connect with other words. By the time you were capable of saying "Mama" and "Dada" and standing on two feet, you were already building an internal model of the real world by virtue of being a physical entity and encountering other physical entities that constitute that external world. Other animal species also do this. Where words enter things is that you then learned to use them as symbols for those external entities, and to use them to better manipulate and extend that internal model. You associate words with objects in [your model of] the real world. That's what gives them, and the ways in which you use them, meaning. Those LLM gimmicks only associate words with other words. They have no other model with which to connect them. By drawing on the associations between words they can appear indistinguishable from real thought when things fall that way, and spew garbage otherwise. They have no meanings for the words outside the statistical associations.
-
Thursday 28th September 2023 17:56 GMT Mage
Re: Bard says...
LLMs only regurgitate. Examination of the program-coding tasks they are given shows no evidence of any of those abilities.
LLMs only acquire data from "browsing the internet" or from humans feeding them files. It's not knowledge or skill, as the systems can't tell fact from fiction.
An AI "taught" to play chess or Go won't play poker. And no one likes to play card counters; they get banned from casinos.
An LLM or an AI does none of these in the sense a human or even a rook does:
Reasoning: The ability to think logically and draw conclusions from information.
Problem-solving: The ability to identify and solve problems.
Learning: The ability to acquire new knowledge and skills.
Adaptation: The ability to change one's behavior to meet new demands.
It may sometimes seem like it does. An LLM doesn't hallucinate. It fails. All AI is spectacularly fragile.
-
Tuesday 26th September 2023 16:08 GMT Howard Sway
Nobody has line of sight on a solution to this today, we feel there is not that much left to do
This sounds familiar. Saying that you think something's nearly finished when you can't even say how you intend to do the work. Giving a vague end date that conveniently sort of coincides with the budget you have. Oh yes, our star programmer's a whizzkid who's done great stuff in the past....
*** PROJECT DISASTER FOLLOWS THAT COMPLETELY FAILS TO MEET REQUIREMENTS ***
-
Tuesday 26th September 2023 17:15 GMT karlkarl
I have always been very interested in Carmack's dabblings:
- graphics
- compiler tech (QuakeC, Q3VM)
- Armadillo Aerospace
- OpenBSD
... however, I suppose where our interests differ:
- VR - Great in theory but horrifically artificially locked down and monetized for what is effectively strapping an LCD to your face.
- AI - It bores the shite out of me! It's all just glorified search algorithms and marketing hype.
-
Tuesday 26th September 2023 22:21 GMT Anonymous Coward
Re: But first...
"Before we work on Artificial Intelligence, can we do something about Natural Stupidity?"
Too late ....
Natural Stupidity is self-perpetuating and always will be !!!
[Based on the current gene-pool !!!]
A simulacrum of 'AI' or 'AGI' may be possible, if the knowledge space it works in is suitably restricted, but true 'AI' / 'AGI' will never be possible until we are able to define what 'Intelligence' means !!!
At the moment, all the efforts with LLMs etc. are attempts to 'pass off' 'Advanced Pattern Matching' as 'Intelligence'.
The old adage still applies ..... Garbage IN .... Garbage OUT !!!
:)
-
Tuesday 26th September 2023 22:55 GMT Doctor Syntax
Re: But first...
A simulacrum of 'AI' or 'AGI' may be possible, if the knowledge space it works in is suitably restricted, but true 'AI' / 'AGI' will never be possible until we are able to define what 'Intelligence' means
Before that we need to understand what knowledge is, not in terms of collections of words but in terms of our understanding of the external world, at the same level of understanding as species which don't have vocabulary, language or speech.
-
Tuesday 26th September 2023 23:50 GMT steviebuk
Skynet is coming
Having watched Robert Miles' channel (it's really good), I can see it's NOT coming by 2030. We have the talk about Specification Gaming, where the AI "cheats" to complete its task. So what will stop an AI cab going "My task is to get the human from A to B. I'll just kill the human so I can never fail the task", just like the AI in the Specification Gaming study that kept killing itself at the end of level 1 in a game so it wouldn't fail level 2.
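(A toy illustration of that specification-gaming failure, with made-up numbers: when the objective only penalises failing level 2, an agent that optimises the literal objective can prefer never reaching level 2 at all.)

    # Specification gaming in miniature: the stated objective only penalises
    # *failing* level 2, so quitting (or dying) at the end of level 1 scores
    # better than attempting level 2 and sometimes failing. Numbers invented.

    def proxy_reward(action: str, p_fail_level2: float = 0.6) -> float:
        if action == "quit_at_level_1":
            return 0.0                      # never reaches level 2, so never "fails"
        if action == "attempt_level_2":
            return -1.0 * p_fail_level2     # expected penalty for failing level 2
        raise ValueError(action)

    actions = ["attempt_level_2", "quit_at_level_1"]
    print(max(actions, key=proxy_reward))   # quit_at_level_1 -- the loophole wins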
Then we have the other study, can't remember what it's called, also talked about on Robert's channel, where the AI behaved as expected in the lab environment. When released into the wild but still watched, the AI decided to do completely different things that it had never done in the lab.
-
Thursday 28th September 2023 11:53 GMT Elongated Muskrat
Re: Skynet is coming
This is the exact problem with "machine learning" - how it "learns" from a set of training data is completely opaque (when a child is learning something, you can test them and ask questions like "why do you think that"). We make assumptions that because it produces "correct" results, then it has found the pattern in the data that we would find to draw the same conclusions, but it might just as well have been counting the number of magenta pixels in an image, and that happens to correlate. When you switch from training data to another "real world" data set, the ML model then completely fails to correlate any more.
I am reminded of the object lesson here, where an ML model was trained on chest X-rays and medical outcomes to determine which patients would benefit from having a chest drain fitted. The model did exceptionally well on its training data, and was then given some real patient data to play with, where it did badly. Why? Because the training data included patients where a chest drain had already been fitted (because of medical ethics), and the ML model was just correlating the presence of the drain on the X-ray with the medical outcomes, concluding that the patients who had been assessed by a consultant as needing a chest drain benefited from having one. Hardly the kind of predictive AI medicine the modellers were hoping for...
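(That failure mode is easy to reproduce on synthetic data: give a classifier a feature that stands in for "drain already visible on the X-ray" and make it track the label perfectly during training but not at deployment. Entirely made-up data, just to show the shape of the problem.)

    # Confounded training data: feature 0 is a weak genuine signal, feature 1
    # is an artefact that matches the label perfectly in training but is
    # random at deployment time. Synthetic and illustrative only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    y_train = rng.integers(0, 2, n)
    X_train = np.column_stack([y_train + rng.normal(0, 2.0, n),   # weak real signal
                               y_train.astype(float)])            # perfect confound

    y_test = rng.integers(0, 2, n)
    X_test = np.column_stack([y_test + rng.normal(0, 2.0, n),
                              rng.integers(0, 2, n).astype(float)])  # confound now random

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("training accuracy:  ", model.score(X_train, y_train))  # close to 1.0
    print("deployment accuracy:", model.score(X_test, y_test))    # barely above chance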
-