OpenAI snaps up role-playing game dev as first acquisition

OpenAI has acquired its first company, Global Illumination, creators of an online role-playing game that has been compared to Minecraft. The financial details of the deal were not disclosed. OpenAI announced the acquisition in a short statement, and said the entire Global Illumination team, which appears to be made up of …

  1. Pascal Monett Silver badge
    Flame

    Stop misusing that term

    "Maybe it's possible they can become more intelligent if they can embody and experience a digital one"

    No. Stop encouraging people to think that statistical analysis machines have any intelligence at all.

    Just because we have no idea how they reach their conclusions doesn't mean there is an iota of intelligence inside the box. There isn't.

    It's just mathematical rules coming together in some way that confuses people. Besides, mention statistics and 90% of the room is already asleep, drooling.

    You wouldn't say that a movement detector is intelligent because it detects someone entering and turns on the light?

    What you're calling AI is no different from the movement detector. The movement detector just has less circuitry.

    1. FeepingCreature

      Re: Stop misusing that term

      We don't understand how intelligence works. That means, frankly, we have no idea whether or not LLMs are intelligent. It could well be - in fact, it seems plausible to me - that all intelligence, human intelligence included, is based on "statistical analysis" and "mathematical rules coming together".

      1. TheMaskedMan Silver badge

        Re: Stop misusing that term

        Exactly. Nothing, intelligence included, simply exists. It must be the product of physical processes which, once understood, will no doubt be subject to mathematical description. Replication will follow, and at that point we will have artificial intelligence.

        What we have now is an attempt to replicate the result without understanding the means by which it is produced - a sort of cargo cult intelligence.

        Of course, there is a slim chance that we could accidentally hit upon the required method through trial and error. You could also argue that if it looks like a duck and quacks like a duck, it doesn't matter whether it's really a duck. Unfortunately, impressive though they are in their own way, tools like ChatGPT may have a certain duck-like appearance, but tend to randomly quack like a cow.

        We're not there yet, and I doubt we will be until organic intelligence is understood. But when it is, I strongly suspect that statistical analysis will be a large part of the process.

        1. Andrew Hodgkinson

          Re: Stop misusing that term

          Yeah, it's definitely not intelligence.

          ChatGPT was famous for producing maths such as "2+2=5", along with the usual bland yet verbose "explanation" of why it was correct. It was all gibberish, of course. Why does it make this mistake? Because it doesn't know what "2" is, or what "+" is, or what "=" is, or what "5" is. It doesn't know what numbers are. It doesn't know any of the rules of mathematics at all. It has no idea what right or wrong are either, so it can't know that it is in error, even when told as much; knowing that would require some means of understanding what being wrong means, which it lacks. That's why it so often argues back: it's just stats-matching training-set text from occasions when some people told other people they were wrong. Ever seen online "discussions"? When someone says "you're wrong", someone else pretty much always argues back.

          The reason it might assert 2+2=<anything> is because that's a maths-y thing which looks statistically like other maths-y things and a lot of the maths-y things which had "2+2" in them said "4". But sometimes people say stuff like, "hey that's nonsense, it's as wrong as saying 2+2=5". And thus, we have "2+2=5" in the training data now, so there's this small stats-based chance (based on billions of other bits of input and nuances that are beyond our own ability to reason about simply because of the vastness of the data set) that the ML system might, indeed, state "2+2=5".
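
          To make that concrete, here's a minimal sketch in Python (my own toy illustration, not how any real LLM is built): a "model" that completes "2+2=" purely by sampling continuations in proportion to how often they follow that string in a tiny training corpus. One sarcastic "2+2=5" in the corpus is enough to give the wrong answer a small but nonzero probability.

          import random
          from collections import Counter

          # Toy "training corpus": mostly correct sums, plus one sarcastic
          # counter-example of the kind that turns up in web text.
          corpus = [
              "2+2=4", "2+2=4", "2+2=4", "2+2=4", "2+2=4",
              "2+2=4", "2+2=4", "2+2=4", "2+2=4",
              "hey, that's as wrong as saying 2+2=5",
          ]

          # Tally whichever character follows "2+2=" in each document.
          continuations = Counter()
          for text in corpus:
              idx = text.find("2+2=")
              if idx != -1:
                  continuations[text[idx + 4]] += 1  # len("2+2=") == 4

          def complete(prompt="2+2="):
              """Sample a continuation purely by training-set frequency.

              No arithmetic happens here: "4" wins only because it is the
              most common continuation; "5" keeps a small, nonzero chance.
              """
              tokens = list(continuations.keys())
              weights = list(continuations.values())
              return prompt + random.choices(tokens, weights=weights)[0]

          # Each completion has a 1-in-10 chance of coming out as "2+2=5".
          print([complete() for _ in range(10)])

          Scale the corpus up by a few billion documents and the frequencies get far subtler, but the answer is still a weighted draw, not a calculation.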

          It's a stochastic parrot, full stop. No matter how many times people hand-wave and say "we don't know what intelligence is", that's just deflection. We certainly do know that part of our intelligence is based on knowing rules and understanding them; indeed, earlier AGI studies (1970s-90s era or thereabouts, then just "AI") were often based around teaching rules and drawing inferences from them. A person knows what an integer is, the rules governing integers and what addition means, and so knows, without a shadow of a doubt, that 2+2=4, because that person understands the governing rules and the nature of every part of that statement... once taught those rules, that is! The trouble is, teaching a machine a lifetime's worth of rules turns out to be very, *VERY* hard to do even with modern computing power - the biggest problem, I think, is assembling a machine-readable training set of such accuracy and detail in the first place, rather than creating a computer system capable of processing that data.
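
          For contrast, here's the rule-based flavour of that era in miniature (again, my own sketch, not code from any actual 70s-90s system): a checker that verifies "a+b=c" by applying the rules of integer addition, so it is correct for any integers, including sums that appear in no corpus at all.

          def check_sum(statement):
              """Verify a claim of the form "a+b=c" by applying the
              rules of integer addition, not by matching seen text."""
              left, claimed = statement.split("=")
              a, b = (int(part) for part in left.split("+"))
              return a + b == int(claimed)

          print(check_sum("2+2=4"))           # True - by rule, not by vote
          print(check_sum("2+2=5"))           # False, however often it's repeated
          print(check_sum("1234+5678=6912"))  # True even if never seen before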

          But, good news! We discovered a party trick. Enter generative AI, AKA ML.

          Even OpenAI themselves acknowledge that ChatGPT is indeed a party trick - that it only gives right answers by accident, readily makes up nonsense and should never be used for anything that requires correct answers - but never let a product's limitations get in the way of marketing's lies and the holy grail of sweet, sweet profit. Microsoft have a whopping great stake in OpenAI, so - surprise! - suddenly ChatGPT is in front of Bing, a search engine that's supposed to give accurate answers. The early tsunami of stories about Bing consequently returning rubbish was an inevitable outcome. It will still be doing it, helping to spread misinformation and worsen the problem globally, but it's all old news now, so you don't hear about it.

          We can carry on refining this junk, at least so long as there's ever more *human*-generated content online to train on, but it'll still be lipstick on a pig. Like the fun artificial landscape generators of the past, such as Terragen, or entertaining old-school "human-like chat" bots such as Eliza way back, it'll still hit its limit. Interestingly, with ML-generated material now spewing out over the web like a broken sewer main over a highway, actually finding new human-authored material to add to existing ML training datasets has become an awful lot harder than it was. We might already be quite close to the peak of these systems' capabilities as a result.
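
          On that last point, a toy illustration of why retraining on your own output worries people - this is my sketch of the much-discussed "model collapse" effect, not something the vendors have conceded: fit a one-parameter "model" to a finite sample from its predecessor, repeat, and the parameters random-walk away from the original distribution, with the spread picking up a slight downward bias each generation.

          import random
          import statistics

          # Generation 0 is the "human" distribution; every later
          # generation is fitted only to 50 samples from its predecessor.
          mu, sigma = 0.0, 1.0
          for gen in range(1, 101):
              samples = [random.gauss(mu, sigma) for _ in range(50)]
              mu = statistics.mean(samples)      # drifts further each generation
              sigma = statistics.stdev(samples)  # slightly biased low, on average
              if gen % 20 == 0:
                  print(f"after {gen} generations: mu={mu:+.3f} sigma={sigma:.3f}")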

          1. very angry man

            Re: Stop misusing that term

            I couldn't read more than the first paragraph; it sounds like you were describing every politician I have ever heard.

          2. FeepingCreature

            Re: Stop misusing that term

            All of the counterexamples you gave are of course perfectly normal human behavior. We ask a question, we guess an answer, we confabulate a justification, right or wrong. If you say you've never seen a human do that, I claim you've forgotten school. Inasmuch as GPT seems to do it more, and more aggressively, than humans, I submit it's because it doesn't "know itself" - it "thinks" it's a human, and it thinks humans instinctively give correct answers to big math problems. (The part where we stop typing and grab a pocket calculator, unfortunately, is not part of the training material.)

            It doesn't have a feel for its own mind, and it can't recognize "the sorts of mistakes it makes". How could it? That isn't part of its training. All the problems we're seeing with LLMs are a direct downstream consequence of the approach of training with a giant human-produced corpus.

          3. Kimmie

            Re: Stop misusing that term

            So your argument is that ChatGPT has a better understanding of math than a kindergartener, but less than a 5th grader. Gotcha.

        2. very angry man

          Re: Stop misusing that term

          > We're not there yet, and I doubt we will be until organic intelligence is understood. But when it is, I strongly suspect that statistical analysis will be a large part of the process.

          Organic intelligence, while not understood, is hunted down and beaten out of anyone who possesses it early on in the educational system.

        3. FeepingCreature

          Re: Stop misusing that term

          I guess my brain tends to produce cowlike thoughts at a sufficient rate that I'm skeptical of the idea that the occasional presence of a moo means a system is not a duck, er, human, er, intelligence.

          I don't think GPT is done, and I don't think it's on a human level. But it seems plausible to me that it's "the right sort of thing" for a general intelligence. When I imagine looking back on the first AGI in the future history books, I'd expect it to differ from current approaches primarily in training and evaluation, not model design.

          1. that one in the corner Silver badge

            Re: Stop misusing that term

            > When I imagine looking back on the first AGI in the future history books, I'd expect it to differ from current approaches primarily in training and evaluation, not model design

            Whereas I expect to see the inverse!

            I really hope to see models moving away from piles of nadans and embracing mechanisms with explanatory power and use of same for introspection.

            Training by reading huge gobs of material is OK, so long as the appropriate metadata is included (e.g. "this is a textbook, we accept it is generally correct about its subject material" and "this is a novel, its intent is entertainment").
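
            Something like this, perhaps - a hypothetical sketch with invented field names, purely to show the shape of the idea:

            from dataclasses import dataclass

            @dataclass
            class TrainingDoc:
                text: str
                genre: str              # e.g. "textbook", "novel", "forum post"
                treat_as_factual: bool  # may its claims be taken as fact?

            docs = [
                TrainingDoc("Water boils at 100 degrees C at sea level.",
                            genre="textbook", treat_as_factual=True),
                TrainingDoc("The dragon boiled the sea with a single breath.",
                            genre="novel", treat_as_factual=False),
            ]

            # A trainer could then downweight non-factual sources when
            # learning facts, while still using them for style and tone.
            factual_only = [doc for doc in docs if doc.treat_as_factual]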

            1. Anonymous Coward
              Anonymous Coward

              Re: Stop misusing that term

              "Whereas I expect to see the inverse!"

              Sir, sir! They are fighting again!

            2. that one in the corner Silver badge

              Re: Stop misusing that term

              > Training by reading huge gobs of material

              Legally acquired material only!

              (Too late to edit but this isn't an afterthought, honest, I'd never read an illicit book copy, really, cross my heart, never even heard of bittorrent!).

    2. that one in the corner Silver badge

      Re: Stop misusing that term

      Preface: as expressed before, I think that LLMs are neato tech demos but are not a good approach for the problems they are being "applied" to (just chucking a chatbot over the fence is hardly a carefully thought-out application of - anything).

      > What you're calling AI is no different from ...

      As noted before, "if we know how to do it, then it is no longer a question of AI". Programmed behaviours that were once the subject of deep thinking in AI research groups are now just boring, day-to-day features (face detection in your camera, playing chess...).

      All you are doing is trying to (re)define what the term "AI" means and then arguing (strongly) against LLMs based on your interpretation. I'm not sure precisely what your interpretation is, but it certainly appears to lie very close to the "hard AI" (as it was called) end of the scale - possibly even requiring LLMs to satisfy the "General AI" criteria (whatever they may be today).

      Anyway, redefining terms is a trick anyone can do.

      Yours appears to be based (as implied above) on fixating upon the word "intelligence" and *where* it is applied: specifically, on the "deep" content of the LLMs replies. As opposed to, say, its ability to (usually) generate coherent text, in a variety of tones and topics, ignoring what it was actually rambling on about (an artificial Grandpa Simpson?).

      Having "an AI" inside a program released to the public has never, so far, actually included a GAI, but that doesn't mean it doesn't contain AI. Coding AI Opponents into video games doesn't warrant angry letters to the editor (or does it? I may be entirely out of touch!).

      Even OpenAI are not claiming they have a GAI on their hands. The Register certainly isn't.

    3. Michael Wojcik Silver badge

      Re: Stop misusing that term

      > It's just mathematical rules coming together in some way that confuses people.

      Cool. Now support the warrant that this is qualitatively different from how human intelligence works.

      I am not impressed by LLMs myself, but this dualist "I don't know what intelligence is, but it's not this" line you insist on repeating is just sophomoric bullshit. Could you please either come up with a substantive argument, or just cut it the fuck out? We've all read it a hundred times now.
