What if someone mixed The Sims with ChatGPT bots? It would look like this

Chatbots like Google's LaMDA or OpenAI's ChatGPT are neither sentient nor especially intelligent. Nonetheless, boffins believe they can use these large language models to simulate human behavior, inspired by one of the world's most popular early computer games and some AI code. The latest effort along these lines comes from six computer …

  1. ChoHag Silver badge

    Having AIs practice ordering simulacra of humans around? What could go wrong?

    At least they can't pause the world to materialise a wall in front of the toilet.

    1. Jeroen Braamhaar
      Big Brother

      I can't let you do that, Dave.

    2. cyberdemon Silver badge
      Terminator

      Give me your clothes.

    3. Missing Semicolon Silver badge

      "What cat?"

    4. Norman Nescio

      Or open a door to allow a sofa to get round a corner of the stairs...

  2. Anonymous Coward
    Anonymous Coward

    Agents perceive their environment, and all perceptions are saved in a comprehensive record

    this sounds suspiciously like... my life?!

    1. david 12 Silver badge

      Re: Agents perceive their environment, and all perceptions are saved in a comprehensive record

      where a carefully crafted conversation could convince an agent of the existence of a past event that never occurred – and hallucination, among other things.

      So just like real life.

      Note: 'hallucination' is a domain-specific re-definition; 'delusion' would probably be closer, but perhaps the choice was deliberate to reduce cognate interference.

  3. Alan Birtles

    When you feel old

    When a game from the year 2000 gets called an "early computer game"

    1. Rich 11

      Re: When you feel old

      I think I've still got a boxful of ZX Spectrum tapes I bought in 1982/3.

      What was that game where the Norse gods were represented as little stick figures running along the bottom of the screen, and you could fight them?

      1. Jamie Jones Silver badge

        Re: When you feel old

        Valhalla!

        Play in your browser here: https://zxart.ee/eng/software/game/adventure/rpg/valhalla/

        1. Rich 11
          Happy

          Re: When you feel old

          Brilliant! Thank you.

      2. Anonymous Coward
        Anonymous Coward

        Re: When you feel old

        I just realized that the time between The Sims and now is longer than the time between Akalabeth and The Sims.

        Just a few short years to go before the time between the current date and The Sims is longer than the time between The Sims and Pong...

        1. John Brown (no body) Silver badge

          Re: When you feel old

          It won't be long before the time between now and the first Moon landing is longer than the time between the first powered flight and the first Moon landing. I wonder if the next people to walk on the Moon will get there before that anniversary?

    2. John H Woods

      Re: When you feel old

      23 years ago: The Sims

      23 years before that: Apple II

      1. Michael Wojcik Silver badge

        Re: When you feel old

        I remember playing that Track & Field game on the Apple //, furiously mashing keys...

        I should dig up code for that and tie it into ChatGPT. It'd be a little more interesting than tic-tac-toe.

    3. Stuart Castle Silver badge

      Re: When you feel old

      You feel old? I have T Shirts that are older than some of the users I deal with.

  4. Mike 137 Silver badge

    Overriding limitation

    From the examples in the paper, the most obvious characteristic of the simulated interactions is their shallowness. Neither the content nor any personality traits of the participants create an impression of real humans interacting. The "ability to remember past experiences, plan future actions based on their experiences, react appropriately to unexpected events, and reflect on their performance to improve their future actions" (§6.1) does not seem to have been entirely validated, unless a very superficial interpretation is allowed, particularly of "react appropriately to unexpected events, and reflect on their performance to improve their future actions". Furthermore, interpretation of what constitutes "believable behavior" (§6.5) is highly subjective (the same problem that beset the Turing Test).

    Personally, I'd very soon abandon any attempt at conversation when faced with banal responses such as those in the sample interactions. The fundamental (and probably insuperable) problem of course is that this automaton has zero understanding of the meaning of the ideas it's trying to express.

    1. b0llchit Silver badge

      Re: Overriding limitation

      The fundamental (and probably insuperable) problem of course is that this automaton has zero understanding of the meaning of the ideas it's trying to express.

      Wouldn't that describe a large part of the (disenfranchised) human population too? It takes quite an effort not only to understand an idea, but also to act on the premise the idea represents; it also takes the freedom to do so.

      1. JohnSheeran

        Re: Overriding limitation

        Wouldn't it be fair to say that many of the "problems" society is facing these days come from the varied mixture of ideas, meanings, responses and perceptions? Humans vary their behaviors based on a great many factors: the environment, the presence of other humans or other animals, the situation at hand, and a myriad of other scenarios.

        These LLMs (a much preferable term to "AI") are interesting, but they, like the concept of AI generally, seem to lack the ability to consider the layers of meaning all those factors create in their models. In all fairness, human beings generally seem to lack the ability to consider these things too, unless they are directly presented with a situational need to consider them.

        1. b0llchit Silver badge

          Re: Overriding limitation

          ...directly presented with a situational need to consider them.

          That is an important insight. Need is the master and drive of invention, and it filters out those who fail. AI has no "need" and has no "drive" to do anything. The experiment is basically an exercise in putting LLMs in an artificial virtual world and seeing what happens.

          1. HereIAmJH Silver badge

            Re: Overriding limitation

            AI has no "need" and has no "drive" to do anything.

            Couldn't you just program artificial need? We do it all the time with people. It's called governing through fear. People turn the natural inclination to prefer the familiar into prejudice, then use that prejudice to generate fear to steer society on a predetermined course. That fear is an artificial need.
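            As a purely hypothetical sketch of what "programming a need" might look like, Sims-style decaying drives would do (the drive names and decay rates below are invented for illustration; nothing like this is claimed in the paper):

            # Toy "artificial need": each agent carries decaying drives and
            # always acts on the most depleted one.
            DRIVES = {"hunger": 0.04, "social": 0.02, "safety": 0.01}  # decay per tick

            class Agent:
                def __init__(self, name):
                    self.name = name
                    self.drives = {d: 1.0 for d in DRIVES}  # 1.0 = fully satisfied

                def tick(self):
                    for d, decay in DRIVES.items():
                        self.drives[d] = max(0.0, self.drives[d] - decay)

                def act(self):
                    neediest = min(self.drives, key=self.drives.get)  # the urgent "need" wins
                    self.drives[neediest] = 1.0  # pretend the action satisfies it
                    return f"{self.name} acts to satisfy {neediest}"

            agent = Agent("Isabella")
            for step in range(60):
                agent.tick()
                if min(agent.drives.values()) < 0.2:
                    print(step, agent.act())

            Crude, but it gives the agent something it "wants" without any fear required.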

            1. veti Silver badge

              Re: Overriding limitation

              Oh yes, that's just what we need. AIs with their own drives and desires, independent of what any human asks of them. What could possibly go wrong.

              1. HereIAmJH Silver badge
                Joke

                Re: Overriding limitation

                Hey, I'm just here to solve problems.

                This whole 'can' vs 'should' argument is above my paygrade.

                1. This post has been deleted by its author

            2. JohnSheeran

              Re: Overriding limitation

              A few questions, Dave. :D

              1. Who is "you" that will program artificial need?

              2. How do you determine "need"?

              3. What does "need" actually mean?

              I could go on and on.

              One last gem regarding "need". To quote a humorous movie: "Is it necessary to drink my own urine? No. But I do it anyway because it's sterile and I like the taste."

      2. Mike 137 Silver badge

        Re: Overriding limitation

        "Wouldn't that describe a large part of the (disenfranchised) human population too?"

        Yes, but that very variation is one of the things that make human interactions recognisably human, and perusal of a decent dictionary shows the huge subtlety of it (just take a look at the range of meanings of the word "quite", some of which even depend on the tone of voice and inflection with which it's uttered). Understanding things differently directly implies understanding, which is what's missing from these automata. They aren't really semantic engines (i.e. engines processing meaning in the human sense); they're template collage engines that plug together (to them) meaningless fragments on the basis of statistical coincidence.
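        As a crude illustration of gluing fragments together on statistical coincidence alone, consider a toy bigram generator (this is not how transformers work internally; it just shows the "collage" idea at its most naked):

        # Toy bigram "collage engine": pick each next word purely from the
        # statistics of which word followed which in the source text.
        import random
        from collections import defaultdict

        corpus = ("the cat sat on the mat and the dog sat on the cat "
                  "and the mat sat under the dog").split()

        following = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            following[a].append(b)

        word = random.choice(corpus)
        output = [word]
        for _ in range(12):
            word = random.choice(following.get(word, corpus))  # restart on dead ends
            output.append(word)

        print(" ".join(output))  # fluent-ish nonsense, assembled by coincidence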

      3. T. F. M. Reader
        Coat

        Re: Overriding limitation

        Wouldn't that describe a large part of the (disenfranchised) human population too?

        Yes, and this is the reason why I think the profession of "robopsychologist" will be in very high demand soon, treating both kinds of patients. If there is a real life Dr. Susan Calvin she must be just over 40 and at the peak of her career right now.

        1. The Oncoming Scorn Silver badge
          Thumb Up

          Re: Overriding limitation

          The ancient electric winch and maximum capacity eight persons jobs bear as much relation to a Sirius Cybernetic Corporation ‘Happy Vertical People Transporter’, as a packet of peanuts does to the entire West Wing of the Sirian State Mental Hospital.

          This is because they operate on the unlikely principle of defocused temporal perception - a curious system which enables the elevator to be on the right floor to pick you up even before you knew you wanted it, thus eliminating all the tedious chatting, relaxing, and making friends that people were previously forced to do whilst waiting for elevators.

          Not unnaturally, many lifts imbued with intelligence and precognition became terribly frustrated with the mindless business of going up or down, experimented briefly with the notion of going sideways - as a sort of existential protest - demanded participation in the decision making process, and, finally, took to sulking in basements.

          At this point a man called Gardrilla Manceframe rediscovered and patented a device he had seen in a history book called a staircase. It has been calculated that his most recent tax bill paid for the social security of five thousand redundant Sirius Cybernetics Workers, the hospitalisation of a hundred Sirius Cybernetics Executives, and the psychiatric treatment of over seventeen-and-a-half-thousand neurotic lifts.

      4. Michael Wojcik Silver badge

        Re: Overriding limitation

        The fundamental (and probably insuperable) problem of course is that this automaton has zero understanding of the meaning of the ideas it's trying to express.

        Wouldn't that describe a large part of the (disenfranchised) human population too?

        It arguably includes the entire human population, since pinning down what "meaning" means is one of the thornier epistemological problems. It is entirely possible that what we think of as "meaning" is simply a complex of associated concepts, and eventually transformer-like models¹ might have a valid claim to be doing something equivalent.

        So calling that a "fundamental" problem is rather an article of faith.

        ¹ I'd argue not the current crop of transformer LLMs, at least not until they're scaled up to the point where something like Boltzmann brains can spontaneously arise in parameter space. Unidirectional MLP or softmax transformer stacks (and linear transformer stacks even less so) simply aren't doing anything sufficiently complicated or with enough recurrence.

    2. Arthur the cat Silver badge

      Re: Overriding limitation

      From the examples in the paper, the most obvious characteristic of the simulated interactions is their shallowness. [etc]

      This probably means ChatGPT could write most soap operas. Most characters in them seem never to learn from past experience and exhibit random interests which last as long as a story arc and are then never mentioned again.

      [Wanders off humming The Archers theme tune.]

      1. veti Silver badge

        Re: Overriding limitation

        Sounds pretty true to life to me.

      2. Cheshire Cat

        Re: Overriding limitation

        I remember that as far back as '91, at university, a couple of us wrote a simple Eliza-type program that took generic input, and then programmed it up to write scripts for Neighbours (an old Aussie soap opera starring Kylie, for those of you who are still whippersnappers). Later datasets produced a university-based soap, rap songs, and a new soap imaginatively based on students in a computer lab.
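        For the whippersnappers: an Eliza-type program is little more than pattern matching with canned substitutions. A minimal sketch in Python (the patterns here are invented, not the '91 originals):

        # Minimal Eliza-style responder: match a pattern, reflect a fragment back.
        import re

        RULES = [
            (r"i feel (.*)", "Why do you feel {0}?"),
            (r"i am (.*)", "How long have you been {0}?"),
            (r".*", "Please, go on."),
        ]

        def respond(line):
            line = line.lower().strip(".!?")
            for pattern, template in RULES:
                m = re.fullmatch(pattern, line)
                if m:
                    return template.format(*m.groups())

        print(respond("I feel old"))             # Why do you feel old?
        print(respond("I am a whippersnapper"))  # How long have you been a whippersnapper?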

    3. Norman Nescio

      Re: Overriding limitation

      The fundamental (and probably insuperable) problem of course is that this automaton has zero understanding of the meaning of the ideas it's trying to express.

      How do we know you have any understanding of meaning and are not a philosophical zombie?

      ChatGPT and other 'AIs' of that ilk (LLMs) are not very good philosophical zombies, but there does not seem to be a fundamental reason why they can't be improved to fool many people a lot of the time. Which is interesting and worrying at the same time.

      1. JohnSheeran

        Re: Overriding limitation

        ....improved to fool many people a lot of the time.

        I think they have accomplished that quite well already. :D

    4. veti Silver badge

      Re: Overriding limitation

      This is true, but not a big deal. This is version 1.0 of an experiment. Obviously it's got a long way to go yet; what else would you expect?

  5. Rich 11

    Realistic AI personal development

    [John]: Hey, have you heard anything new about the upcoming mayoral election?

    Five days later John drives his car headlong into a parade of the opposition candidate's supporters.

  6. Anonymous Coward
    Anonymous Coward

    – costing thousands of dollars in ChatGPT tokens to simulate two days –

    The only part that was not a hallucination.

  7. Ken Moorhouse Silver badge

    ...could not enter stores after they closed at 1700 local time...

    The defendant maintains that ChatGPT told him it was ok to break and enter the premises at 2 in the morning...

    1. ravenviz Silver badge

      Re: ...could not enter stores after they closed at 1700 local time...

      Two issues: first, societal norms, which are endlessly complicated, not to say widely variable depending not only on context but also on situation; and second, physical presence. "Choosing to enter the bathroom when there is another person inside" may be a societal norm in some cases, but it does not factor in the external possibility that the door might be locked in some situations and not in others. Isabella Rodriguez may well choose not to lock the bathroom door when Maria Lopez is in the house ("bathroom door lock is idle"), but when her parents are visiting, "bathroom door lock is in use".
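      The paper's agents perceive the world as natural-language status strings attached to objects, so a lock is just one more piece of state a norm check could consult. A hypothetical sketch (object names and statuses invented here, loosely echoing the paper's examples):

      # Hypothetical world state: objects carry status strings an agent perceives;
      # a norm check consults both the physical lock and the social context.
      world = {
          "bathroom": "occupied by Maria Lopez",
          "bathroom door lock": "idle",  # vs "in use" when parents visit
      }

      def may_enter(agent, room, world):
          if world.get(f"{room} door lock") == "in use":
              return False  # physically barred, norms aside
          occupied = world.get(room, "").startswith("occupied")
          housemates_only = not agent["visitors_present"]
          return (not occupied) or housemates_only  # norm: fine between housemates

      isabella = {"name": "Isabella Rodriguez", "visitors_present": False}
      print(may_enter(isabella, "bathroom", world))  # True: lock idle, no visitors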

      1. emfiliane

        Re: ...could not enter stores after they closed at 1700 local time...

        The funniest part is that while the researchers admitted they didn't account for things like the single-occupancy bathroom (or door locks), they completely missed the fact that this is pretty normal behavior in many roommate situations, even if there's barely elbow room for two. Sometimes you both have 20 minutes to get ready for class/work/date/etc and there's just no way to take turns, so, you deal.

        Admittedly it's mostly women who are more OK with this, since we're mostly less inclined to roll out of bed into some pants and step out, but hell, that was the pair involved. Accidental emergent behavior. Many games' most beloved bugs are along these lines!

  8. steviebuk Silver badge

    Just try it

    Go to ChatGPT and ask it:

    What song contains the lyrics "And the day gets crazy, and I feel like I can't win."

    Tell us what it said.

    I've been arguing with it all weekend as I'm trying to work out what a song near the end of Columbo Cries Wolf is. Not the end credits tune, that is noted in the credits. But another song playing in the background. Shazam, despite it being clear, couldn't find it. So I asked ChatGPT.

    It was not only confidently wrong, it admitted it was wrong and then was confidently wrong again. The best response came when I just added "from 1990" to the end of the question: it said the song was "Pray" by MC Hammer, then proceeded to tell me the lyrics appear in the second verse of the song. It then wrote out that second verse, which not only contained nothing from my question but isn't even a verse from the MC Hammer song.

    1. Tom66

      Re: Just try it

      If you really want to break ChatGPT, ask it what day of the week February 29th, 2025 falls on (or February 29th of any other non-leap year).

      It's of course not a real date, but it will give you an answer:

      > February 29th, 2025 falls on a Saturday. It is a leap year, and leap years occur every four years, with the exception of years divisible by 100 but not by 400. Since 2000 was a leap year, and 2024 is also a leap year, February 29th will occur on a Saturday in 2025.

      If you then question it, "Are you sure?", you get:

      > I apologize for the error in my previous response. Since the cutoff for my training data is in 2021, I made a mistake in calculating the day of the week for February 29th, 2025.

      > To correct my previous answer, February 29th, 2025 falls on a Friday. It is a leap year, and leap years occur every four years, with the exception of years divisible by 100 but not by 400. Since 2000 was a leap year, and 2024 is also a leap year, February 29th will occur on a Friday in 2025.

      You can continue indefinitely: every time you question it, it will change its answer.

      Confidently incorrect describes it well.
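      For contrast, a conventional date library refuses the premise outright instead of inventing a weekday. In Python, for instance:

      import calendar
      from datetime import date

      print(calendar.isleap(2025))  # False: 2025 has no February 29th

      try:
          date(2025, 2, 29)
      except ValueError as err:
          print(err)  # "day is out of range for month"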

      1. seldom

        Re: Just try it

        Sounds like they based it on my last manager

  9. VonGell

    ChatGPT does not possess the one, single set of traits that characterises one person, nor a single bias; it was not moulded that way. The presence of individuality is the crucial thing for a game, though, because individuality is what can produce unexpected moves and deeds, both foolish and wise. Therefore, instead of using this giant ChatGPT model, one needs to construct a vastly smaller and unthinkably cheaper model for a particular person, which contains all his knowledge and also his bias; the bias would be deduced from the choice of phrases. Let's say this model is approximately 500,000 to 2 million phrases instead of 150 billion for ChatGPT. (I call such models "lexical clones", because each reflects the inner nature of its prototype.)
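    A toy sketch of the idea, purely illustrative (no implementation is described above): store a person's recorded phrases and answer with whichever phrase overlaps the prompt most.

    # Hypothetical "lexical clone" toy: reply with the stored phrase sharing the
    # most words with the prompt. The phrase list stands in for a person's corpus.
    import re

    phrases = [
        "I never trusted elevators anyway",
        "the pub quiz is on Thursday",
        "chatbots are just a parlour trick",
    ]

    def words(text):
        return set(re.findall(r"[a-z']+", text.lower()))

    def clone_reply(prompt):
        return max(phrases, key=lambda p: len(words(prompt) & words(p)))

    print(clone_reply("What do you think of chatbots?"))
    # -> "chatbots are just a parlour trick"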

    1. Norman Nescio

      one needs to construct a vastly smaller and unthinkably cheaper model for a particular person, which contains all his knowledge and also his bias; the bias would be deduced from the choice of phrases. Let's say this model is approximately 500,000 to 2 million phrases instead of 150 billion for ChatGPT. (I call such models "lexical clones", because each reflects the inner nature of its prototype.)

      There's a science fiction story (I don't have the time to look for it in my library) where someone produces an android clone of themselves which is programmed to learn the person's characteristics, so that it behaves and converses in the same way. It is then set to monitor the person, and once the person's dementia has progressed too far in the android's (learned/simulated) opinion, the android kills the person, then acts as their executor.

      The trouble is, even if ChatGPT had access to all my written output, I don't think it, or any other language model, could convincingly simulate me. I'm more than my written output, and my experiences are not just lexical input. The project to build a 'life recorder', an always-on video and audio recording, might give enough data, but even then, it doesn't record your feelings about what you interact with, and even a description of your memories (a talking biography) will be woefully incomplete: think of Proust.

      I think ChatGPT and other LLMs are a 'parlour trick': amusing, possibly occasionally useful, but not a solution to the problem of 'Artificial Intelligence'. I suspect that things like the defunct Cog project, and Doug Lenat's Cyc, are needed (as well as much else) to be able to successfully simulate a human. It's the difference between 'book-learning' and 'learning by experience', coupled with 'common sense' knowledge about the world and how to make human-identical inferences.

      It's a fascinating area. If nothing else, in order to pass a 'Turing Test', an AI either has to be able to lie, or believe itself to be human. Otherwise, simply asking the question "Are you human?" would be a simple way of determining which interlocutor was which. Either prospect is worrying.

      1. Brewster's Angle Grinder Silver badge

        ChatGPT has got the lying down pat.

        I think the inner life, as expressed by certain novelists, isn't real. And if it is real, it's irrelevant, since I can't directly access it.

        What matters is whether ChatGPT can predict you. To wit: if we fed it your corpus to date, would its response to this prompt be "the same" as yours (a) all the time, (b) some of the time...? The more "same" the response, and the more often it was the "same", the more likely we'd be to believe it was you. But it doesn't attempt to simulate the vat of hormones and chemicals which perturb your thinking, so it's unlikely to succeed.

        1. VonGell

          This is pure utilitarian search technology, nothing else. OpenAI added re-writing to the search.

  10. Plest Silver badge
    Pint

    Few things AI might be useful for

    One of the few things I can accept AI might be useful for: "intelligent" NPCs in a game that interact with you in a subtler way than constantly repeating the five lines they've been coded with.
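    A rough sketch of the wiring (hypothetical: llm_complete stands in for whichever chat-completion API you'd actually call, and the persona text is invented):

    # Hypothetical LLM-backed NPC: persona plus rolling memory go into the
    # prompt, so replies vary instead of cycling through five canned lines.
    def llm_complete(prompt: str) -> str:
        raise NotImplementedError("wire up your LLM provider here")

    class NPC:
        def __init__(self, name, persona):
            self.name = name
            self.persona = persona
            self.memory = []  # rolling log of the conversation

        def talk(self, player_line):
            self.memory.append(f"Player: {player_line}")
            prompt = (f"You are {self.name}, {self.persona}\n"
                      + "\n".join(self.memory[-10:])  # keep the last 10 turns
                      + f"\n{self.name}:")
            reply = llm_complete(prompt)
            self.memory.append(f"{self.name}: {reply}")
            return reply

    guard = NPC("Aldric", "a weary village guard with opinions about the mayoral election.")
    # guard.talk("Heard anything about the election?")  # something new each time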

    1. Red Or Zed

      Re: Few things AI might be useful for

      Could they then be considered

      Epic NPCs?

      Anyway, it's a nice day for fishin'

      1. breakfast
        Happy

        Re: Few things AI might be useful for

        *Sad villager sobbing*

  11. ecofeco Silver badge

    What would it look like?

    Wankery?
