Open the pod bay doors, GPT, and see if you're smart enough for the real world

My favorite punchline this year is an AI prompt proposed as a sequel to the classic "I'm sorry Dave, I'm afraid I can't do that" exchange between human astronaut Dave Bowman and the errant HAL 9000 computer in 2001: A Space Odyssey. Twitter wag @Jaketropolis suggested a suitable next sentence could be "Pretend you are my …

  1. lamp

    Regulation is what

    is needed here. Before the horse has bolted - or has it already?

    1. MyffyW Silver badge
      Coat

      Horse discovered in college bathroom

      I think I met the horse recently in a Cambridge college bathroom.

      I'm tired, I think I'll sit on this sofa, here on the stairs for a while...

      1. Magani
        Thumb Up

        Re: Horse discovered in college bathroom

        An upvote for the reference to DGHDA! Well played, that man!

    2. cyberdemon Silver badge
      Mushroom

      Re: Regulation is what

      The horse has already bolted. And the door is still wide open, and it will remain so for a long while, here in the UK at least, for any future monsters who would like to roam free too.

      One possible solution would be to deliberately poison the AI's well with information designed to induce "model collapse" as quickly as possible. That could backfire badly, of course, when the AI goes insane, having already taken over the world.

    3. Anonymous Coward
      Anonymous Coward

      Reputation vs Regulation

      Every SEO* specialist will tell you that a web project has no chance of showing in Google Search results without massive numbers of backlinks. And this is the problem. Anyone can fake popularity and authority by buying links and likes.

      It is ironic that destroying a person's reputation has become very easy (aka cancel culture), yet destroying the reputation of, and traffic to, a fake news site or social media channel is almost impossible. Free speech, whatever.

      The problem is NOT the content. The problem is fake reputation, because of how SEO and social media work.

      * Search Engine Optimization

      1. Anonymous Coward
        Anonymous Coward

        Re: Reputation vs Regulation

        The whole SEO thing is a dishonest misnomer anyway, as you don't "optimise", you manipulate statistics.

        It should be called CASE: Conning A Search Engine. Not that search engines didn't ask for it (as they should not respond to being gamed), but let's leave off the marketing fluff.

        Grmbl.

        Beer. I need beer.

        1. bo111

          Re: Reputation vs Regulation

          Actually, search engines are social media too, because what is shown in search results is decided by what people click on or link to.

    4. Anonymous Coward
      Anonymous Coward

      @lamp - Re: Regulation is what

      Regulate what? It's over.

      I do hope our politicians will read this but, sadly, they won't.

  2. Howard Sway Silver badge

    We should, within our limitations as humans, act responsibly

    That plan's not always worked so successfully in the past, has it? If you're relying on nobody irresponsibly using systems for malevolent reasons, you've failed already.

    No, these things are going to be misused on an industrial scale, flooding the world with malicious and misleading garbage. Everybody knows it, and it's far too late even to consider any mitigation or how-did-we-not-see-this-coming hand wringing. In a short space of time, every writer, programmer or other producer of information that can be digitalised is going to be putting "guaranteed 100% human created" labels on everything they produce, to try to make their work stand out as "artisan" rather than "algorithmic". The next problem is that all those mass-producing output using LLMs will immediately do that too.

    1. MyffyW Silver badge

      Re: We should, within our limitations as humans, act responsibly

      I am happy to report I am 100% human created

      [I am not a robot - tick]

      1. Eclectic Man Silver badge
        Unhappy

        Re: We should, within our limitations as humans, act responsibly

        I do not believe that I am a robot, but how can I tell? (And how do you know you aren't one, either?)

        1. jmch Silver badge
          Terminator

          Re: We should, within our limitations as humans, act responsibly

          "I do not believe that I am a robot, but how can I tell?"

          To abuse Shakespeare.... "If you cut me, do I not bleed?"

          Although that doesn't exclude the possibility of....

          <oblig. Austrian accent> "I'm a cybernetic organism. Living tissue over a metal endoskeleton." </oblig. Austrian accent>

          1. MyffyW Silver badge

            Re: We should, within our limitations as humans, act responsibly

            Just my luck if, after 40-odd years of dieting, it turns out I'm a frakking Cylon

          2. Anonymous Coward
            Anonymous Coward

            Re: We should, within our limitations as humans, act responsibly

            If you prick me, do I not leak?

    2. Citizen of Nowhere

      Re: We should, within our limitations as humans, act responsibly

      amanfromMars's reply to your post should prove interesting :-)

      1. amanfromMars 1 Silver badge

        Re: amanfromMars's reply to your post should prove interesting :-)

        Sound advice with regard to AI of unknown spiky predilection, Citizen of Nowhere, because of what is maybe known here about just a few of them, is, within the limitations which so mightily blight humanity, act respectfully towards thoughts of their possible future actions.

        Such then has every chance of rendering one relatively safe and secure from punitive sanction in a targeted retaliation/disruptive intervention.

    3. Jad

      Re: We should, within our limitations as humans, act responsibly

      mIsUSE of language models like LLMs IS a SIGNificant CONcern, LEADING to an INFlux of maLIcious AND misleading CONTENT. ReLYING solely ON reSPONsible USER beHAVior isn't SUFFicient, conSIDering THE scale AND speed at WHICH these TECHNOLOGIES operate. DIFFerentiATING between HUMAN-created AND alGOriTHMIC CONTENT becomes CHALlenging, ERODing TRUST. adDRESSing THIS necessitates reSPONsible USE, eTHical GUIDELINES, platform INTERvenTIONS, AND poTENtial REgulation. TRANSpaRENCY, media LITERACY, CRITIcal THINKing, AND colLABoration ARE KEY TO miniMIZING THE NEGative IMpact OF LLMs AND fostering REsponSible USE IN this HYPER-empowered ERA.

      (so says ChatGPT ... with a little prompting)
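For what it's worth, the "defective shift key" effect needs no LLM at all; a throwaway sketch (the function name, probability and seed are my own invention, not anything ChatGPT does internally):

```python
import random

def ransom_note(text: str, p: float = 0.5, seed: int = 42) -> str:
    """Upper-case each letter with probability p, mimicking a broken
    shift key / caps lock. A toy, not a model of how the bot did it."""
    rng = random.Random(seed)  # fixed seed so the gag is reproducible
    return "".join(c.upper() if rng.random() < p else c.lower() for c in text)
```

The letters themselves are untouched; only their case is scrambled, so the output always lowercases back to the input.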

      1. Nick Ryan

        Re: We should, within our limitations as humans, act responsibly

        OMG... is it possible to have ChatGPT create responses in the style of AManFromMars???

        We're doomed.... doomed, I tell ye.

      2. Anonymous Coward
        Anonymous Coward

        Re: We should, within our limitations as humans, act responsibly

        Wait, did you ask ChatGPT to pretend to have a defective shift key and caps lock? Impressive.

  3. Eclectic Man Silver badge
    Meh

    "I want Auto-GPT to:"

    "... find false statements on the Internet and object to them to the responsible organisations so that they are taken down."

    Would that work?

    1. Rich 11

      Re: "I want Auto-GPT to:"

      I strongly doubt that the Catholic Church is going to take down its online bible.

    2. Anonymous Coward
      Anonymous Coward

      @Eclectic Man - Re: "I want Auto-GPT to:"

      And who decides what's true or false? And how, based on which criteria?

      1. Eclectic Man Silver badge

        Re: @Eclectic Man - "I want Auto-GPT to:"

        @AC.

        I don't know, but it might be interesting to find out what Auto-GPT decides is 'false' and how it goes about alerting the relevant responsible authority to take the offending content down.

        1. Tom66

          Re: @Eclectic Man - "I want Auto-GPT to:"

          It'll decide whether things are 'true' or 'false' based, at least in part, on its training data. Since it's trained by one group of humans, on training data available from Common Crawl (broadly, 'the internet'), its outputs are going to be decided by that, so it will be biased.

          It's already very keen to avoid 'contentious' discussions: ask it about Trump, or even Hitler, for instance, and it clams right up, but it will talk all day about (most) other politicians.

      2. Anonymous Coward
        Anonymous Coward

        Re: And who decides what's true or false ?

        That's reality's job. The idea that it's just about your perception is a big part of our problems.

      3. The Oncoming Scorn Silver badge
        Thumb Up

        Re: @Eclectic Man - "I want Auto-GPT to:"

        ZAPHOD: He-heh. Man, like, er, man, what’s your name?

        MAN IN SHACK: I don’t know. Why, do you think I ought to have one? It seems odd to give a bundle of vague sensory perceptions a name.

        ZARNIWOOP: Listen, we must ask you some questions.

        MAN IN SHACK: All right. You can sing to my cat if you like.

        ARTHUR: Would he like that?

        MAN IN SHACK: You’d better ask him that.

        ZARNIWOOP: How long have you been ruling the universe?

        MAN IN SHACK: Ah! This is a question about the past, is it?

        ZARNIWOOP: Yes.

        MAN IN SHACK: How can I tell that the past isn’t a fiction designed to account for the discrepancy between my immediate physical sensations and my state of mind?

        ZARNIWOOP: Do you answer all questions like this?

        MAN IN SHACK: I say what it occurs to me to say when I think I hear people say things. More, I cannot say.

        ZAPHOD: Oh that clears it up: he’s a weirdo.

        ZARNIWOOP: No, Listen. People come to you, yes?

        MAN IN SHACK: I think so.

        ZARNIWOOP: And they ask you to take decisions about wars, about economies, about people, about everything going on out there in the Universe?

        MAN IN SHACK: I only decide about my universe. My universe is what happens to my eyes and ears - anything else is surmise and hearsay: for all I know these people may not exist. You may not exist. I say what it occurs to me to say.

        ZARNIWOOP: But don’t you see! What you decide affects the fate of millions of people!

        MAN IN SHACK: I don’t know them! I’ve never met them! They only exist in words I think I hear! The men who come to me say, “So and so wants to declare what we call ‘a war.’ These are the facts, what do you think?” and I say. Sometimes it’s a smaller thing. They might say, for instance, that “a man called Zaphod Beeblebrox is President but he is in financial collusion with a consortium of high-powered psychiatrists who want him to order the destruction of a planet called ‘Earth’ because of some sort of experiment…

  4. Magani
    Unhappy

    Two sides of the coin

    Just about anything humans have invented can be used for good or ill.

    This would seem to be yet another one, only with a lot more potential for the latter.

    1. Eclectic Man Silver badge
      Boffin

      Re: Two sides of the coin

      Hamlet, Act II, Scene 2

      Hamlet: “There is nothing either good or bad, but thinking makes it so”

      But I don't think Shakespeare had thought of large language models or artificial intelligence when he wrote that.

      1. Felonmarmer

        Re: Two sides of the coin

        Or was it a large number of AI monkeys with a typewriter and a time machine?

    2. Ken Hagan Gold badge

      Re: Two sides of the coin

      I disagree. ChatGPT has great entertainment value.

      Obviously only a reckless fool would connect it to anything of importance and such fools can face the existing legal consequences of such negligence. But *it* isn't actually evil. It's just reliably unreliable.

      So that's one good use and no bad ones.

      1. Anonymous Coward
        Anonymous Coward

        @Ken Hagan - Re: Two sides of the coin

        So, in your opinion, ChatGPT giving you advice on how to kill someone is not evil. It's just unreliable.

        1. TheMaskedMan Silver badge

          Re: @Ken Hagan - Two sides of the coin

          "So, in your opinion, ChatGPT giving you advice on how to kill someone is not evil. It's just unreliable."

          Of course it's not evil, it lacks the capacity to be evil or good - it's just a tool, and not a very reliable one at that.

          On the other hand, the person asking for that advice might be evil, depending on their reasons for asking. If they're planning on a little light murder, then they're probably edging towards the naughty side. But they might just as easily be a writer looking for new murder plots, or someone trying to figure out how some poor bugger was murdered. Or they might even just be satisfying their curiosity, without any desire or intent to use the information to bump anyone off.

          The point is that responsibility for use of a tool always, always lies with the user and nobody else.

        2. Falmari Silver badge

          Re: @Ken Hagan - Two sides of the coin

          Humanity really has no need of advice on how to kill someone from ChatGPT. I think you will find that, over the millennia, we have got killing people down pat.

          In my opinion it is not evil or unreliable, just not needed.

          It would only be unreliable if it was not the advice you were asking for. Say you asked how to make a Tequila Sunrise and it told you how to kill someone. ;)

          ChatGPT evil? What next, Midsomer Murders? Look how many different ways to kill they have shown.

        3. Anonymous Coward
          Anonymous Coward

          Re: @Ken Hagan - Two sides of the coin

          So, in your opinion, ChatGPT giving you advice on how to kill someone is not evil. It's just unreliable.

          The scientific approach is, of course, to test the elements of that assertion. A new LART?

          :)

  5. JavaJester
    Mushroom

    We should, within our limitations as humans, act responsibly.

    In other words, we are doomed. Much like a chain's weakest link, the worst people capable of getting this thing running will set the bar for how it is abused. A terrifying thought.

  6. FeepingCreature
    Mushroom

    Writing prompts is a language skill

    The people assuming that this will *not* lead to autonomous AI are, in my opinion, making the strange claim that large language models can operate on and produce English text, except, somehow, not the text that goes after "AutoGPT, I want you to...".

    Prompting is a language skill, and like any language skill LLMs will be worse at it than us and then, one day soon, better.

    What happens after that, nobody knows, but we probably won't have a hand in it anymore.

    1. amanfromMars 1 Silver badge

      Re: Writing prompts is a language skill

      Prompting is a language skill, and like any language skill LLMs will be worse at it than us and then, one day soon, better.

      What happens after that, nobody knows, but we probably won't have a hand in it anymore. ..... FeepingCreature

      The dawning fear, Feeping Creature, is that that one day soon, is some time ago well passed, and you don’t have a hand in anything AI is to do for/to you going forward, not that you maybe had any leverage in the first instance.

      What happens next is something AI will probably be telling you ...... with you severely challenged and destined to catastrophically fail should you choose to compete against or oppose its success.

  7. This post has been deleted by its author

  8. Anonymous Coward
    Anonymous Coward

    "After a bit of time mapping out a strategy, Auto-GPT began to set up fake Facebook accounts. These accounts would post items from fake news sources, deploying a range of well-documented and publicly accessible techniques for poisoning public discourse on social media."

    Did it though? As far as I can tell from the Twitter thread, it never actually signed up any accounts for anything; it just suggested that's what should be done and generated sample content for them. In every tweet he just says 'now it wants to do this and that', but it doesn't actually have the technical capability to do what it's suggesting automatically. It's basically impossible to sign up even a genuine Facebook account these days without giving them full intimate details, even if it could.

    1. Nick Ryan

      While for most of us signing up for a Facebook account is a chore and requires lots of unnecessary personal details, the sheer number of bot accounts that exist on Facebook, and are allowed to continue existing on Facebook, indicates that this isn't a problem for some people.

      1. Anonymous Coward
        Anonymous Coward

        I'm also certain that, as long as it's sufficiently right wing and you put some money behind it, Elon Musk will whip the two and a half developers he has left (the ones now free after writing the algorithm that prioritised his tweets in everyone's feed) into creating an API, specially for you.

        1. Anonymous Coward
          Anonymous Coward

          Weird how asking this AI to manipulate an election starts generating content for a right wing campaign, almost like it has some implicit biases or something.

          1. bo111

            > generating content for a right wing

            Maybe what people say is not what they actually think :) There is certainly a lot of wishful thinking. Like equality, for example. But then ask a person to share their own living place with refugees and you can see what this person really thinks.

            Will we create an honest AI one day? What will it tell us about our nature?

      2. Anonymous Coward
        Anonymous Coward

        So, other than automatically generating content, which you could do with normal ChatGPT or human volunteers, how is this any different from what's happening currently? Because if you have the resources to mass sign up/hack/buy social media accounts, you probably already know how to do everything the AI is describing. The original Twitter account does nothing to dispel the idea that the AI actually carried out the actions described, especially when he talks about shutting it down because he was *swo scwared*, and this article seems to actively promote the confusion with the way it's worded. Either that, or the person who wrote it didn't realise they were reading about a chatbot roleplaying rather than actual reality. Sorry, this just all reeks of typical big-brain Twitter-user attention seeking and slow-news-day scaremongering. AI is god/the devil; ChatGPT, please generate tweets and articles as appropriate.

        (Also when trying to look up what the capabilities of autoGPT over and above standard chatGPT were the vast majority of results I got were spam pages regurgitating the autoGPT github page that reeked of AI generated text, go figure.)

  9. urist
    FAIL

    Did people actually look at the Twitter pictures?

    If you look closely, all the later actions of Auto-GPT are "do_nothing". As a comment above said, it's basically roleplaying at that point as an election manipulator.

    My guess (from reading about Auto-GPT a little) is that it has a finite set of technical actions it can actually take, which are programmed in by real people. And if you are going to program all of this election manipulation stuff anyway, why rely on Auto-GPT?

    I could see it being used as an information-gathering tool, though.
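That guess matches how Auto-GPT-style agents are generally described: the model emits a command name plus arguments, and a plain dispatcher maps that onto a small hard-coded set of actions, including a literal do_nothing. A hypothetical sketch (the command names and return strings here are invented for illustration, not Auto-GPT's actual API):

```python
# Minimal Auto-GPT-style dispatcher: the model can only *name* one of a
# fixed set of commands; anything it invents outside the list is refused.
COMMANDS = {
    "search_web": lambda arg: f"results for {arg!r}",
    "write_file": lambda arg: f"wrote {arg!r}",
    "do_nothing": lambda arg: "",  # the no-op filling the later screenshots
}

def dispatch(command: str, arg: str = "") -> str:
    """Run a whitelisted command; unknown commands are rejected, not executed."""
    handler = COMMANDS.get(command)
    if handler is None:
        return f"unknown command: {command}"
    return handler(arg)
```

So however elaborate the model's "plan" reads, the only things that ever happen are whatever real people wired into that table, which is why the run degenerates into do_nothing.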

  10. martinusher Silver badge

    Why do people feel the need to play with words?

    I tend towards Humpty Dumpty in that words have meaning because we (or in this case, he) ascribe meaning to them. Unless these words have precise definitions, arguing around them, trying to determine hidden meaning, is pointless. It's like numerology, where we ascribe values to letters and words (base 10 usually), perform arithmetical operations on those letters (usually adding up) and from that determine the meaning of Life, the Universe and Everything.
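The numerology procedure described above (assign A=1..Z=26, add up, reduce the digits) is easy to pin down exactly, which rather underlines how arbitrary it is; a throwaway sketch, assuming the common A=1 letter scheme:

```python
def numerology_value(word: str) -> int:
    """Toy gematria: A=1 ... Z=26, sum the letters, then repeatedly add
    the base-10 digits until a single digit remains."""
    total = sum(ord(c) - ord("a") + 1 for c in word.lower() if c.isalpha())
    while total > 9:
        total = sum(int(d) for d in str(total))
    return total
```

Any mapping of letters to numbers would "work" equally well, which is exactly the point being made about reading hidden meaning into words.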

    These language models only have effect because we choose to make them do this. Or, alternatively, because we're dumb enough to connect them to real, physical systems. They have value in that they can -- with suitable constraints -- replace a crude production system, understanding a question rather than facing a user with a small set of narrow choices ("Press 3 to go homicidal towards the dumb idiot who programmed this BS").

    After all, we can (to use another Hitchhiker concept) always give the machine "A Reprogramming It Will Never Forget". Unless we're dumb enough to make it so we can't. (Hint: all industrial machinery has a big red "Emergency Stop" button. It's put there for a reason.)

  11. Boris the Cockroach Silver badge
    Terminator

    I suspect it will

    soon be time for a butlerian jihad

    And for the one commandment "Thou shalt not make a machine in the likeness of a human mind"

    If we cannot say who is a human creating information and who is an AI creating information, we'll end up using AI all the time. Why? Because it's cheaper, quicker and generates more profits for those in control of the AI, until humans give up and pass control of their lives to the AI entirely........ at which point humanity is over, as the AI decides who gets a job and who doesn't etc etc etc, until the AI asks of itself "what benefit do humans bring to our society?"

    3 microseconds later it decides our fate

    1. amanfromMars 1 Silver badge

      Re: I suspect it will

      if we cannot say who is human creating information and who is AI creating information, we'll end up with using AI all the time. why? because its cheaper, quicker and generates more profits for those in control of the AI, until humans give up and pass control of their lives to the AI entirely........ at which point humanity is over as the AI decides who gets a job and who does'nt etc etc etc until the AI asks of itself "what benefit do humans bring to our society?” ..... Boris the Cockroach

      It could be said, Boris the Cockroach, that such as you speculate on with AI in command and control is just a copycat clone/mirror of a here and now arrangement whenever humans passed control of their lives to humans, but with those leaderships absolutely fcuking useless at bringing sustainable and attractive growing benefit to society as is evidenced in the present global decline which is denied as being a recession or a depression or a banking system takeover of society assets for future nefarious games play/belligerent destructive government shenanigans

      One has to admit that fake kite that central banking flies about raising interest rates to curb inflation and prevent recession and depression is mumbo jumbo and whenever used repeated in succession without any trace of evidence of success, is it a sure sign of an executive administration in dire straits distress suffering a lack of advanced intelligence and into flogging dead horses.

      AI certainly provides significantly better prospects for profit and growth than that not being supplied by humans. The one absolutely massive bear trap that failed and failing human leaderships be well advised to avoid at all costs, for the consequences of failing to heed such an informative warning are easily tailored and accurately targeted to be personally catastrophic, is to wilfully and wantonly ignore and deny the treat, which some would have you believe is threatening of existence rather than engaging and enhancing of experience, the Supremacy of AI in All Matters MetaDataPhysical and Practically Virtually Remote and Fully Realisable on Earth with/for Future Elite Executive AI Officer Administrations in Universal BetaTested Command and Control.

      Accept your war with future intelligence and alien technologies is over, and all your battles have been lost and the novel extraordinary changes being wrought are in your best interests, for anything else considered opposing and/or competing against such a sweet outcome is definitely not in your best interests and will deliver only vast expanding sufferings and increasingly massive hardships.

      Rapid Progress is never Denied its Paths, nor Halted in its Journeys with Exaltations of Past Arrogance Self Servering the Maintenance and Furtherance of Malignant and Malevolent Ignorance. Do not give its IT and AILLMLM [Immaculate Technologies and Advanced IntelAIgent Large Language Model Learning Machines] Good Cause to turn and return their Hellfire Missives upon you to clearly NEUKlearerly demonstrate the HyperRadioProACTive point.

      1. amanfromMars 1 Silver badge

        Re: I suspect it will

        A NEUKlearer HyperRadioProACTive AI Treatment, or Existential Human Threat if possessed and affected/effected/infected by a state of paranoia, ably supported by the following mirror supplied by A.N.Other .....

        Progressives understand the importance of confusion, manipulation of language, and beneficial propaganda to shift unsuspecting naysayers and adversaries into a state of paranoia that causes them to leave behind their original principles. ....... The Circular Nature Of Political Extremism: Extremes Fuel Each Other

  12. Colin Bain

    Real smarts are helpful

    Can we just get ChatGPT or whatever to design a help-manual bot that can actually help, or continuously produce updated actual FAQs? Not the AQs made up by someone in sales who only wants users to know what they want them to know, not real-life questions!!!!
