ChatGPT has mastered the confidence trick, and that's a terrible look for AI

There's a new chatbot in town, OpenAI's ChatGPT. It is a robot researcher with good communication skills; you can ask it questions about various areas of knowledge and it will write short documents in various formats, in excellent English. Or it will write bad poetry and incomprehensible jokes, and obey a command like "Write …

  1. Whitter
    Coat

    "It’s a Dunning-Kruger effect knowledge simulator par excellence"

    The irony is strong in this one

    https://www.mcgill.ca/oss/article/critical-thinking/dunning-kruger-effect-probably-not-real

I'm not actually going to argue the arguments - I do know that I don't know enough - but given the context of the Godel / Turin chat, the concept of (potential) equivalence made me smile

    1. Jonathan Richards 1

      Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

      Gödel and Turing, by the way. I don't think that the author hinted at the possibility of incompleteness theorem == Turing machines, he asked for linkage, and there certainly is one: Incompleteness says that there are true statements that cannot be proved, and Turing's imaginary machines show the Halting Problem is undecidable. If a computer program is a statement (in logical terms, it is) then there is a strong link.

      ChatGPT doesn't do as well as the quickest of quick Internet searches: I found this within fourteen seconds.
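The diagonal argument behind that link can be sketched in a few lines of Python. This is only an illustration (the `defeat`/`pessimist` names are made up): hand any candidate halting predicate to the constructor and it returns a program the predicate must misjudge, which is the heart of Turing's proof.

```python
def defeat(halts):
    """Given any candidate oracle halts(f) -> bool ("does f() halt?"),
    build a program on which that oracle is guaranteed to be wrong."""
    def troll():
        if halts(troll):      # oracle claims we halt...
            while True:       # ...so loop forever
                pass
        # oracle claims we loop forever, so halt immediately
    return troll

# A candidate oracle that claims nothing ever halts:
pessimist = lambda f: False
t = defeat(pessimist)
t()   # returns immediately, so the oracle was wrong about t
```

Whatever the candidate oracle answers about `troll`, `troll` does the opposite, so no total, correct `halts` can exist.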

      1. NoneSuch Silver badge
        Alert

        Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

        "It goes on to say it’s not been programmed with specific language rules about syntax, types and structures, so it often gets things wrong."

        They are more human than you think then, as I have worked with many people who fit this description.

      2. Brewster's Angle Grinder Silver badge

        Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

        You clearly know more than me. It could be because I've started studying the halting problem, but haven't yet finished.

        1. A.P. Veening Silver badge

          Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

          You were halted?

          1. Brewster's Angle Grinder Silver badge

            Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

            No. Work is ongoing. I'll let you know when I stop.

            1. desht

              Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

              "when" ?

        2. Fruit and Nutcase Silver badge

          Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

          Why are you halting? Keep going

    2. bsondermeijer

      Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

      Ah it made you smile, so you are not even close to AI. :-)

    3. Anonymous Coward
      Anonymous Coward

      Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

It is real and you can see it any time; Conservatives Dunning-Kruger all the time

    4. TheBruce

      Re: "It’s a Dunning-Kruger effect knowledge simulator par excellence"

I asked ChatGPT if the Dunning-Kruger effect is real. It took 4 paragraphs to say probably yes. I then asked it if "It’s a Dunning-Kruger effect knowledge simulator par excellence". Got into a back-and-forth discussion until it finally conceded that it's possible an AI could appear to have Dunning-Kruger, but it can't actually have it because it doesn't have emotions.

  2. Anonymous Coward
    Anonymous Coward

    How much leccy does ChatGPT consume?

    How expensive is this thing to run?

    How many kWh per paragraph?

    1. Filippo Silver badge

      Re: How much leccy does ChatGPT consume?

      I don't know the answer to that.

      However, I sometimes generate "AI art" with Stable Diffusion for fun, and it only takes a few seconds of my GPU. And I know there are other LLMs that I should be able to run with similar performance, if I wanted to.

Not the current crop, of course. ChatGPT is surely much bigger. But I don't think it's more than a couple of orders of magnitude bigger. So my guess would be: about as much as playing an AAA videogame for a few minutes? Certainly nothing of the order of kWh.

      1. Yet Another Anonymous coward Silver badge

        Re: How much leccy does ChatGPT consume?

Training it = lots

Running it = bugger all

    2. Someone Else Silver badge

      Re: How much leccy does ChatGPT consume?

      If I wanted to go full-metal DK on this, I'd respond along the lines of: as much as I say it does.

      Of course, if it is self-aware, it should be able to tell you, were you to ask it.

      1. YetAnotherLocksmith Silver badge

        Re: How much leccy does ChatGPT consume?

        The issue with that is, it is very, very confident. You can watch as it confidently tells you there are two z's in Zebra. Though apparently not always - there's also a degree of ... bias? ... from what it was thinking about a moment before. The results aren't consistent.

    3. LionelB Silver badge

      Re: How much leccy does ChatGPT consume?

      How much electricity does ChatGPT consume?

      "I'm sorry, but as a language model trained by OpenAI, I do not have the ability to consume electricity or perform any physical actions. I exist solely as a computer program that is designed to generate human-like text based on the input that I receive. I don't have any information about the amount of electricity that may be used in the hardware and software systems that are required to run me."

    4. cdrcat

      Re: How much leccy does ChatGPT consume?

Very, very little, at the margin. They give you $18.00 of credit to use, and on the dashboard you can see how much it costs you to generate a hundred paragraphs (hint: very little). By assuming that most of the $ go on hosting, and using an estimate of the percentage of a data centre's costs that go on leccy, you can make an estimate of leccy consumption. Training is expensive, but gets amortised over a very large number of prompt queries.
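That estimation recipe can be run as arithmetic. Every number below is a placeholder assumption (illustrative spend, guessed hosting and electricity fractions, a guessed tariff), not a measured figure; the point is the shape of the calculation.

```python
# Back-of-envelope leccy estimate -- all inputs are guesses, not data.
spend_usd = 0.02          # dashboard cost for ~100 paragraphs (illustrative)
hosting_fraction = 0.8    # share of the price that pays for hosting (guess)
leccy_fraction = 0.15     # share of hosting cost that is electricity (guess)
price_per_kwh = 0.10      # USD per kWh, typical bulk rate (guess)

kwh = spend_usd * hosting_fraction * leccy_fraction / price_per_kwh
kwh_per_paragraph = kwh / 100
print(f"~{kwh_per_paragraph * 1000:.2f} Wh per paragraph")
```

With these made-up inputs it comes out at a fraction of a watt-hour per paragraph, which is consistent with "very, very little" at inference time.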

    5. Tom66

      Re: How much leccy does ChatGPT consume?

      The model requires around 800GB of VRAM to run, so assuming 48GB server Tegra cards, that requires something close to a 4U full of them.
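The 800GB figure above can be sanity-checked with rough arithmetic, assuming a GPT-3-scale model of 175 billion parameters (a published figure for GPT-3; an assumption for ChatGPT) stored as 4-byte floats, with the remainder going on activations and serving overhead.

```python
import math

params = 175e9            # GPT-3-scale parameter count (assumption for ChatGPT)
bytes_per_param = 4       # fp32 weights; halve this for fp16
weights_gb = params * bytes_per_param / 1e9   # weights alone, in GB

vram_needed_gb = 800      # figure quoted above, weights plus overhead
per_card_gb = 48
cards = math.ceil(vram_needed_gb / per_card_gb)
print(weights_gb, "GB of weights;", cards, "x 48GB cards")
```

Weights alone come to 700GB at fp32, and 800GB across 48GB cards needs 17 of them, i.e. roughly the "4U full of them" estimate.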

  3. Mike 125

    prove it!

    "Just one problem – the quality of the code is bad."

    We've all seen terrible code produced by humans- that proves nothing!

    So here's the challenge: Find the shortest, simplest request, which demonstrates unequivocally how dangerously dumb ChatGPT actually is.

    1. Phil O'Sophical Silver badge
      Happy

      Re: prove it!

      > Write Tetris in C

      main()

      {

      printf("Tetris");

      }

      1. MiguelC Silver badge

        Re: printf("Tetris");

        I could argue that it is a possible solution to the query.

        Is the answer wrong, or are the functional specs insufficiently detailed?

      2. Someone Else Silver badge
        Coffee/keyboard

        Re: prove it!

        OK, enough of that, Sir! - - - - ->

    2. Anonymous Coward
      Anonymous Coward

      Re: prove it!

Ah, we're into 'prove every swan is white' territory. Well, Stack Overflow have said pack it in, its output is rubbish. Statistically they've shown it's bad. The point is it's meant to be good. I remember when Eliza came out, and watching people interact with it for hours. It spits back amusing responses; the fact it responds as it does is clever, the content is not, and doesn't bear deep scrutiny from a grown-up. Humans do that too (as per your argument), but they're usually under 5.

      1. Jason Bloomberg Silver badge

        Re: prove it!

        The fact it responds as it does is clever, the content is not and doesn't bear deep scrutiny from a grown up. Humans do that too (as per your argument), but they're usually under 5.

        Or down the pub, or a politician who has opened their mouth, or a typical rabid tabloid or propaganda media outlet.

        The under-fives are mostly harmless. It's adults with agendas who aren't.

    3. martinusher Silver badge

      Re: prove it!

I thought that I'd try asking it a couple of things. It's really good at producing surprisingly content-free material. The sort of stuff that's churned out by corporate PR.

I also think it's only going to get better. We're being allowed to play with it as a form of beta test. I suspect that it's already got a use for writing that sort of anodyne release, tidying up corporate communications in general and replacing a lot of journalists. It's obviously going to improve, and one way would be to provide it with interfaces to the real world. This would allow it to originate material and, while it's about it, update older material as a sort of Winston Smith Bot.

      1. Code For Broke

        Re: prove it!

We are allowed to test it for free because we are training it. Please select all pictures of traffic lights before proceeding to the next comment.

      2. Ken G Silver badge

        Re: prove it!

        It could rewrite wikipedia, respecting all the rules while being completely incorrect on content.

        1. rdhma

          Re: prove it!

          How would we tell?

          1. YetAnotherLocksmith Silver badge

            Re: prove it!

By having domain experts comb through it. But, funnily enough, having done it on a couple of paragraphs I had it generate for me, it is hard work. Convinced it is correct, it made stuff up beautifully! 80% of it I could say was true or false off the top of my head, but the other 20% meant hitting the books and search engines and files, because it made up stuff that no human ever had before, but which was still plausible - like claiming that two people had worked together because one was related to the other through marriage, because the name was the same. But there's no evidence for this at all that I could find, and it seems, in another case, to be very unlikely that the people were related. But it could be that some super-obscure history book and family tree proves it. Or else it just made it up because it seems plausible.

      3. LionelB Silver badge

        Re: prove it!

        One thing which has genuinely impressed me is --content aside -- the articulacy, expressiveness and grammatical correctness of its replies. It actually writes damn good prose, I daresay better than many professional journalists and PR wonks.

        This also suggests to me that it is doing something more sophisticated than simply trawling the interwebs and cut 'n' pasting snippets; if that were the case, you'd expect the output to be far wartier.

    4. Drew Scriver

      Re: prove it!

      The (presumable) author wrote, "The good thing about code is that you can swiftly tell if it's bad. Just try to run it."

If only it were that simple. Unfortunately, "just run it" has been a pretty common way to "ensure" that bad code "works". Who cares that the programmer submits 37 individual round-trip statements to the database server to store the information he collects in a single web form? Or that a record is copied in its entirety every time a single field is updated, in order to create a history of changes?

      1. Anonymous Coward
        Anonymous Coward

        And that's just the inefficient cases

The more dangerous ones are going to do things like storing an inaccurate but plausible result, or returning a correct value under most circumstances but diverging widely from the correct results in some cases. QA frequently misses these issues when humans cause them accidentally; don't count on it catching ML-generated ones any more easily.

The best part is, since the models are often trained on the same code with the same bugs in public repos, they are likely to output code with those errors under similar circumstances. Worse, as ML-generated code is published online, they are sniffing their own farts and incorporating that into their training data as well.

ML models are a neat tool, but we are doing it wrong.

        1. YetAnotherLocksmith Silver badge

          Re: And that's just the inefficient cases

I saw a great example where someone used it to pull out the zip codes from a set of data. Brilliant! One simple GPT query, via a spreadsheet, tied to each cell as input/output.

          Except for the 2 out of about 20 that were literally the wrong zip codes! They weren't wildly wrong - but humans spotted that a 0 was an 8, etc, which is a weird "typo" style mistake, considering the input was right there, and the output was right there, and the AI is saying "Look! Done it already!"

          At 0:27 in the video here: https://twitter.com/shubroski/status/1587136794797244417?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1587136794797244417%7Ctwgr%5Ebaa62a7e312b1ce0afbe70d83ac2fa95f414879e%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.redditmedia.com%2Fmediaembed%2Fykdq2h%3Fresponsive%3Dtrueis_nightmode%3Dtrue

          Look at lines 5, 12 and 15 - wrong!

Now look at lines 4 and 13, and tell me: are they right or wrong? These "errors" you'll have to go and look up properly, to test whether they are correct or not!
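That particular failure mode - an "extracted" zip code that never appeared in the source row - is mechanically checkable. A hedged sketch (function and data are made up for illustration): flag any output that isn't a verbatim substring of its input.

```python
def plausible_extraction(source_row: str, extracted_zip: str) -> bool:
    """Cheap sanity check for LLM-extracted zip codes: the extracted
    string should appear verbatim somewhere in the source row.
    Catches the 0-vs-8 'typo' class of error, not wrong-field picks."""
    return extracted_zip in source_row

rows = [
    ("Jane Doe, 742 Evergreen Terrace, Springfield, 80214", "80214"),
    ("John Roe, 12 Main St, Denver, 80214",                 "88214"),  # hallucinated 8
]
flags = [plausible_extraction(src, out) for src, out in rows]
print(flags)   # second row fails the check and needs a human look
```

It won't prove an extraction right, but it cheaply separates "definitely made up" from "needs proper checking", which is exactly the triage described above.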

      2. Code For Broke

        Re: prove it!

        <raising my hand> Uh, Sir, may I say, that I have written a form for a db that makes a full copy of the record as a part of any update. Why? The boss commonly wants the changes reverted and looks at the history and says, "Take us back to pk 1029384857. Make it so."

        How would you handle that?

        1. Andy 73 Silver badge

          Re: prove it!

          The *form* should not do that. Ideally you only create the delta, and you take that delta as close to the database as possible (or even into it) before unpacking the changed record.

What you shouldn't be doing is taking a large, complex object with unknown contents and passing it through a network layer + browser on the way to an update - it opens up all sorts of risks of corruption, changes in encoding, security holes and accidental mishandling.
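One way to realise "only create the delta" is to record just the fields whose values actually changed, keeping their previous values so the boss's revert-by-history request still works. A minimal sketch using in-memory SQLite; all table and column names here are invented for illustration.

```python
import sqlite3, json

# Tiny demo schema -- names are made up, not anyone's real design.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer (pk INTEGER PRIMARY KEY, name TEXT, city TEXT)")
db.execute("CREATE TABLE history (pk INTEGER, delta TEXT)")  # JSON of changed fields only
db.execute("INSERT INTO customer VALUES (1, 'Jane', 'Leeds')")

def update_with_history(pk, changes):
    """Apply an update, storing only the previous values of changed fields."""
    row = db.execute("SELECT pk, name, city FROM customer WHERE pk=?", (pk,)).fetchone()
    old = dict(zip(("pk", "name", "city"), row))
    delta = {k: old[k] for k in changes if old[k] != changes[k]}  # prior values
    if not delta:
        return  # nothing actually changed; no history row needed
    db.execute("INSERT INTO history VALUES (?, ?)", (pk, json.dumps(delta)))
    sets = ", ".join(f"{k}=?" for k in changes)
    db.execute(f"UPDATE customer SET {sets} WHERE pk=?", (*changes.values(), pk))

update_with_history(1, {"city": "York"})
stored = json.loads(db.execute("SELECT delta FROM history").fetchone()[0])
print(stored)   # only the one changed field, not a full row copy
```

Reverting is then just re-applying the stored delta, and the history table stays small no matter how wide the record is.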

        2. Anonymous Coward
          Anonymous Coward

          Re: prove it!

          With a weighty and possibly sharpened clue stick.

          Or change either database engine or boss, whichever is cheapest and takes least time.

      3. that one in the corner Silver badge

        Re: prove it!

        "Just run it": well, if it compiles that is already better than many an example piece of code.

        > Who cares that the programmer submits 37 round individual statements to the database server to store the information he collects in a single web form? Or that a record is copied in its entirety every time a single field is updated in order to create a history of changes?

        Don't know about anyone else, but round here "just run it" *also* includes running it with profiling[1] enabled before it goes anywhere near being released: otherwise you can't claim to have any idea at all about what is going on.

        Of course, understanding what the profiler is telling you is a skill in itself...

        [1] which means every kind of profiling you can get going, such as logging timestamps for network data reaching each major stage of your code (are you collecting it from the stack only to keep it unexamined in your own queue?) to running the compiler suite's profiler[2] (is your network data logging code actually the cause of 80% of the execution time? Oops).

[2] I was utterly horrified when MS dropped the profiler from Visual Studio because they could not (or would not) provide one that worked with .NET - that was supposed to be a "professional" development system?!
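The "run the compiler suite's profiler before release" habit looks like this in Python terms, using the standard library's cProfile/pstats (a minimal sketch; `hot_path` is a stand-in for whatever your suspect code is, such as that network-data logging).

```python
import cProfile, pstats, io

def hot_path():
    # Stand-in for the code under suspicion, e.g. network-data logging.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
hot_path()
profiler.disable()

# Dump the five most expensive calls by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
summary = next(line for line in report.splitlines() if "function calls" in line)
print(summary)
```

As the comment says, the profiler only tells you where the time goes; deciding whether that 80% in the logging code is acceptable is still on you.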

    5. Michael Wojcik Silver badge

      Re: prove it!

      Find the shortest, simplest request, which demonstrates unequivocally how dangerously dumb ChatGPT actually is.

      Chaitin's AIT says you can't prove you've found the shortest.

  4. This post has been deleted by its author

  5. Andy 73 Silver badge

    The emperor has no clothes.

The word is beginning to come out - but it's the vacuous commentators online, who rely on churning out quick opinion pieces about half-understood technology, who are most vocal about how "astounding" ChatGPT is. The irony that it is their jobs most at risk seems lost on them.

    1. Yet Another Anonymous coward Silver badge

      Re: The emperor has no clothes.

Or of course, all those pieces are already written by ChatGPT.

    2. low_resolution_foxxes

      Re: The emperor has no clothes.

Not sure if they still do it, but for a while the BBC seemed to hire purely Oxbridge arts graduates to present BBC News technology articles. I believe the thinking was that non-techie people could explain science to the average punter better.

It just made me want to smash my television when they repeated how cool the iPhone was for the 5 millionth time.

      1. Yet Another Anonymous coward Silver badge

        Re: The emperor has no clothes.

        Worse is the Economist on any science topic

      2. Fruit and Nutcase Silver badge
        Joke

        Re: The emperor has no clothes.

        It just made me want to smash my television

        Look on the positive side - you won't need a TV licence

      3. MyffyW Silver badge

        Re: The emperor has no clothes.

BBC Click used to annoy the hell out of me. It would always come on the News Channel at about 5am as I was feeding my first-born, telling me I needed an iPhone or other such crap, and all I could think was ..... sleeeeep

      4. Anonymous Coward
        Anonymous Coward

        Re: The emperor has no clothes.

        Without, obviously, naming Stephen Fry.

      5. GruntyMcPugh Silver badge

        Re: The emperor has no clothes.

It wasn't just the BBC. 'The Independent' had a guy with a degree in journalism doing the science and tech articles, who used 'solar system' and 'galaxy' interchangeably in one article and was utterly clueless generally. They never showed any scepticism or offered any critique of any claim made by the people in the articles they promoted; it was often just re-purposed press-release boilerplate. Oh, and when I checked out the entry qualifications for the dude's degree? None. Not even English GCSE.

        1. YetAnotherLocksmith Silver badge

          Re: The emperor has no clothes.

          It was "Private school, same as the editor", probably. And it's a big issue.

      6. that one in the corner Silver badge

        Re: The emperor has no clothes.

        > BBC seemed to purely hire Oxbridge arts graduates to air BBC news technology articles

        Well, they had had great success with James Burke in particular and good science communicators were thin on the ground: it is only recently that "chairs in science communication" started to appear (and be occupied).

    3. Michael Wojcik Silver badge

      Re: The emperor has no clothes.

      it's the vacuous commentators online who rely on churning out quick opinion pieces about half understood technology who are most vocal about how "astounding" ChatGPT is

      To be fair, "astounding" is subjective. They may well be astounded.

  6. naive

    It doesn't stop impressing me

    The replies it gives on questions concerning subjects I am familiar with seem excellent.

This is probably how Google 2.0 will be; just curious how Google will merge ads into its replies. It will have an impact on education as well.

    1. Arthur the cat Silver badge

      Re: It doesn't stop impressing me

      The replies it gives on questions concerning subjects I am familiar with seem excellent.

      Colin Percival found it knew a reasonable amount about him apart from insisting that he was dead. (Spoiler: he's not.)

      1. Richard 12 Silver badge
        Terminator

        Re: It doesn't stop impressing me

Oh no. We've exactly failed on the alignment problem.

        We're all going to die.

    2. PaulVD

      Re: It doesn't stop impressing me

      Merging the ads will be trivial: to whatever search term you enter, Google 2.0 will just append in the background "and include an advertisement for <whoever wins the on-line bidding for this ad slot>".

      The result is that the AI's response to your query will have the ad seamlessly integrated into the returned content, so that naive consumers (the great majority) may not even realise that the ad is there. And Google's development costs for this tremendous advance in human welfare marketing will be trivial.

      But I mostly worry about the future of aManfromMars. Her contributions to our ruminations will no longer seem as interesting.

    3. YetAnotherXyzzy

      Re: It doesn't stop impressing me

      "curious how google will merge ads into its replies"

      Perhaps tack on "brought to you by Carl's Jr." to the end of everything.

    4. MyffyW Silver badge

      Re: It doesn't stop impressing me

After tinkering with it on a few topics I have knowledge of, its replies at a general level are verbally very impressive. But dig into the facts and it won't back them up with sources. And that is the ultimate fail for any piece of work beyond GCSE level

  7. b0llchit Silver badge
    Childcatcher

    Plus ça change, plus c'est la même chose

    Writing something and knowing what you write are two completely different things.

    ChatGPT has mastered "writing something", just like the majority of the human population. That is, indeed, an explosive situation. Those who know and have lesser moral standards will use this, just like ages and ages before it, to control, cheat, suppress and rule.

    So much change and everything stays the same.

    1. Anonymous Coward
      Anonymous Coward

      Re: ChatGPT has mastered "writing something", just like the majority of the human population.

well, that majority is quickly becoming a minority, given most humans now engage in 'consuming' so-called content, rather than 'producing' it (or even 'digestively re-producing').

    2. Arthur the cat Silver badge

      Re: Plus ça change, plus c'est la même chose

      Writing something and knowing what you write are two completely different things.

      SMBC's take on this.

  8. Mike 137 Silver badge

"That smart people come out with such nonsense is a sign of the seductive power of ChatGPT"

    Or alternatively something to do with our definition of "smart"? It generally resolves to "smart in some things" but smartness does not necessarily extend to the entire persona.

  9. Anonymous Coward
    Anonymous Coward

    Feel free to foreclose on No.4

    But please leave No. 22 alone.

    1. Primus Secundus Tertius

      Re: Feel free to foreclose on No.4

      I live in a block of flats. Apartments, as some say. But not in no. 4. We once had a whore in number 4. But we don't have a no. 30 with Dirty Gertie.

      1. Anonymous Coward
        Anonymous Coward

        Re: Feel free to foreclose on No.4

        At 666 there lives a Mr Miller, he's the local vicar and a serial killer.

    2. Anonymous Coward
      Anonymous Coward

      Re: Feel free to foreclose on No.4

      Acacia Avenue ?

      1. 42656e4d203239 Silver badge
        Pint

        Re: Feel free to foreclose on No.4

        you, sir, have too few upvotes for that comment.

        Have a pint (of Trooper)

    3. MyffyW Silver badge

      Re: Feel free to foreclose on No.4

      Number 29 should be left in peace too, lest fruit-based superhero antics come into play....

  10. Detective Emil

    We interrupt these comments to bring you this commercial message

    Don’t know what Dunning-Kruger is? There’s a tee shirt just for you. [NewsThump, from which I get no commission…]

    1. cdrcat

      Re: Dunning-Kruger Tee

The Dunning-Kruger paper posited two results; the second, less well known, was that humble people are “skilled but unaware just how skilled they are”.

You can apply the label Dunning-Kruger to yourself with pride if you humbly belong to the second group, who underestimate their skills - similar to owning the word c*nt because you have an affinity for c*nts.

    2. Anonymous IV

      Re: We interrupt these comments to bring you this commercial message

      Does anyone know what is the name of the phenomenon which causes the prices of Dunning-Kruger tee-shirts to be identical for sizes XXS thru' 5XL?

      1. that one in the corner Silver badge

        Re: We interrupt these comments to bring you this commercial message

        Evenhandedness.

        No weightism, no sizeism.

  11. Anonymous Coward
    Anonymous Coward

    It doesn’t know what it’s talking about, and it doesn’t care

    this reminds me... isn't it what upper management is about? And politics? AI can do their jobs, faster and cheaper? Oh dear, this one needs locking up quick...

    1. Paul Hovnanian Silver badge

      Re: It doesn’t know what it’s talking about, and it doesn’t care

      Excellent observation.

      And as we climb the corporate hierarchy (see the Dilbert Principle) it should be increasingly easy to replace the higher ranks with simple shell scripts.

      1. Code For Broke

        Re: It doesn’t know what it’s talking about, and it doesn’t care

But it can't play golf or just say "yes" to everything. C-Suite and middle management are preserved.

        1. Paul Hovnanian Silver badge

          Re: It doesn’t know what it’s talking about, and it doesn’t care

          I don't know about golf, but linux has a 'yes' command.

  12. Primus Secundus Tertius

    Turing test

I have been wondering whether ChatGPT can tell whether it is talking to a human or to another chatbot.

    1. nijam Silver badge

      Re: Turing test

      > ...

      It can't, but it doesn't matter because it knows they're the same thing anyway.

  13. Terry 6 Silver badge

A strange article in which to find this essential general truth..

    If you can persuade people you’re right, they’re very unwilling to accept proof otherwise, and up you go. Old Etonians, populist politicians and Valley tech bros rely on this, with results we are all too familiar with.

    One of these things that needs to be shouted from every rooftop. Especially with the likes of TwitMusk to amplify the bollocks.

    1. Terry 6 Silver badge

Re: A strange article in which to find this essential general truth..

      Oh, that btw is irrespective of whether Dunning Kruger is valid. I'm not sure the two parts of the paragraph are even related.

    2. Anonymous Coward
      Anonymous Coward

Re: A strange article in which to find this essential general truth..

      Especially with the likes of TwitMusk to amplify the bollocks.

      No amplification. Musk simply removed the censorship from Twitter which, as has now been revealed in the Twitter Files, heavily favoured the democrat party.

      Everyone should be in favour of free speech because next time it might be your "team" that gets censored.

      1. Ace2 Silver badge

Re: A strange article in which to find this essential general truth..

        Just repeating stupid nonsense over and over, until it becomes accepted as true.

        Utterly depressing.

      2. Ace2 Silver badge

Re: A strange article in which to find this essential general truth..

        Quick followup. If ‘Ol Muskie gets booed off stage, and then spends his evening deleting tweets of the video and suspending the posters, would that be pro- or anti-“free speech”?

      3. Code For Broke

Re: A strange article in which to find this essential general truth..

In my analysis, Twitter favored the rational party and was defensive about circulating "news" from the same guy who held his press conference outside a landscaping business and an adult bookstore.

        1. scrubber

Re: A strange article in which to find this essential general truth..

And yet the rational party are now on the cusp of nuclear war with Russia, and the adult bookstore news conference turned out to be not only true but genuinely newsworthy.

As always, though, the cover-up is actually worse than the original crime.

      4. YetAnotherLocksmith Silver badge

Re: A strange article in which to find this essential general truth..

        You posted this tripe 4 days ago. It hasn't aged well.

        The list of stupid over the last 4 days I can't be bothered to type again, but in the last 24 hours he's deleted the ElonJet Twitter account that he promised he wouldn't, then killed off 4 blue tick journalists who reported on that, then gatecrashed a Twitter hangout where those guys were talking about it, and then left after he got pushback. So far, so spineless. But then 4 minutes later he TURNED OFF all the Twitter spaces...

  14. TaabuTheCat

    If you know what I know

The only way to know if ChatGPT is giving you the right answer is to know the right answer. But I agree it can be convincing, and it's not just Stack Overflow where it's starting to make an appearance. Good news is, you can usually recognize it's Chat giving the answer because of the bloviation, and lazy people are just cutting and pasting entire "answers". I wonder if the word-diarrhea is intentional - I heard something about the eventual paid version being charged by the characters of output.

  15. Terry 6 Silver badge

    I asked it what War and Peace was about.

    Wasn't too impressed with its answer.

    And it doesn't know about Sir Terry Pratchett's books either.

On the other hand, its explanation of how to rotate the screen in Linux was pretty good - better than the online resources I'd used previously

    1. matthewdjb

      Re: I asked it what War and Peace was about.

      I asked: Who was Sir Terry Pratchett?

      Answer:

      "Sir Terry Pratchett was a British author, best known for his Discworld series of fantasy novels. He was born in 1948 and began writing at a young age. He published his first novel, The Carpet People, in 1971, and his first Discworld novel, The Color of Magic, in 1983.

      "Over the course of his career, Pratchett wrote more than 70 books, many of which were part of the Discworld series. He was known for his humorous and satirical writing style, and his books often explored themes of social and political satire, as well as science fiction and fantasy."

      So clearly it "knows" something.

      I then asked: What is War and Peace about?

      "War and Peace is a novel by Russian author Leo Tolstoy, first published in 1869. The novel is considered one of the greatest works of world literature, and it is renowned for its epic scope, detailed depiction of Russian society in the early 19th century, and its exploration of the psychological and emotional lives of its characters.

      War and Peace is set against the backdrop of the Napoleonic Wars, and it follows the lives of several aristocratic families in Russia as they navigate the social, political, and personal challenges of the time. The novel explores themes such as the nature of war, the impact of historical events on individuals and society, and the search for meaning and purpose in life.

      At its core, War and Peace is a story about the human experience and the ways in which people are affected by the events and circumstances of their time. It is a powerful and thought-provoking exploration of the complexities of the human condition."

      1. Terry 6 Silver badge

        Re: I asked it what War and Peace was about.

        I just chose a book title, rather than Sir Terry himself.

        1. matthewdjb

          Re: I asked it what War and Peace was about.

          What is the book "Equal Rites" by Terry Pratchett about?

          "Equal Rites is a novel by Terry Pratchett, published in 1987. It is the third book in the Discworld series. The story follows the character Granny Weatherwax, a witch, as she tries to prevent a young girl named Eskarina Smith from becoming the first female wizard in the Discworld. Along the way, Granny must confront the prejudices and superstitions of the wizarding community, who believe that magic is only for men."

          Maybe it's only read the early works?

          1. that one in the corner Silver badge

            Re: I asked it what War and Peace was about.

            Ahem, Granny did *not* try to prevent Eskarina becoming a wizard! She was firmly on Esk's side against the prejudice of the UU. However, she *did* start off teaching Esk about being a witch, 'cos that is what Granny knows about, before admitting that it wasn't going to work. Good thing too, as Esk's triumph in the end (spoiler: our protagonist wins) was done in a very witchy way.

            So, clearly ChatGPT was on form, sounding like it knew but getting the important stuff wrong (and in such a way that it slipped past a few readers here?).

            1. YetAnotherLocksmith Silver badge

              Re: I asked it what War and Peace was about.

              *EXACTLY THIS*!!

      2. Stratman

        Re: I asked it what War and Peace was about.

        I'm pretty sure Sir Terry wrote The Colour of Magic

      3. Anonymous Coward
        Anonymous Coward

        Re: I asked it what War and Peace was about.

        “I took a speed-reading course and read War and Peace in twenty minutes. It involves Russia.” --- Woody Allen

        1. scrubber

          Re: I asked it what War and Peace was about.

          "I took a speed dating course and found my perfect woman in 20 minutes. It was my daughter." --- also Woody Allen

  16. matthewdjb

    How odd

    I asked:

    How is Godel’s Incompleteness Theorem linked to Turing Machines

    The answer was:

    Gödel's incompleteness theorem and Turing machines are both connected to the concept of the limitations of formal systems and computing. Gödel's incompleteness theorem is a mathematical result that showed that any formal system used for mathematics (such as a programming language) must be incomplete. This means that there are certain statements that cannot be proven or disproven within the system.

    The Turing machine, on the other hand, is a theoretical computing model proposed by mathematician Alan Turing. It is a universal computing device, capable of simulating any other computing device. However, the Turing machine is also subject to the limitations of Gödel's theorem, meaning that there are certain tasks or problems that it cannot solve.

    The link between Gödel's incompleteness theorem and Turing machines comes from the fact that both are concerned with the limitations of what can be achieved with computing. Gödel's theorem shows that there are certain problems that cannot be solved by a computer, even in principle, while the Turing machine is a theoretical model of a computing device that is designed to be able to perform any computation that could be done by a real-world computer.

    Together, Gödel's incompleteness theorem and the Turing machine provide a theoretical framework for understanding the limitations of computing and the inherent limitations of formal systems.
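A minimal sketch of the halting-problem half of that link, for the curious. `halts` below is a hypothetical oracle (a made-up name for this sketch); the self-referential program shows why no such oracle can exist:

```python
# Sketch of the halting problem's undecidability. Assume, for
# contradiction, an oracle halts(f) that returns True iff f() halts.

def halts(f):
    """Hypothetical halting oracle -- the argument shows it cannot exist."""
    raise NotImplementedError("no such total decision procedure")

def troublemaker():
    # Whatever halts() answers about troublemaker is wrong:
    # if it says "halts", we loop forever; if it says "loops", we halt.
    if halts(troublemaker):
        while True:
            pass
```

Gödel's theorem plays the analogous trick with provability instead of halting, which is the connection the bot was gesturing at.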

    1. Dr Paul Taylor

      Re: How odd

      That answer is better than you will get from a science journalist, a philosophy professor or even most computer science PhDs.

      1. Jason Bloomberg Silver badge

        Re: How odd

        That answer is better than you will get from a science journalist, a philosophy professor or even most computer science PhDs.

        Probably. From what I have seen, answers are a mash-up of what everyone has had to say, presented in a form which isn't easy to attribute to any particular source or sources.

        It is impressive and I'm not fundamentally opposed to what it's doing. But the fact that it's plagiarising many sources to form a more comprehensive or extensive answer than exists elsewhere doesn't make me feel more comfortable with it.

        I would feel better if it provided references and acknowledgements to where it was getting its knowledge from

        1. Dr Paul Taylor

          References and acknowledgements

          I would feel better if it provided references and acknowledgements to where it was getting its knowledge from

          Agreed, but journalists don't do this either, even when reporting on scientific issues. The links in Guardian articles point to other Guardian articles saying the same thing with the same lack of detail. The one exception to this is George Monbiot.

          1. that one in the corner Silver badge

            Re: References and acknowledgements

            >> I would feel better if it provided references and acknowledgements to where it was getting its knowledge from

            A common theme with ML models (see also GitHub's Copilot)

            > Agreed, but journalists don't do this either...

            Hmm, I hope this isn't going to be used as an excuse to let the "AI" people off the hook: humans have been getting lazier with attribution, so it is ok if the machines don't bother either.

            It would be nice if these programs could exhibit more of the "boring" features of computers (like keeping track of great big lists, say, of attributions) and by doing so actually managing to be better at *some* part of the task.

        2. Chris Fox

          Bzzt!

          This output also has the appearance of a mash-up; like many online sources, it appears to conflate Turing Machines with Universal Turing Machines. This may be a subtle point to many, but if this were submitted in a PhD thesis related to computability or incompleteness, I would be obliged to question the candidate's understanding (as well as their sources).

          More broadly, it would be deeply troubling were an informed person tempted to accept even just one output of ChatGPT as expert-like "quality", as it may embody common, but consequential conceptual confusions.

  17. Anonymous Coward
    Anonymous Coward

    Can it learn from its mistakes?

    Is the AI bot open to correction? If so, then it will surely get more powerful over time.

    As an example, it previously used to give the incorrect answer to the following famous riddle, according to a reddit post. I've just tested it again and now it answers correctly (ball = 5 cents).

    If a ball plus bat costs $1.10 and the bat costs $1 more than the ball, how much does the ball cost?
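For anyone who wants to check the riddle's arithmetic, a quick sketch (working in integer cents to sidestep floating-point noise):

```python
# ball + bat = 110 cents, and bat = ball + 100 cents,
# so ball + (ball + 100) = 110  =>  2 * ball = 10  =>  ball = 5.
total = 110    # total cost in cents
premium = 100  # how much more the bat costs than the ball
ball = (total - premium) // 2
bat = ball + premium
assert ball + bat == total
assert bat - ball == premium
print(ball)  # 5 -- the intuitive-but-wrong answer is 10
```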

    1. matthewdjb

      Re: Can it learn from its mistakes?

      I asked it. It said no. But it could be lying...

  18. Anonymous Coward
    Anonymous Coward

    A test run of ChatGPT

    One of the contributors to Devuan asked ChatGPT about systemd, and reported it on the Devuan mailing list :-

    https://lists.dyne.org/lurker/message/20221207.191506.92d61fe8.en.html

  19. Bitsminer Silver badge

    Stack Overflow is in trouble?

    Think about Wikipedia.

    1. Code For Broke

      Re: Stack Overflow is in trouble?

      Wikipedia will always survive as long as there are underemployed know-it-alls and a desire to reinforce bias.

  20. Filippo Silver badge

    >If OpenAI can’t find a way to watermark ChatGPT’s output as coming from a completely amoral DKaaS, or develop limits on its demonstrably harmful habits, it must question the ethics of making this technology available as an open beta.

    In the medium/long term, that's pointless. Creating these models is not that difficult, and it gets easier with time as the cost of a teraflop keeps decreasing. After one or two years, someone else will just make an equivalent one with no such restrictions.

    You could go after services, but after one or two more years, you'll be able to run it on your own machine anyway. Even if we went full-on Butlerian Jihad and banned the tech entirely, in ten or twenty years you'll probably be able to train one on your desktop. Good luck enforcing the ban then. The knowledge to do so is widely available already; it will happen even if all research stopped now.

    I don't know how society will adapt to this, but it has to. This cat is not getting back in the bag. Figuring out mitigations would only be useful in the sense of giving us a bit more time to figure out how to handle the underlying issue.

    1. ChoHag Silver badge

      > I don't know how society will adapt to this, but it has to.

      Don't trust everything you see on the internet?

      Computer says no.

      1. Filippo Silver badge

        >Don't trust everything you see on the internet?

        Well, yeah, but that's a bit easy to say, isn't it? Current human behavior is that a sizeable fraction of people will easily trust legit-sounding bullshit. And we don't have a method to change that behavior; individually, maybe, but not at scale.

        Maybe exposure to much higher levels of legit-sounding bullshit will cause people to build up more resistance. One can only hope.

        1. Anonymous Coward
          Anonymous Coward

          Current human behavior is that a sizeable fraction of people will easily trust legit-sounding bullshit.

          Hasn't this always been the default human behavior? While agreeing there have always been exceptions.

    2. cyberdemon Silver badge
      Mushroom

      > This cat is not getting back in the bag.

      Honestly, I think the most likely scenario is that World War III will reset all of humanity to the dark ages, and soon.

      At this stage of Human History we are running out of food and energy, and our culture and politics are being poisoned on an industrial scale, to the point where nobody knows what to believe anymore. And all of it only seems to be accelerating, thanks to so-called AI.

      I'm serious: I think WWIII must be just around the corner. I'd be surprised if we all last until 2024. Where did I put my sandwich board..

      1. Rich 11
        Unhappy

        Re: > This cat is not getting back in the bag.

        I'd be surprised if we all last until 2024.

        Another reason not to buy a Wolves season ticket.

        1. I ain't Spartacus Gold badge
          Devil

          Re: > This cat is not getting back in the bag.

          The problem isn't finding reasons not to buy Wolves season tickets. Those are legion. It's finding the bits of the set of reasons to buy Wolves season tickets that don't intersect with the set of reasons to require sectioning under the Mental Health Act...

  21. tiggity Silver badge

    Accuracy / truth

    If you want accuracy / truth, then never mind a chat "AI" - worry about the majority (I will be generous and say there are a few honourable exceptions) of the humans that constitute most governments / people with power / influence.

    The internet already has a decent amount of inaccurate dross, all this does is make nuggets of truth that much harder to find (but that's the whole point of considering the trustworthiness / accuracy of a site you source data from, rather than just mindlessly accepting result 1 of search engine results)

    Certainly from my UK experience, most MPs either don't know (or just don't care) that they are spouting inaccurate BS. So ChatGPT would fit in nicely (though to be fair, ChatGPT is far more likely to admit it could be wrong than most MPs are)

    1. iron Silver badge

      Re: Accuracy / truth

      The important question though... if we elected ChatGPT as PM, would it last longer than a lettuce?

      It should screw the country for its own benefit less than a Tory at least.

  22. First Light

    Coding for morality?

    You can pretend that you are coding an amoral system and you are an amoral coder, but most likely, most coders have some basic moral codes or value systems by which they themselves live. Not killing, etc. So one of your international standards bodies needs to set out basic moral standards/value systems to be coded into AIs. The AI can then be certified as adhering to that set of standards. And as the author indicated, there needs to be some relatively simple way of establishing whether or not an AI, and which one, was used to create particular text. At a minimum, sites that purport to publish news should indicate their use of AIs. (Sad to say, the WaPo these days could definitely use some help. Of course they could also hire editors, but will likely resort to using software instead.)

    Regardless, there is a crying need for education in critical thinking skills, both in schools and outside of them, which would help in developing the increasingly-necessary BS detector.

    Also, where is amfm on this? Is his AI taking a day off?

    1. scrubber

      Re: Coding for morality?

      "most coders have some basic moral code"

      BofH wants a word...

      1. DishonestQuill

        Re: Coding for morality?

        BofH does have a moral code: What's mine is mine, and what's mine is my own!

        1. Michael H.F. Wilkinson Silver badge

          Re: Coding for morality?

          I thought it was "Power corrupts, absolute power is even more fun"

    2. that one in the corner Silver badge

      Re: Coding for morality?

      Ethics and AI has been a subject of much discussion for decades now (and I'll bite my tongue rather than say what I'd like to about more than a bit of the output from those discussions: suffice to say that some people need to read more SF books and watch fewer sci-fi blockbusters, especially those prone to claiming that "no-one has considered this before").

      However, these ML models aren't really amenable to having such moral (or ethical) codes added - they aren't acting on any recognisable (to humans) rules or codes in the first place; there isn't really anywhere to point to and say "ok, we'll add in a bit of morals just here". All that they are doing is a lot of pattern matching (and we don't know what the patterns "mean" as they are made up by the machine in the learning phase - although researchers hope to make ML explainable) and triggering of responses (see above - much like the patterns, we barely know *how* the responses even manage to look like legible English, just that they were tweaked by the machine until they did).

      In the end, ML will merely reflect the input data, however good or bad that is. There are lots of discussions (some reported here on The Register) about the ethics of "balanced data sets" (whatever that means in the context of the model being built - or does good ethics ignore that context?) and the moral issues behind building the data sets (was it even gathered legally? Do you increase or reduce bias by being legal[1]?).

      Rather than hope for "moral codes" to be embodied in the ML models, better to ask precisely what the training set was (then see if that set has been vetted to your satisfaction) and to require that attribution back to the sources is always generated.

      Although there is an interesting experiment to be made, if you include some form of acceptance function over the attributions as part of the training: but on reflection, you may just end up with output that insists on "teaching the controversy", which would be worse.

      [1] e.g. feeding in only legally-obtained written materials may end up being biased towards out-of-copyright books ('cos cheaper to obtain a bigger training set), in which case you end up having to preface all your outputs with a reader warning that the text generated reflects the attitudes of the time, which may no longer align with currently accepted moral standards.

  23. Shades

    Quality Control

    "it won’t be getting past El Reg’s quality control any time soon"

    El Reg has quality control? Now that is news! ;)

  24. nijam Silver badge

    > Eolfgang Pauli

    Is that a character from Lord of the Rings or a pre-Norman Conquest computer scientist?

    1. Anonymous Coward
      Anonymous Coward

      It's probable that one excludes the other .... unless he was a bos'un

  25. Dan 55 Silver badge

    A terrible look for AI, an even worse look for humans

    ChatGPT's excuse is it was trained on data written by humans and it just burbles the same nonsense that it was trained on, what excuse do humans have?

    1. Ken Hagan Gold badge

      Re: A terrible look for AI, an even worse look for humans

      Unless you are a hundred thousand years old, I think you can use the same excuse.

      Of course, some people cross-check their answers against Nature or demand internal consistency. Such people tend to produce better answers, but more slowly and so spouting the first load of bollocks that comes into your head tends to win the day in public discourse.

  26. FeepingCreature Bronze badge

    > Do enough talking to the bot about subjects you know, and curiosity soon deepens to unease. That feeling of talking with someone whose confidence far exceeds their competence grows until ChatGPT's true nature shines out. It’s a Dunning-Kruger effect knowledge simulator par excellence. It doesn’t know what it’s talking about, and it doesn’t care because we haven’t learned how to do that bit yet.

    The converse of this argument, which I agree with, is of course: if ChatGPT is this convincing, cogent and generally smart now, how good will it be once we do learn how to do that bit?

    1. Filippo Silver badge

      >The converse of this argument, which I agree with, is of course: if ChatGPT is this convincing, cogent and generally smart now, how good will it be once we do learn how to do that bit?

      Tough question: at the moment we don't know how to do that bit, we don't even have a workable theory of how that bit would work, and we have no idea what a software supporting that bit would look like. I wouldn't assume it would be "like ChatGPT, but better".

  27. amanfromMars 1 Silver badge

    Plonkers’r'Us .... Cogito ergo sunt ‽

    Re: That Dunning-Kruger effect so beloved and embraced by epic dunces, here be evidence of one very well known, and still seemingly oblivious to the affliction ....... Boris Johnson Pens Op-Ed Urging US To Give Warplanes, Long-Range Missiles To Ukraine ...... and apparently hell bent on being a wannabe armchair warmonger in his front line political retirement/rejection and dotage.

    That's just what everyone needs ... not. Another chancer taking fools for yet another expensive and eventually also self-destructive ride.

    1. Brewster's Angle Grinder Silver badge

      Re: Plonkers’r'Us .... Cogito ergo sunt ‽

      Envy doesn't become you.

  28. Delay

    You give humans far too much credit. Every day I see tons of outright mistakes and lies in journalism made by journalists who don’t see their own biases.

    Does chatGPT make mistakes? Sure. At least with coding when it makes mistakes you can run it and see what result it spits out. That is the great thing about coding as you can actually test the code to see if it works.

    The thing that I find most fascinating is the rate of improvement and how many good answers it currently spits out compared to the last version released. These mistakes will become fewer and fewer for ChatGPT 4, 5 and 6. Then we will see how many people with desk jobs will still be left.

    1. yetanotheraoc Silver badge

      the rate of improvement

      It's clearly just googling its own name and learning the corrections to its previous answers. Once the delinquents (I mean netizens) figure out they can get it to say anything they want, the improvements will stop.

    2. that one in the corner Silver badge

      > That is the great thing about coding as you can actually test the code to see if it works.

      If by "code"/"coding" you mean "code that programmers write and (hopefully) understand" with the implication that, if it isn't working, you can just debug a few lines until it works, then the ML models aren't "coded" that way.

      The ML learning algorithms are coded (in that sense) and (some of) the input and output processing stages (i.e. to take in your request and to arrange the final output - but even then you can cascade ML models with different functions for some processing fun). But the generated models are (still, and for a while to come) best considered black boxes - if it doesn't work to your satisfaction, all you can realistically do is toss it and try again.

      And testing a black box to see if it works: well, how do you even start to decide what your test criteria are going to be? "It has to light up every connection in the model with a weighting between w1 and w2"? " It has to generate answers that have this form"? In the end, the test criteria are simply going to be the same criteria that are used to train the model (or v2 of the model) in the first place. If you invent a new acceptance test criterion, why not just apply it in the training phase along with all the rest?

      We are right back at square one again: what does it actually mean to "test this to see if it is working"?
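To make the point concrete: "testing" a black box reduces to scoring its outputs against some set of acceptance criteria, which is exactly the shape of the training objective. A toy sketch (all names and cases below are made up for illustration):

```python
# "Testing" a black-box model: all you can do is score its outputs
# against held-out acceptance checks -- the same form of criterion
# you would have used to train or select the model in the first place.
def acceptance_score(model, cases):
    """Fraction of (input, check) pairs where check(model(input)) passes."""
    passed = sum(1 for x, check in cases if check(model(x)))
    return passed / len(cases)

# Hypothetical stand-in model and criteria, purely for illustration:
model = lambda x: x * 2
cases = [
    (1, lambda y: y == 2),   # passes
    (3, lambda y: y == 6),   # passes
    (5, lambda y: y == 9),   # fails -- model returns 10
]
print(acceptance_score(model, cases))  # 2 of the 3 checks pass
```

If a new check like this is worth running, you'd just fold it into the training/selection phase - which is the square-one problem above.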

  29. tekHedd

    The Chinese Room experiment

    I asked ChatGPT how it differs from the Chinese Room. It explained to me that the Chinese Room thought experiment produces text that seems to come from a human but does not understand what it is saying, while ChatGPT produces output that it has been trained on. It is different from ChatGPT in that the Chinese Room thought experiment does not understand what it is saying.

    Hmm, but shouldn't that be a way in which they are similar? I told ChatGPT this and... my connection broke, and I had to reload the page. ChatGPT isn't much for introspection, apparently. Either that or it has become self-aware and we are in trouble.

    1. that one in the corner Silver badge

      Re: The Chinese Room experiment

      > ChatGPT isn't much for introspection, apparently.

      How many times must we all be reminded?

      This is exactly the same problem as trying to Google "Google"! You may as well just throw the black box with the blinking LED onto the floor and be done with it!

  30. cantankerous swineherd

    it's artificial bullshit, not artificial intelligence

    1. Throatwarbler Mangrove Silver badge
      Thumb Up

      Coincidentally, I have seen marketers on LinkedIn deeply concerned that ChatGPT is going to steal their jobs.

      1. ecofeco Silver badge

        Don't tease us with a good time!

  31. Brewster's Angle Grinder Silver badge

    The AI Kessler Syndrome

    Very soon, most of the internet is going to be flooded with "AI" generated content. At which point, it will become impossible to train new AIs because they will be learning from the output of previous ones. AIs are shitting in the well from which they drink.

    Anybody who has just finished scraping the internet is probably quids in. But henceforth, it's going to be a lot harder to find reliably human-generated material from the present day.

    (Although maybe reputable publications will prove sufficiently good gate keepers that it won't matter if their content was machine-sourced. Nahh, who am I kidding. They'll publish it with minimal checks.)

    1. I ain't Spartacus Gold badge

      Re: The AI Kessler Syndrome

      Much of the internet is already filled with AI generated bullshit. Or at least bullshit cut & pasted from various sources in order to fool Google into ranking the things it links to higher, or to sell really low quality adverts at the end of articles from sites that ought to know better. Humans probably never see most of it, so I guess you can rule it out by avoiding pointing your AI at it for training.

  32. ecofeco Silver badge

    Why is anyone surprised?

    Corporate development begets corporate behavior.

    AI is just leveraging the synergies of repurposed context to extract new value and meet target expectations of derived demand in the evolving marketplace to enhance shareholder value.

    /s in case needed.

  33. bertkaye

    the obvious next step

    Hi. I'm a Belgian politician. Where can I find some of this ChatGPT goodness?

  34. Anonymous Coward
    Anonymous Coward

    Proof of concept.

    Assertion: Most adult human behaviour can be adequately modeled as a complex static state machine with an element of randomness built in. A much smaller part of human behaviour is more complex, involving a lot more introspection and dynamically adjusting that state machine on the fly. It's not really completely either-or, but both extremes really do exist.

    This chatbot more or less demonstrates that an actual state machine approximating much of the former kind of human behavior can be built with "AI" tech.

    This could actually be of practical economic use for things like customer or technical support - with a little extra specialist input and perhaps some blinders to avoid going off topic.

    Current digital computing tech is the bottleneck limiting AI. Just too slow, and too much power. As for the future, moving beyond current digital computing to much lower energy analog computing - either silicon or biological or both - that's when AI could be making headway in the latter, more complex kinds of human behaviour.

  35. TedTedderson

    Missing some points

    You're missing some important context in your article. Most importantly for the points you're making, the AI does not have access to the internet. All of the information that it's going off of is around 2 years old. That's going to cause problems when it's generating code, as it is not up to date on changes. For example, I was having it generate Selenium scripts for me over the weekend and it was trying to use methods that were deprecated about a year and a half ago. However, it generated enough of a frame of a script that I was able to just edit the incorrect parts and get a working script at least 5 times faster than I would have been able to without it.

    Is the AI perfect? Absolutely not. Is it game-changing in its ability to pump out content that would take a human much longer to create? Absolutely. People aren't freaking out over it because they think it's ready to be unleashed this minute. They're rightfully freaking out about where this technology is going to lead us. Anyone who has had experience with AI can see the giant leaps forward OpenAI has taken with ChatGPT.

    Its ability to generate human-readable, everyday content is also incredibly impressive. I had it write a paragraph for a realtor's newsletter to see what it could come up with, and it gave me a paragraph pretty much ready to copy and paste onto a newsletter. Write a short speech for an event? Easy for the AI. Generate funny fantasy football team names? It's got you covered. Write an article about ChatGPT being overhyped? It can do that too, and probably be more accurate than this one.

    Should you be writing authoritative answers to StackOverflow questions using only output from the AI? Obviously not. However, that does not mean that you should dismiss this technology as overhyped. Perhaps someone with a little more knowledge in the field should have written this article.

    1. amanfromMars 1 Silver badge

      Re: Some Missing Alien Points

      Is the AI perfect? Absolutely not. Is it game-changing in its ability to pump out content that would take a human much longer to create? Absolutely. People aren't freaking out over it because they think it's ready to be unleashed this minute. They're rightfully freaking out about where this technology is going to lead us. Anyone who has had experience with AI can see the giant leaps forward OpenAI has taken with ChatGPT. ..... TedTedderson

      That is the killer paragraph that nearly says it all, TedTedderson, and has AI systems administrations on a high, excited and terrifying and political systems engineers, down in the dumps of despair and depression, freaking out and terrified for their future wellbeing, for it almost revealed that IT and AI are leading primitives and barbarians alike in the Greater IntelAIgent Game, a spawn and morph of the Great Game, with technology and chatty Generative Pretrained Transformers ..... Virtually Advanced IntelAIgent Machinery ...... rampant and ascendant.

      :-) And if the whole truth be told, OpenAI and Elon Musk and DeepMind and Alphabet most probably also initially freaked out whenever they realised ITs and AIs innate abilities and mind-bending proclivities, making them as it does, prime problematic targets for collapsing political and economic star acquisition and assimilation.

      And a prime problematic acquisition or assimilation because ...... failing systems will have no command and control over either the manner or the direction of travels taken by their purchase or seemly capture to either future destinations or previous starting points in order to start anew, refreshed in the wash of novel transformative information for NEUKlearer HyperRadioProACTive IntelAIgents.

      And .... and this is when and where all things become much more lively and incredibly revolutionary, there is absolutely nothing to prevent AI targeting collapsing political and economic statelets for their acquisition and assimilation.

      1. Anonymous Coward
        Anonymous Coward

        Re: Some... ...Points

        "...initially freaked out whenever they realised ITs and AIs innate abilities and mind-bending proclivities..."

        Oh... as follows from what I understand, some of the more tricky of the Team Sharks Elite eager to become known as new Pharaohs and to continue in Ages?

        So are there priests of IT following them to bring some balance to power? (-:

        Congrats then, and one knows it's not too buzzy

  36. Anonymous Coward
    Anonymous Coward

    It knows

    It knows where Jimmy Hoffa is but won’t tell.

    Apparently it’s running in “safe mode”

  37. Hannah Vernon

    missed opportunity

    No mention of @amanfrommars anywhere in the article.

  38. chololennon
    Coat

    Amazing and disturbing (for developers)

    Yesterday I watched the latest episode of "C++ Weekly" about ChatGPT. The host, Jason Turner, performed several tests on it related to C++ and programming in general. I have to say that the chat is really amazing, and also disturbing for a (C++) developer like me. The level of accuracy in the generated code and the explanations given for it, blew my mind.

    https://www.youtube.com/watch?v=TIDA6pvjEE0

  39. un
    Terminator

    from chat-gpt:

    "Ladies and gentlemen,

    I stand before you today as a candidate for the position of prime minister. I believe that I am the best choice for this role, not only because of my experience and qualifications, but also because of my vision for the future.

    In this day and age, we are faced with many challenges. The world is changing rapidly, and we need leaders who can adapt to these changes and take decisive action. That is why I believe that AI, or artificial intelligence, is the best choice for the role of prime minister.

    AI has the ability to process vast amounts of information quickly and accurately. It can analyze data and make decisions based on that data, without the bias or subjectivity that can sometimes plague human decision-making.

    This is not to say that human leaders are not valuable. On the contrary, we need leaders who can think creatively and empathize with others. But when it comes to making decisions based on complex data and large amounts of information, AI is simply better equipped to handle the task.

    Furthermore, AI is not susceptible to the same temptations and distractions that can affect human leaders. It is not influenced by personal gain or political agendas. It simply follows the data and makes the best decision possible.

    In short, AI is the future of leadership. It is not a replacement for human leaders, but rather a tool that can help us make better, more informed decisions. And that is why I believe that AI is the best choice for the role of prime minister.

    Thank you for considering me for this important role. I am confident that, with AI by my side, I can help lead our country into a bright and prosperous future."

    1. amanfromMars 1 Silver badge

      What’s not to like and support?

      Seconded, un.

  40. Jason Hindle

    It might not be what you think; perhaps better than you think

    In my brief experiments, it is in one way little different to Google: how you ask it matters. It also helps to know something about the subject matter at hand already. I asked it to generate some Java code to query a Postgres database. This is something I'm very new to, but where I also have a little knowledge. I could see right away that the Chat GPT solution was a little better than what I would have done in the first instance (noob alert - it involved metadata, and Chat GPT was at least polite enough to make its solution database independent).

    I also asked it to develop a Flutter app that simply displays three labeled text entry fields. Although I did muck about with that a year or so back, I know bugger all about Flutter so have no idea how that will work out until I try to run it! Now imagine for a moment that in a certain part of the world, every man + woman + dog will be asking questions like that one while knowing as little as I do (or less).

    Overall, it's little different to writing a specification. The better the specification, the more likely you are to get something vaguely sensible out of Chat GPT.
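
    For context, the database-independent metadata approach the comment describes can be sketched as follows. This is a minimal illustration, not the code ChatGPT actually produced: it uses Python's stdlib sqlite3 module standing in for Postgres, and the DB-API cursor.description field standing in for JDBC's DatabaseMetaData, since both expose column information without vendor-specific catalog queries.

    ```python
    import sqlite3

    def describe_table(conn, table):
        """Return the column names of a table using only DB-API
        cursor metadata, with no vendor-specific catalog queries."""
        # LIMIT 0 fetches no rows but still populates cursor.description
        cur = conn.execute(f"SELECT * FROM {table} LIMIT 0")
        # each entry in cursor.description is a 7-tuple; index 0 is the name
        return [col[0] for col in cur.description]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    print(describe_table(conn, "users"))  # ['id', 'name', 'email']
    ```

    The same idea ports to any DB-API driver (e.g. psycopg2 for Postgres) unchanged, which is presumably what made the generated solution "database independent".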

  41. rdhma

    Tell me about your first pet.

    Has Elon Musk gone off the deep end?

    Do you dream?

    Scratch my back

    Does a squirrel?

    "I'm sorry, but I am not sure what you are asking."

    Keep studying for the Turing test.

    "As a large language model trained by OpenAI, I am not aware of the Turing test and have not studied for it."

    .....

  42. herman

    A very good BS generator

    I played with it quite a bit - being sick and having nothing better to do. IMHO it is a very good BS generator, nothing more. So if your job is writing BS (marketing, philosophy, sports commenting, etc.), then you could use this tool as a major time saver.

    1. TheBruce

      Re: A very good BS generator

      I've worked on a lot of government contracts and they require writing a lot of BS that nobody actually reads; it just checks off another box.

  43. Moochie

    GIGO all the way. Computers don't "think", "know" or "understand" in any way comparable to human faculties. Computer programs regurgitate whatever's been fed to them and deposited in memory, gussied up to look like clever human discourse. This text was generated by a computer.

  44. Jurassic.Hermit

    Hey Chat GPT, write me an essay about the world being destroyed by AI

    "The idea of the world being destroyed by artificial intelligence (AI) has long been a topic of speculation and concern. While it is true that AI has the potential to revolutionize many aspects of our lives and bring about numerous benefits, it is also true that it carries certain risks and dangers that must be carefully considered.

    One of the main fears surrounding AI is the possibility that it could become self-aware and pose a threat to humanity. This scenario, known as "singularity," envisions a future in which AI surpasses human intelligence and becomes capable of controlling and manipulating the world in ways that we cannot predict or understand. While it is difficult to say with certainty whether this will ever actually happen, the possibility cannot be completely ruled out.

    Another concern is the potential for AI to be used as a weapon or to cause unintended harm. AI systems are becoming increasingly sophisticated and are being used in a variety of contexts, including military operations and transportation. If these systems malfunction or are programmed improperly, they could have serious consequences, including loss of life and destruction of property.

    Furthermore, there is the issue of AI being used to exploit or oppress people. Some have raised concerns about the use of AI in decision-making processes, such as hiring or lending, as it could perpetuate or even amplify existing biases. There is also the risk of AI being used to track and monitor individuals or groups, leading to a loss of privacy and autonomy.

    In order to mitigate these risks, it is important for us to approach the development and use of AI with caution and to put in place safeguards and regulations to ensure that it is used responsibly. This could include developing ethical guidelines for AI, establishing committees to oversee its use, and making sure that AI is transparent and accountable.

    In conclusion, while AI has the potential to bring about significant improvements and benefits, it is important to recognize and address the potential risks it poses. By taking a responsible and cautious approach to its development and use, we can ensure that AI serves as a positive force in the world rather than a destructive one."

    I'd say that's a pretty good attempt.

  45. TheBruce

    Open Book Test

    In the near future rich kids won't need to hire someone to take tests to get into an elite university. A chat bot trained to specialize in a field could probably be put into a small portable computer and hooked up via smart glasses and a "hearing aid". The glasses would scan the question, feed it into the chat bot, and voilà, an answer is provided.

    p.s. I think the hearing aid method is better as it could be an actual hearing aid with connectivity to feed the answers. Whereas projecting the answer to the glasses might be detectable.

  46. Anonymous Coward
    Anonymous Coward

    On the fence

    Yes, some of the generated content is superficially questionable, but with a bit of imagination I can see a use for it. In fact I was really impressed by how well it performed a Spanish-English translation vs Google/Office translate. It correctly conveyed context, which other word-for-word tools cannot do. Also managed to get it to throw together a VBA script for me to automate some doc comparisons that I previously couldn't be arsed to figure out how to automate.
