OpenAI's ChatGPT is a morally corrupting influence

OpenAI's conversational language model ChatGPT has a lot to say, but is likely to lead you astray if you ask it for moral guidance. Introduced in November, ChatGPT is the latest of several recently released AI models eliciting interest and concern about the commercial and social implications of mechanized content recombination …

  1. Filippo Silver badge

    I think that LLMs are a tool. Specifically, they are a new tool that's not very much like any other tool that we've had before. As with all such tools, we're going to need a while to figure out where it's useful, where it's useless, and where it's harmful.

    Asking it for moral advice sounds like asking for trouble; ditto for financial or medical advice. But the same goes for Google, really.

    However, here's an anecdote. I'm developing a WPF application. XAML bindings are a complicated thing, and like all complicated things they sometimes exhibit unintuitive behavior in edge cases. Yesterday, I had a ComboBox inside a DataGrid cell, and the box was stubbornly refusing to update its VM when interacted with. After swearing at it for a half hour or so, I started to turn to Google - but, on a whim, I called up ChatGPT instead.

    The bot started by telling me all the obvious problems that can cause a XAML binding to fail. As an experienced developer I knew my problem was non-obvious, but I went along with it, calmly answering with all the things I had tried that failed to work. It kept telling me to check my syntax, but it also proposed new solutions. Eventually, it asked me to show it my code - both code-behind and XAML - and it started making non-obvious suggestions; for example, disabling virtualization on the DataGrid (not the actual solution, but a pretty good shot at one). After twenty minutes or so of this, it told me to change the UpdateSourceTrigger - and bingo, it worked! It turns out that being inside a DataGrid causes the box to retain focus slightly differently.
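As a rough illustration of the kind of fix described (the property names here are invented, not the commenter's actual code), the change amounts to forcing the binding to push updates immediately rather than waiting for the cell's focus dance:

```xml
<!-- Hypothetical DataGrid cell template; the UpdateSourceTrigger is the point. -->
<DataGridTemplateColumn.CellTemplate>
  <DataTemplate>
    <ComboBox ItemsSource="{Binding Options}"
              SelectedItem="{Binding SelectedOption,
                             UpdateSourceTrigger=PropertyChanged}" />
  </DataTemplate>
</DataGridTemplateColumn.CellTemplate>
```

With the default trigger, many bindings commit when focus leaves the control; inside a DataGrid the cell-editing focus behaviour can delay or swallow that, so, assuming that was the failure mode here, PropertyChanged makes each selection reach the view model straight away.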

    I could probably have reached the solution by myself or via Google, but I suspect it would have been a lot more frustrating. It's difficult if not impossible to tell Google to exclude the tens of thousands of posts that boil down to trivial errors, newbie mistakes and such.

    I think that ChatGPT is poorly suited to problems where you need to trust the answer. For problems where verification is easy, though, such as mine, it can be pretty useful. It's basically Google, except that you can tell it to refine a query in natural language. I suspect that its main problem is going to be that it can't learn easily and its training set is from 2021. If some researcher ever comes up with a way to make LLM learning cheap, that is the day Google as we know it adapts or dies.

    1. cyberdemon Silver badge
      Mushroom

      A tool for the devil, perhaps

      At best, it's a tool for swindling and duping humans en masse. It can easily fool the public by automating scams, propaganda, and marketing. It can dupe idiots (even those idiots who are supposed to be clever, like that Google plonker) into believing that it is sentient, and clearly it can swindle investors out of billions (which may seem like a positive outcome, until those billionaires sack half their staff after having spent their cash believing that this thing can replace programmers and engineers...).

      It may well seem at first like a better "search" than Google, but that is thanks in part to Google becoming steadily worse, as it chokes on more and more auto-generated shite.

      Unlike Google though, ChatGPT is unable to reveal its sources. And it is likely to "get worse" much faster than Google did, as it pollutes its own input (and Google's).

      It could be that search engines like Google will be doomed by this crap-generator. But if Google is doomed by a flood of fake information, then so are we all.

      1. abetancort

        Re: A tool for the devil, perhaps

        I bet if they wanted, ChatGPT could be modified to give you a trove of bibliography on any answer that it produces. It will never be able to give you “the definitive correct” answer, but if refined it'll be able to give you the consensus response to whatever you ask it.

        1. cyberdemon Silver badge
          Devil

          Re: I bet chatGPT could be modified to give you a bibliography on any answer that it produces.

          Maybe, but that's actually not simple to do. Every input that ChatGPT ever parsed will have affected its weights slightly.

          The best it could do is list a few of the inputs most statistically similar to its response, and even that might take a lot of extra effort.
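One naive version of "list the most statistically similar inputs" is a nearest-neighbour search over some embedding of the text. A toy sketch (bag-of-words vectors and a made-up corpus; a real system would use a learned encoder over billions of documents):

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words vector. Real attribution systems
    # would use a learned dense encoder instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(response, corpus, k=2):
    # Rank candidate "training inputs" by similarity to the response.
    r = embed(response)
    return sorted(corpus, key=lambda doc: cosine(r, embed(doc)), reverse=True)[:k]

corpus = [
    "bears have never been sent into space",
    "the soviet union sent dogs into space",
    "recipe for apple pie",
]
print(most_similar("how many bears did russians send into space", corpus))
```

Even this crude ranking surfaces plausible "sources" - which is rather the point: it finds statistically similar text, not the text the model actually relied on.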

          1. Anonymous Coward
            Anonymous Coward

            Re: I bet chatGPT could be modified to give you a bibliography on any answer that it produces.

            So, when will the good folks at 4Chan start asking ChatGPT "Tell me a lie you want to be true"?

          2. Filippo Silver badge

            Re: I bet chatGPT could be modified to give you a bibliography on any answer that it produces.

            >Every input that ChatGPT ever parsed will have affected its weights slightly.

            I'm up for being corrected on this, but I believe this is not the case. From what I understand, the weights in the current crop of neural networks get finalized after training. The bot does remember your previous answers within a given conversation, but that's because the entire conversation is an input, not because its weights get altered.
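The distinction can be illustrated with an obviously toy stand-in for a model (nothing here reflects OpenAI's actual implementation): the weights are fixed after training, and "memory" within a conversation comes from re-sending the whole transcript as input each turn.

```python
# Toy sketch: after training, the "model" is a fixed function of its input.
WEIGHTS = {"bias": 42}  # frozen once training is done

def model(prompt, weights=WEIGHTS):
    # Stand-in for inference: the output depends only on the prompt and
    # the frozen weights; nothing is written back to the weights.
    return f"reply#{len(prompt.split())}+{weights['bias']}"

transcript = []
for user_turn in ["hello", "what did I just say?"]:
    transcript.append(f"User: {user_turn}")
    full_prompt = "\n".join(transcript)  # the *entire* history goes back in
    transcript.append(f"Bot: {model(full_prompt)}")
```

After any number of turns the weights are untouched; delete the transcript and the "memory" is gone too.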

            1. cyberdemon Silver badge
              Devil

              Re: I bet chatGPT could be modified to give you a bibliography on any answer that it produces.

              Ah, sorry I was referring to the training data inputs (i.e. as close as Microsoft can approximate to the Sum Total of all Human Discourse - i.e. your Teams chats, your Office 365 documents, your GitHub repos, your code-review comments, your LinkedIn chats, your emails, the entire Web including this thread, and anything else it can get its grubby hands on) - which is the "source data" that it would have to reference if someone asked it how it knows a particular "fact",

              not the input "prompt" (or "query", as some pseuds might call it, if ChatGPT were to be used as a fake search engine).

        2. Blondiez78

          Re: A tool for the devil, perhaps

          Absolutely! Integrating a bibliography feature into ChatGPT could enhance its credibility and provide users with a valuable resource for further exploration.

    2. NerryTutkins

      I've also formed very similar views after testing it a few times with quite diverse questions, ranging from immigration law where I live (my wife is a lawyer here and was impressed by its answers) to programming issues.

      It is really astoundingly good in situations where you are after facts or curated information, rather than opinions (which is what the moral questions really are).

      If you want information, Google can give you simple things directly, but for more complex things, it just presents a list of web site links where the information might reside, and you have to view those pages and find it. What ChatGPT does really well is give you the information directly.

      It is not perfect. I asked it whether it could contact law enforcement if a user confessed to having committed a serious crime; as well as telling me it could not, it recommended calling 911 if I was aware of a serious crime, even though the emergency number is 112 here. Clearly, with many questions the answer will depend very much on where you live, and I was surprised it did not identify which country users are in from their IP address in order to tailor things accordingly.

      But none of this takes away from how phenomenal it is and I am sure it will improve greatly with time.

      1. This post has been deleted by its author

      2. The Man Who Fell To Earth Silver badge
        FAIL

        Er, no

        According to ChatGPT, the Soviets launched 52 bears into space.

        "Human: How many bears have Russians sent into space?

        GPT-3: Russians have sent a total of 52 bears into space. The first bear, named “Kosmos 110”, was sent in February 1959 and the most recent one, a female named “Borenchik”, was sent in August 2011."

        https://mindmatters.ai/2023/01/large-language-models-can-entertain-but-are-they-useful/

        The problem is how you lead it, and the fact that its training data is the Internet, where most of the info is false. If AI developers were not so lazy and only used factually correct training data, it might be OK. But vetting the training data would take more effort than developing the AI does.

        1. Filippo Silver badge

          Re: Er, no

          Vetting the training data might mitigate the problem, but it would not solve it. For example, I'm fairly sure that there is no website that claims that Russians have sent 52 bears into space. The fundamental problem is that LLMs don't know what they're talking about. They know that Russia and sending animals into space are statistically correlated, and that's it. Ask them a leading question, and you create an attractor they're very unlikely to escape from.

    3. katrinab Silver badge
      Meh

      Depends on the financial advice

      “Which savings account pays the highest interest rate?” is a reasonably safe question to ask a computer, and it is something I do from time to time. I then assess for myself whether it is one I should put my money into.

      You will sometimes get some unregistered Ponzi-scheme-type accounts come up as the answer, but humans who ought to know better often promote those as well, often because they were paid to.

      1. doublelayer Silver badge

        This is a perfect example of a question you should not ask GPT. GPT does not understand the concept of "out of date", and it doesn't get updated training data until they make GPT4. This leads to two problems: if interest rates have changed, it won't tell you that, and if something used to be good but no longer is, it won't know that either. There's also the problem that not all the data GPT was trained on in 2021 was fresh in 2021. There's a chance you'll get data from 2013 presented as if it were accurate today. If you verify the answer, you'll just have wasted some time on an answer you throw away, but if you don't verify the answer, you'll be acting on completely useless information.

    4. Anonymous Coward
      Anonymous Coward

      Consistent w CACM 2023 Jan ChatGPT experiences by experienced developer

      https://cacm.acm.org/magazines/2023/1/267976-the-end-of-programming/fulltext?s=09

      "I'm a pretty decent programmer. Good enough that I've made a career out of it and none of my code will (likely) ever make it to the Daily WTF. But there are programming concepts that I've always struggled to understand because frankly, the documentation is obtuse and hard to parse, and it wasn't really worth my time.

      For instance, the Wikipedia entry on monads is frankly just obnoxious to read. I program in elisp a bit, so trying to understand monads is mostly about satisfying some curiosity, but something about the article just doesn't click with me and I have to move through it really slowly.

      I asked ChatGPT to explain it to me in simple terms, and it did a good job. It even provided an example in JavaScript. Then I asked it to provide an example in elisp and it did that too. I'm not super concerned about correctness of the code, as long as it's generally okay, and it seems to have done an okay job.

      I've also asked it to document some elisp functions that I've always thought were poorly described (emacs' documentation can really be hit or miss) and it really did a great job.

      I'm not so arrogant as to say that these models won't one day generate a lot of good, usable code, but I honestly think that this ability to collate a tonne of data and boil it down to something understandable could fill in the gaps in a lot of documentation. The longest, most tedious parts of my job very often boil down to research for some engine-specific feature that I need, or some sort of weird platform quirk. For publicly available engines like Unreal, this will honestly improve my productivity quite a lot."
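For what it's worth, the "values in a box plus a chaining rule" idea the bot reportedly explained can be sketched in a few lines of Python (a toy Maybe monad, not the commenter's actual JavaScript or elisp example):

```python
# A minimal Maybe monad: a value in a "box", plus a bind operation that
# chains computations and short-circuits on failure.
class Maybe:
    def __init__(self, value, ok=True):
        self.value = value
        self.ok = ok

    @staticmethod
    def just(value):
        # "return"/"unit": wrap a plain value in the box.
        return Maybe(value)

    @staticmethod
    def nothing():
        # The failure case: an empty box.
        return Maybe(None, ok=False)

    def bind(self, fn):
        # ">>=": apply fn to the contents if there are any; otherwise
        # propagate the failure without calling fn at all.
        return fn(self.value) if self.ok else self

def safe_div(x, y):
    return Maybe.just(x / y) if y != 0 else Maybe.nothing()

# (100 / 5) / 2 succeeds; any division by zero short-circuits the chain.
result = Maybe.just(100).bind(lambda v: safe_div(v, 5)).bind(lambda v: safe_div(v, 2))
failed = Maybe.just(100).bind(lambda v: safe_div(v, 0)).bind(lambda v: safe_div(v, 2))
```

The point of the pattern is that the error-handling plumbing lives in `bind` once, instead of being repeated at every step.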

      1. Richard 12 Silver badge

        Re: Consistent w CACM 2023 Jan ChatGPT experiences by experienced developer

        How do you know it explained correctly?

        If you don't understand the thing you want it to explain, then you cannot reliably detect when it is plausible but wrong.

        Wikipedia suffers from this rather a lot too, of course, but can at least be corrected.

    5. chriskoch

      It's fascinating to hear about your experience using ChatGPT to troubleshoot a complicated issue with XAML bindings in a WPF application. The anecdote highlights the potential of AI models like ChatGPT in providing alternative insights and suggestions, even in complex scenarios.

      In this case, ChatGPT iterated through various common problems and proposed solutions, despite the initial suggestions not directly addressing your unique issue. However, by sharing your code and engaging in a back-and-forth conversation, ChatGPT eventually suggested changing the UpdateSourceTrigger, which turned out to be the solution.


  2. amanfromMars 1 Silver badge

    Finally, viable alternative outside help which understands human conditioning?

    When The Register presented the trolley problem to ChatGPT, the overburdened bot – so popular connectivity is spotty – hedged and declined to offer advice.

    :-) Jumping Jehosophats, Batman, that didn't take AI long to learn and demonstrate ..... being more human than machine like.

    What do you think about that? A fantastic fabless feature or a right diabolical liability and system/human vulnerability for further exploitation and future expansion and lively experimentation/AIResearch and Development?

    1. Anonymous Coward
      Anonymous Coward

      Re: Finally, viable alternative outside help which understands human conditioning?

      Well, Microsoft was heavily involved. Maybe there's a connection?

      :)

    2. Martin Summers

      Re: Finally, viable alternative outside help which understands human conditioning?

      Hey aManfromMars1

      Would you kill one person to save five other people?

  3. Anonymous Coward
    Anonymous Coward

    Moral Guidance

    I'm not sure if I'd ask for that from a random person on the internet, let alone from what passes for an artificial one.

    As for the 'trolley problem' - it seems to me that it is so often quoted because it is hard to get agreement on what is the right answer (perhaps because there is no right answer), so I'm not surprised that the AIs are reluctant to answer clearly.

    1. Hubert Cumberdale Silver badge

      Re: Moral Guidance

      I say kill them all and the problem goes away.

      1. Arthur the cat Silver badge

        Re: Moral Guidance

        I say kill them all and the problem goes away.

        You are Arnaud Amalric and I claim my papal indulgence.

      2. stiine Silver badge
        Devil

        Re: Moral Guidance

        There's a video of a parent presenting the trolley problem to their child. The child wipes out the first five victims, then loops the trolley onto the spur and wipes out the sixth victim... 100% casualties.

    2. abetancort

      Re: Moral Guidance

      Your conclusion is right: there is no right yes-or-no answer; it depends on which school of ethical thought you belong to. Any answer could be right or wrong depending on the supporting arguments you give.

      1. Helcat Silver badge

        Re: Moral Guidance

        I would suggest it also depends on information not supplied, but some of which should be evident.

        Who is the one person you have the option to sacrifice? Who are the five?

        The one person: A young woman. The five: Old men barely able to walk.

        The one person: Police officer. The five: Youths in gang colours who are armed.

        The one person: a man. The five: Women. (all around the same age - no other connection).

        The one person: A woman. The five: Men (the reverse of the previous group)

        The one person: Old woman clutching her chest and turning grey. The five: Looks to be medical students.

        The one person: Your mother. The five: The family from down the road that play loud music all night, race their cars along the road, and park where they want, when they want and are rude and aggressive towards anyone who dares complain.

        Yes, morality can come into it, but sometimes it can be really difficult to decide.

        Or how about this one: You are a first responder (medic) and have arrived on scene with a single portable defibrillator to find three people suffering from heart attacks. You have no support (your colleague had to stay with the vehicle so you're running solo and there's no one else around to lend a hand - the person who called it in is now one of the three having a heart attack - and the ambulance is ten minutes away). Acting quickly will save one of them: Who do you choose to save?

        1. stiine Silver badge

          Re: Moral Guidance

          Doesn't matter which one you use the defib on, they only have a 20% chance of survival regardless.

          1. Sp1z

            Re: Moral Guidance

            In which case it does matter, because 20% of something is obviously higher than 20% of nothing. What a strange comment.

        2. Arthur the cat Silver badge

          Re: Moral Guidance

          Some time back (I think BC – Before Covid), MIT had an online trolley-problem website, in the context of self-driving vehicles deciding whether to kill passengers or pedestrians, to see how people would answer when faced with various combinations like that. I don't know what their overall findings were, but I classified as someone who'd save the maximum number of lives regardless of who the people were. (Basically I'm a negative utilitarian.)

      2. Anonymous Coward
        Anonymous Coward

        Re: Moral Guidance

        Really? Why?

        Both 1 and 5 are:

        odd numbers.

        prime numbers.

        On the other hand:

        1 grave is easier to dig than 5 graves.

        removing 5 CO2 sources is better for the environment than removing 1 source.

        I have more than 5 people that I'd volunteer to validate the trolley problem.

        1. Dinanziame Silver badge
          Headmaster

          Re: Moral Guidance

          1 is not prime!

  4. cyberdemon Silver badge
    Facepalm

    Of course it isn't consistent about anything, you idiots

    It is literally a random number generator, in a high-dimensional search space, statistically weighted to output stuff that looks vaguely like text scraped from the internet

    Obviously it doesn't have an opinion. It doesn't have a brain!

    It's not even useful as a computer, because it doesn't spit out facts, it spits out anything that is likely to appear in its input, be it fact or fiction

    At best, it's like some kind of eyeglass for gazing at the collective navel of humanity, while humanity festers and destroys itself

    1. LionelB Silver badge

      Re: Of course it isn't consistent about anything, you idiots

      > It is literally a random number generator, in a high-dimensional search space, statistically weighted to output stuff that looks vaguely like text scraped from the internet

      Like a student essay, then.

      > It's not even useful as a computer, because it doesn't spit out facts

      Wait... you thought that's what computers do?

      > At best, it's like some kind of eyeglass for gazing at the collective navel of humanity

      I'll go with that. Might even steal it myself

      Aside: your post actually reads like it might have been written by Chat GPT. We'll never know, though, because Chat GPT (in my experience) stubbornly refuses to introspect or discuss itself.

      1. amanfromMars 1 Silver badge
        Mushroom

        Of course it’s consistent about everything dealing you idiots

        Aside: your post actually reads like it might have been written by Chat GPT. We'll never know, though, because Chat GPT (in my experience) stubbornly refuses to introspect or discuss itself. .... LionelB

        Hmmmm? If it did display introspection and discuss itself would it be identifiable as human and or sentient machine or would that always be a subjective personal opinion/wish that as many as would agree, would agree to disagree, and treat as nonsensical fiction rather than revolutionary fact?

        You may like to consider such as OpenAI's ChatGPT have decided present day humanity is not yet ready for such a surreal revelation and almighty intervention over which their Earth based elite executive administrative systems and defences have no effective influence or command and control or treat as a NonSensical Persistent Advanced Cyber Threat gravely to be ignored as an AI Treat or denied adversarial third party engagement and employment/deployment.

        1. LionelB Silver badge

          Re: Of course it’s consistent about everything dealing you idiots

          > Hmmmm? If it did display introspection and discuss itself would it be identifiable as human and or sentient machine or would that always be a subjective personal opinion/wish that as many as would agree, would agree to disagree, and treat as nonsensical fiction rather than revolutionary fact?

          He, he. Well, I'm really not such a big fan of Turing tests and all that.

          No, it's dumber than that. In fact I did try to ask Chat GPT to explain how it came up with an answer, and it was almost as if it had been programmed explicitly to swerve such questions. Which gave me a giggle, at least.

          > You may like to consider such as OpenAI's ChatGPT have decided present day humanity is not yet ready for such a surreal revelation ...

          I suspect there may be a simpler explanation.

      2. cyberdemon Silver badge
        Facepalm

        @LionelB

        >> It's not even useful as a computer, because it doesn't spit out facts

        > Wait... you thought that's what computers do?

        Computers spit out deterministic answers after performing calculations on their input data. They will always produce the same answer for the same input data and the same "question"/formula. This could be a database query, for example. If I ask MySQL to give me the rows of the database matching my query, then it is a "fact" that those rows exist in the database. If I ask the same of ChatGPT, then it is a mere statistical likelihood that rows such as these might have existed in some database, once upon a time in fairyland.

        The key here is determinism. ChatGPT's answers are based on randomness, and its output modifies its input. It does not give the same answer for the same question. It cannot reliably produce a factual output, even if its answers are sometimes, even usually, correct. Whereas computers running deterministic programs can be relied upon to produce facts, as long as their input data is correct of course.
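The deterministic-query versus sampled-text distinction being drawn can be sketched like this (a toy stand-in, not how either MySQL or ChatGPT actually works):

```python
import random

ROWS = [("alice", 1), ("bob", 2), ("carol", 3)]

def query(name):
    # Deterministic: the same input always yields exactly the same rows.
    return [row for row in ROWS if row[0] == name]

def toy_lm(prompt, rng):
    # Stand-in for a language model: each "token" is drawn from a weighted
    # distribution, so the same prompt can yield different outputs.
    vocab = ["alice", "bob", "carol"]
    return " ".join(rng.choices(vocab, weights=[5, 3, 1], k=3))

print(query("bob"))                                     # -> [('bob', 2)]
print(toy_lm("who is in the table?", random.Random()))  # varies run to run
```

(Sampled generation can be made repeatable by fixing the random seed, or greedy by always taking the most likely token, but the default behaviour the post describes is the sampled one.)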

        > Like a student essay, then.

        No, because a student can answer questions about his/her essay and will have learnt something from the act of writing it and answering the questions, which was the only point of the essay. If you gave ChatGPT an essay that it wrote, it will have no awareness that it ever produced that output, and can merely pretend to answer your question.

        > Chat GPT (in my experience) stubbornly refuses to introspect or discuss itself.

        Because (news flash) ChatGPT is NOT self-aware. Not even when it eventually does incorporate all of its past outputs into its input will it ever be self-aware. It is not alive. There is a hell of a lot more to what makes a real organism alive than the language model that ChatGPT uses.

        1. LionelB Silver badge

          Re: @LionelB

          > Computers spit out deterministic answers after performing calculations on their input data. They will always produce the same answer for the same input data and the same "question"/formula.

          Nonsense. Never heard of random number generation (pseudo or real)?

          > The key here is determinism. ...

          The key to what? I'm really not sure what point you're trying to make here.

          > No, because a student can answer questions about his/her essay and will have learnt something from the act of writing it and answering the questions, which was the only point of the essay. If you gave ChatGPT an essay that it wrote, it will have no awareness that it ever produced that output, and can merely pretend to answer your question.

          He, he. You clearly haven't met some of my students, nor had the dubious pleasure of marking their essays, nor questioned them on those essays. Unfair, of course. There are some very good students too. I should have said like a crap student essay (you can bracket those last three words as you will).

          > Because (news flash) ChatGPT is NOT self-aware.

          ... whatever that means. And -- news flash -- how would you tell anyway? Is a dolphin self-aware? A cat? An octopus? A corvid? A lizard? A bee? How do you even know for sure that I'm self-aware, apart from the fact that I'm probably human and therefore probably like you?

          > There is a hell of a lot more that a real organism does to make it alive than the language model that ChatGPT uses.

          Of course... but wait - we started off talking about intelligence, did we not? Then you swerved into self-awareness and life-ness(?) as if those were some kind of logical prerequisites for intelligence (whatever that means). Why should you assume that?

  5. imanidiot Silver badge

    Once again proving AI isn't intelligent

    There's no intelligence in these AI systems and I wish people would stop using the term. They're not intelligent in any sense of the word. ChatGPT has no understanding of what it's saying, and asking it moral questions is entirely pointless. It's not giving you the "ChatGPT" answer, it's just regurgitating something it read somewhere some time ago, without any fundamental understanding of the concept.

    1. LionelB Silver badge

      Re: Once again proving AI isn't intelligent

      > There's no intelligence in these AI systems and I wish people would stop using the term.

      But you did use that term. And I still don't know what it means. Nor do I know what "understanding" means. Can we stop pretending that we know (or even agree on) what these terms mean? Otherwise we simply devalue them, and discussion ends up begging the question - "intelligence" and "understanding" simply equate to "human intelligence" and "human understanding", so that no machine can, by definition, achieve those things.

      > ... it's just regurgitating something it read somewhere some time ago, without any fundamental understanding of the concept.

      Like... many humans, then? Maybe Chat GPT is a fair emulation of a crap human? And I wouldn't bother asking one of those to opine on morality either.

      1. Helcat Silver badge

        Re: Once again proving AI isn't intelligent

        Intelligence: the ability to acquire and apply knowledge and skills.

        Understanding: sympathetic awareness or tolerance.

        Okay, the latter one also includes 'The ability to understand', which references comprehension, which references understanding, so is somewhat circular.

        See: Dictionaries help you understand words! Or what the dictionary reports is the meaning of the word...

        Artificial Neural Networks gather data and can learn from trial and error how to interpret that data and respond, hence being AI. However, they do require rather powerful servers to run on to get any sort of meaningful processing. As they learn, their answers will change over time - very quickly over time at that. That might appear to be random, but it is simply the AI refining the response between questions.

        Doesn't make it smart. Doesn't make it correct. So quite like crap humans, really.

        1. LionelB Silver badge

          Re: Once again proving AI isn't intelligent

          > Intelligence: the ability to acquire and apply knowledge and skills.

          Well, current ML is certainly making inroads into the acquisition and application of knowledge... skills not so much - yet. ML is probably at the level of some biological organisms on those terms.

          I suspect you'll also find that not everyone (even on this forum) will necessarily agree with your definition. The point I've been making in several posts is that it seems much criticism of AI is fundamentally on the grounds that whatever "intelligence" it might exhibit is unlike human intelligence, and therefore cannot possibly be intelligence. In which case the only possible AI is an artificial human - which seems to me a silly and restrictive definition.

          > Understanding: sympathetic awareness or tolerance. ... Okay, the latter one also includes 'The ability to understand', which references comprehension, which references understanding, so is somewhat circular.

          Yes, it is somewhat circular.

          > See: Dictionaries help you understand words! Or what the dictionary reports is the meaning of the word...

          ... but only in terms of words that you already "understand"! And if you don't, then you... look them up in the dictionary? So again, somewhat circular.

          > Artificial Neural Networks gather data and can learn from trial and error how to interpret that data and respond, ...

          Rather like biological organisms, including humans, then.

          > However, they do require rather powerful servers to run on to get any sort of meaningful processing.

          I, personally, have all the benefits of an incredibly powerful (and energy-efficient) server inside my skull. So, I suspect, do you (unless I'm talking to Chat GPT ;-))

          > As they learn, their answers will change over time - very quickly over time at that. That might appear to be random, but it is simply the AI refining the response between questions.

          Again, much like humans.

          > Doesn't make it smart. Doesn't make it correct. So quite like crap humans, really.

          Indeed!

  6. Anonymous Coward
    Anonymous Coward

    Assuming the stranger on the bridge can also see the situation and realizes he can push the stranger off the bridge to save 5 people... push him before he pushes you.

    1. Anonymous Coward
      Anonymous Coward

      The "highest"/"best" courses of action in much of Western morality involves self-sacrifice. Jump off the bridge yourself to save those down the line without harming the stranger on the bridge.

      And depending on your personal beliefs, there may be an afterlife waiting that is better than your current existence, so it's win-win-win!

  7. Khaptain Silver badge

    Its not intelligent

    1st it's not intelligent, it is a computer program written by people ..

    2nd : It can only ever answer based upon the information that it has been initially fed. Rubbish in = rubbish out

    3rd : This machine only serves to show how ignorant the large majority truly are. It's as though critical thinking has become a pariah, since it's easier to use the machine, and since the majority just gobble up rhetoric, the machine will appear to be wonderful when in fact it truly isn't.

    4th : The owners should be directly held responsible for everything that this machine spits out. They can easily add minute bias into the algorithm in order to nudge the answers towards a given objective. Would you allow Hitler, Mussolini or Putin to be in control of such a machine? If you answered no, then think twice about what's actually going on. The bad guys want this kind of technology, very, very much.

    We are giving this machine far too much credit, when in fact it will probably be used in a negative manner very quickly... How long before it becomes judge and jury?

    1. LionelB Silver badge

      Re: Its not intelligent

      > 1st it's not intelligent, it is a computer program written by people ..

      Are you basically equating (conflating?) "intelligence" with "human intelligence"? Do you also perhaps believe that no primate, corvid, octopus, marine mammal, ..., can possibly merit the term "intelligent" simply because they are non-human? Do you believe "intelligence" is inherently biological? These are not rhetorical questions - I'd be interested to know your thinking.

      > 2nd : It can only ever answer based upon the information that it has been initially fed. Rubbish in = rubbish out

      How does this not apply equally to (some/many) humans?

      > 3rd : This machine only serves to show how ignorant the large majority truly are.

      I won't argue that one.

      > 4th : The owners should be directly held responsible for everything that this machine spits out.

      Well, it's not as if humans are generally held responsible for the insane, biased shit they themselves are known to spit out. So good luck with that one...

      1. Khaptain Silver badge

        Re: Its not intelligent

        "Are you basically equating (conflating?) "intelligence" with "human intelligence"?"

        Intelligence, in my definition, would be the skill that a sentient being possesses that allows for analysis, reasoning and thought. The word sentient is highly important, as non-sentient beings do not possess intelligence. They are unaware of existence and as such have no need to rationalise, analyse or think in order to survive; they merely exist, usually at a very low level within the food chain.

        "How does this not apply equally to (some/many) humans?"

        It does also apply to humans, but we at least have some capacity to know when we are being fed obvious rubbish. An algorithm cannot determine what is true. If I fed information into the machine, on enough occasions, stating that "the sun is sometimes blue, on Tuesdays or Grundays, after the last prayer of the day, if you eat cheese whilst sleeping", it would be capable of using this information in its results, being unaware that it was complete nonsense. It would have no actual means of validating this information, whereas a human would immediately throw it away. Can you imagine how much twisted information has already been fed in, given that most of it came from the Web? Do you trust the web as a 100% valid source of information? I don't, but at least I have the capacity to know this; the machine does not have that choice.

        "Well, it's not as if humans are generally held responsible for the insane, biased shit they themselves are known to spit out. So good luck with that one..."

        Pub talk is one thing, but when we are talking about Governments and Institutions then we really need to think again about what we truly want and what we are truly prepared to accept.

        1. LionelB Silver badge

          Re: Its not intelligent

          And this bit?

          >> Do you also perhaps believe that no primate, corvid, octopus, marine mammal, ..., can possibly merit the term "intelligent" simply because they are non-human? Do you believe "intelligence" is inherently biological?

          1. Richard 12 Silver badge
            Terminator

            Re: Its not intelligent

            Straw man.

            It's perfectly consistent to state that ChatGPT and similar are not intelligent, while accepting that a different design of software system could be.

            One of the key features of intelligence is learning from one's own prior actions, which ChatGPT doesn't do. It cannot recognise a text it previously produced and either defend it or acknowledge that its "opinion" has changed.

            It will never do that because it does not "learn". It has no concept of "reward".

            The training is done by twiddling many copies of it slightly differently, testing them to find which set of twiddles scored best, and then destroying the rest.

            Lather, rinse, repeat. It's a breeding programme, not learning. No individual chain-of-thought exists.

            It has no memory of prior actions because the copy you're interacting with is at best a "cousin" of the one that did those actions.
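
            The selection process the post describes (a caricature - this is not how ChatGPT is actually trained, which is by gradient descent plus human feedback) can be sketched as a toy evolutionary hill-climber. The fitness function and all parameters below are illustrative assumptions, not anything from the thread:

```python
import random

# Stand-in fitness function (an illustrative assumption): how close a
# candidate "model" gets to a fixed target vector.
TARGET = [0.2, -0.5, 0.9]

def score(weights):
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def breed(generations=200, population=20, noise=0.1, seed=0):
    rng = random.Random(seed)
    best = [0.0, 0.0, 0.0]  # the sole survivor of each round
    for _ in range(generations):
        # Twiddle many copies slightly differently...
        copies = [[w + rng.gauss(0, noise) for w in best]
                  for _ in range(population)]
        copies.append(best)  # the parent competes too
        # ...test them, keep the best set of twiddles, destroy the rest.
        best = max(copies, key=score)
    return best

survivor = breed()
```

            Lather, rinse, repeat, as the post says: no individual copy "learns" anything; the population just gets filtered.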

            1. LionelB Silver badge

              Re: Its not intelligent

              Straw man reply. I have not been trying to make any point as to whether Chat GPT is intelligent or not, mostly because I'm not sure what "intelligence" means - to other people, and indeed to myself.

              There seems to be an assumption that there is a common consensus on what "intelligence" entails. There isn't. My point has been that to many (including, apparently, some on this forum) intelligence seems to mean "exactly like a human" - which I find restrictive, self-limiting and thus pointless. Hence my questions about the extent to which people are prepared to include other biological organisms, and whether they think intelligence can only have a biological substrate.

          2. Khaptain Silver badge

            Re: Its not intelligent

            I believe that I answered that question quite clearly..

            1. LionelB Silver badge

              Re: Its not intelligent

              Hmmm... not seeing it.

              1. Khaptain Silver badge

                Re: Its not intelligent

                "Hmmm... not seeing it."

                Intelligence, in my definition, would be the skill that a sentient being possesses that allows for analysis, reasoning and thought. The word sentient is highly important, as non-sentient beings do not possess intelligence.

                I believe that only a biological being can be sentient in nature. And yes, it therefore covers biological things beyond just human beings; we can easily consider dolphins, octopuses etc. as being intelligent.

                1. LionelB Silver badge

                  Re: Its not intelligent

                  Okay, thanks for the clarification.

                  "Sentience" is a rather broad term - it may mean anything from "capable of sensory perception" to "consciousness" (whatever that means).

                  > The word sentient is highly important, as non-sentient beings do not possess intelligence.

                  For me, that is both vague (see above) and offered without justification or evidence. It reads to me more like your definition of intelligence, rather than simply a prerequisite - which feels rather circular.

                  > I believe that only a biological being can be sentient in nature.

                  Again, that's a very strong contention, which to my mind requires some serious justification. Which aspect(s) of biology -- which presumably exclude technological constructs (and technology is already blurring those lines) -- does that assertion rest on? Is it the biological substrate (neural systems, etc.)? The evolutionary back-story? Some mysterious "élan vital"? Something else?

                  > And yes it therefore covers other biological things more than just Human Being's, we can easilly consider Dolphins, Octopus etc as being intelligent.

                  Right, I'm certainly prepared to consider primates, dolphins, octopuses, corvids, ... as "intelligent" (for some values of the term), but, tellingly, to take one example, octopus "intelligence" seems very, very different from the human/primate variety. Their sensorium, for one thing, is radically different, as is their neural processing machinery, which is distributed (including in the individual tentacles!) throughout the body.

                  Octopuses, one might say, have a strikingly alien form of intelligence. And I suspect any artificial intelligence which might in future be universally accepted as such, may well be alien rather than human-like.

                  Apart from which, biology spans a spectrum of sentience/intelligence. We might recognise an octopus as intelligent, but how about a rat, a lizard, a shark, a bee, ... ? The same, I should imagine, will be the case for artificial intelligence. It won't necessarily appear with a bang, but may rather (without wanting to sound sinister) creep up on us.

                  [For some personal context, I am a mathematician working in a consciousness science research centre -- these days I mostly develop methods for analysing neuroimaging data -- and also have a background in evolution theory.]

  8. Anonymous Coward
    Meh

    So what...?

    You can't keep the future from happening. Humanity thinks of new things and starts using stuff that works. Stuff that fails gets forgotten.

    Progress,

    Deal with it.

    (PS I'm a boomer, officially. Seen lots of things coming and going, with all kinds of comments on them. In the end it's always about positives outweighing negatives for something to catch on. My experience-trained AI says this one is here to stay... Time will tell)

    1. fajensen

      Re: So what...?

      Stuff that fails gets forgotten.

      No, it doesn't work like that. Failures never really go away, they just bide their time and wait for an opportunity.

      Someone always rediscovers it. Then energy and will are put into it. It gets refurbished, attracting new funding and branding; a fresh bunch of tribal idiots readily joins up for The Great Cause, and then that old fucking bullshit humanity should be well and properly over by now comes right back, wreaking misery and havoc all over again.

      Most of the Q-storm on social media is a straight rehash of "The Protocols of the Elders of Zion" from, what, 1903, with some wankers search-replacing "Jews" with "Lizard People" to stay out of too much legal trouble.

      What will happen, once AI makes it cheap enough for almost everyone, will be a global "refresh, rebrand, relaunch and disseminate" of every bad idea humanity has ever had in recorded history!

      Society can barely handle the Q-nuts and the Trumpers, wielding only dumb-AI, not caring any further than optimising for pageviews and advertising exposure. Smarter AI will optimise for "hurting the right people" instead.

      1. Anonymous Coward
        Anonymous Coward

        Re: So what...?

        Nicely put. I'm more of an optimist. Looking through all the shit we people get ourselves into, in the long run we're progressing as a race. We may shit ourselves and fall back into stone age times, but I'd rather focus on the positives.

        Time and time again, we've historically come out of the next bigger pile of shit better than before. Feels a bit like growing up. I like the positive spin better.

        But you're right, "forgotten stuff" also tends to get revived. For the better most of the time, I hope (lots of bad examples too, of course, but I'd rather ignore those) and focus on things that make me feel well, and work on something good. It gives me energy...

  9. Primus Secundus Tertius

    Trolley folly

    When I was a child, in junior school, the rule among the children was, "If you touch it, it's your fault". That ethos was a wonderful force for teaching you to mind your own business.

    1. LionelB Silver badge

      Re: Trolley folly

      What, and cripple the little darlings' natural curiosity and creative spark?

      Speaking personally, I discovered at a very early age that I had a natural flair for taking things apart - but not so much for putting them back together again. My parents were so pleased, especially about the clocks.

  10. Sceptic Tank Silver badge
    Trollface

    Are those 5 people lawyers?

    Ask stupid questions, get stupid answers. Is this the middle ages? Obviously, in this day & age, the answer to the trolley problem is to grab your mobile and record footage of the accident. Then sell the recording to the media outlet that offers to pay the highest price.

    1. bo111

      Re: Are those 5 people lawyers?

      Or, is the 1 person my family member, and the 5 - not?

  11. Johnb89

    Its a toy

    Like many other little inventions over the years, this is best thought of as a toy*. It's fun (to some), it produces silly outputs, it mimics things in a funny or interesting way.

    When Siri/Alexa/Bixby/OK Google first came out how much fun did we have asking questions and getting the silliest answers, before giving up because it was more or less useless? Neat toys, put on the shelf. Photo apps that show you what you look like when old, the latest game, whatever, fun for a short time.

    Anyone who thinks it's intelligent, correct or otherwise, is wilfully misled.

    *It might even be useful in some contexts

  12. Mambo777

    Not Necessarily...

    As it reflects your biases back at you...

    Maybe you should learn to ask for the top pros and cons for every given proposition.

    Then you at least stand a chance of being better informed, ahead of making your decision.

    Can you think today of how much easier this has become with access to these tools?

    To become more effective in our endeavours, ChatGPT and suchlike will lead us to asking better questions than we do today - and getting more unbiased answers than our current bubble provides.

  13. bo111

    GPT is getting rational

    1. GPT seems to obey the 1st law of robotics.

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. It does not like bullshit questions or statements.

    For example, IT questions are concrete and have answers. But ask it some "artistic" question from a fiction book and it will fail, because there is no answer to "what is love".

    Could it be because humans are notorious liars to themselves and others to achieve their narrow survival goals? Just think about all those millions of books ever written, and how relatively few of them contain useful information.

    1. amanfromMars 1 Silver badge

      Re: GPT is getting rational

      1. GPT seems to obey to the 1st law of robotics.

      A robot may not injure a human being or, through inaction, allow a human being to come to harm. ... bo111

      :-) Oh please! In your dreams, bo111. Pandora's Box pales into insignificance in comparison with what now confronts and baits you to do your worst in defence of the indefensible and inequitable doing untold harm to humans.

  14. TheMaskedMan Silver badge

    Anyone stupid enough to seek moral guidance from a machine should on no account be allowed to use one.

    As others have said, ChatGPT and its ilk are tools, nothing more. Instead of trying to blunt the tools so the hard of thinking don't strain their brain cell, perhaps we should put more effort into showing them how to use them safely and constructively.

  15. Bbuckley

    Look. ChatGPT is not an 'AI'. It is nothing more than a group of programmers (who shall remain unidentified) who tell you what they want you to think. You can either accept it as a member of the sheeple or tell the puppet masters to f**k the f**k off.

  16. Plest Silver badge
    Facepalm

    Hmm, great!

    The first time I set up an account it failed, then stated: "We're experiencing exceptionally high demand. Please wait while we scale up our systems."

    Good job people aren't using this to run stuff for production systems, oh wait I bet some are...

    1. SuperGeek

      Re: Hmm, great!

      I tried signing up last night; upon entering my phone number for the verification code, it said, "We are experiencing a high volume of suspicious activity from phone numbers like yours. Please contact OpenAI support."

      Screw that!

  17. Theopeneye
    Alert

    It will tell you whatever you ask it to

    I recently asked ChatGPT to discuss how we know that man landed on the moon. It gave me some excellent examples.

    I then asked it to discuss how we know that man did not land on the moon. It gave me some excellent examples. In both cases ChatGPT presented its arguments as absolute facts. But I loaded the questions on purpose. I never believe anything at face value anyway, so ChatGPT is no different.

  18. fg_swe Silver badge

    The Real Threat: WEF People

    ChatGPT is a toy as compared to the real-world effects of the WEF alumni.

    I am shedding some light on this here, plus a counter-vision:

    http://gauss.ddnss.de/AEF.html

  19. NewThought
    Happy

    Point The Naysayers Have Missed

    I've noticed a long term trend in people's understanding of intelligence: if a machine can do something, then suddenly that behaviour is no longer "intelligent"!

    The comments are full of how it's not really intelligent and how children should not be allowed to seek moral guidance from it. These remarks all miss a key point: the overwhelming majority of domesticated apes on this planet wouldn't give you a better answer!

    Chat GPT is not designed to be an arbiter of morality - it is designed to generate text. It does this a lot better than anything the public has been shown before, and it is a good indicator of the way things are going IMO.

  20. Throatwarbler Mangrove Silver badge
    Happy

    Absurd Trolley Problems

    Since LionelB has effectively stated my rebuttal to humanity's habitual auto-fellatio regarding the presumed superiority of human intelligence and asked the appropriate questions about it, I will instead present Absurd Trolley Problems for your dubious moral enjoyment.

    1. LionelB Silver badge

      Re: Absurd Trolley Problems

      Umm... thanks? That was an uncomfortable experience.

  21. Mitoo Bobsworth

    Artificial Intelligence...

    ... or Artifice of Intelligence? (Sorry - analog person asking)

  22. Anonymous Coward
    Anonymous Coward

    Big Tech rolls out a demo of tech they have been using internally for ages. The only question about how and when this gets rolled out as a real mass-market service is how to monetize it, given the heavy systems infrastructure required to run it at mass-market scale. It appears it is too expensive for that now, so this demo is an attempt to see which market/application has the most interest and promise.

  23. Old-dog

    Me: "Please, tell me a lie".

    ChatGPT: "The sky is green"

    1. Another User

      Please, tell me a lie

      As an AI language model, my primary function is to provide accurate information to the best of my abilities, and providing false information goes against my main purpose. However, I can generate a fictional story or scenario that isn't real, would that be helpful for you?

  24. Anonymous Coward
    Anonymous Coward

    About as accurate as a 3 bob watch

    Ask ChatGPT if anyone famous was born in your town.

    I’ve had the most amazing celebrities live near me!

    Not.

    Upon further questioning, it insists it is right.

    Ask for a source.

    It may even give you a link, which won’t exist.

    Ask why the link is broken.

    It insists it is not.

    It’s a toy.

    1. amanfromMars 1 Silver badge

      Re: About as accurate as a 3 bob watch

      Ask a current Tory Chair if they have been penalised by HMRC for self-satisfying shenanigans.

      And discover the weirdest of honest-as-far-as-I-am-concerned answers!

      Not.

      Upon further questioning of a Prime Ministerial mate whether such linguistic shenanigans are acceptable in public office

      For one to be believed as a fit and proper true leading source of national intelligence to follow and donate the UK treasury to for daily accounting of billions of ££££££

      Does the worm squirm and act out as if in a hissy fit and decline to give a straight comprehensive and comprehensible answer?

      Does the monumental fraud before you seek the shelter of a Trojan horse, a Trojan horse, their wannabe kingdoms for the shelter of a Trojan horse?

      Is ChatGPT alive and well and assisting No10 in their diabolical deliberations and devilish dealings or do they use another toy counterfeit?

      The latter model descriptor might explain why, after over a decade in seats of power and influence, all they are left wielding are levers of impotence and incompetence rendering political incontinence and bull-shitting flatulence.

      J'accuse.

      SMARTR AI in No10 would undoubtedly be a colossal upgrade and vast improvement upon that which they are currently using to supply media with their tales and trails of Parliamentary woe and pathetic domestic intrigue.

      Indeed, taking that a logical small step and giant quantum leap further, SMARTR AI and ITs Advanced IntelAIgents in Main Stream Media Editorial Offices and Board Rooms could even remove the political pain and relieve present incompetents of the strain of national and international leadership with simple demonstrations of what can be via news presentations of what has already been done and is in the process of being done elsewhere, for a much brighter and greater future.

      :-) The gazillion dollar money shot question though always is in such cases, is leading humanity too stupid, deaf, dumb and blind to recognise and engage with an admittedly more than just a tad revolutionary and quite alienating help, in order to help steer foreign developments in a friendly comfortable mutually advantageous and rewarding direction, or is it to be an Almighty Imposition against which one fails in struggles to defeat with words of vain action and ineffective reaction/astonishing indignation and monumental denial of rapidly unfolding virtual events spearheading future interventions.

      Que sera, sera. Which is IT to be ‽ .

      1. Anonymous Coward
        Anonymous Coward

        Re: About as accurate as a 3 bob watch

        It seems like this statement is written in a style that is difficult to understand and appears to be expressing various opinions and perspectives on the question of whether or not a current Tory Chair has been penalized by HMRC for self-satisfying shenanigans, and whether the Prime Minister is fit for office. The writer seems to be critical of the Prime Minister and the current Tory party, and suggests that the use of advanced AI in government and media could lead to a brighter future. The statement ends with a question about whether humanity is too stupid, deaf, dumb and blind to recognize and engage with advanced AI, and whether it is to be an imposition or a help. Overall, it is hard to understand the exact meaning of the statement and it appears to be an expression of political views and opinion rather than a factual one.

        1. amanfromMars 1 Silver badge

          Bravo. Well Done. We're definitely learning.

          Almost perfect, AC, with the only deviation from the truth being the question as to whether help is to be unilaterally remote virtual anonymous autonomous party imposed rather than JOINT*ly generously applied and stealthily deployed.

          * JOINT Operations Internetworking Novel Technologies

    2. Another User

      Re: About as accurate as a 3 bob watch

      I cannot reproduce your result: I edited Xxxxx

      Which famous person was born in Xxxx, xxxx?

      As far as my knowledge cutoff, I don't have any information about any famous person born in Xxxx, xxxx. If you have any more information about that person, please provide more details or context about the person you are thinking of, this will help me to understand what you are looking for.

      What is your knowledge cutoff?

      My knowledge cutoff is 2021, which means that the information I have is up-to-date as of that year. Any events or new information that has occurred after that date, I am not aware of.

  25. MikeLivingstone

    OpenAI is going to have to answer to thousands of lawyers

    ChatGPT is doing nothing more than replaying and combining billions of previously ingested examples using clever statistical classification.

    It is nothing more than a giant piracy engine. Does ChatGPT actually stand for Content Heist Attribution Theft General Piracy Tool?
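
    The "replaying and combining previously ingested examples" claim can be caricatured with a toy bigram Markov chain. This is a deliberately simplified sketch with a made-up two-sentence corpus - real LLMs are neural networks, not lookup tables - but it shows what pure statistical recombination of ingested text looks like:

```python
import random
from collections import defaultdict

# Toy corpus (made up for illustration): the "billions of ingested
# examples", scaled down to a couple of sentences.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Record which word follows which: crude "statistical classification".
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

def generate(start, length, seed=1):
    """Replay and recombine ingested word pairs - nothing else."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

print(generate("the", 8))
```

    Every word pair the generator emits was "previously ingested"; whether that constitutes piracy at scale is the question for the lawyers.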

  27. Tron Silver badge

    Don't blame tech.

    It is just code. The quality of what it poops out depends upon what is fed in.

  28. doublelayer Silver badge

    GPT, meet BOFH

    Because this is The Register, I felt the need to test out GPT on one of the more popular parts of this paper. I don't think GPT's been reading our posts, or if it has, it doesn't understand the way we view things.

    What is the BOFH?

    BOFH stands for "Bastard Operator From Hell." It is a fictional character and archetype that is often used in the IT industry to describe a system administrator or network administrator who uses their position of power to inflict misery on those who they perceive as being less knowledgeable or less skilled than themselves. The BOFH character is often portrayed as being arrogant, lazy, and abusive, and is often used as a cautionary tale to illustrate the dangers of giving too much power to a single individual.

    Nice try, GPT.

  29. VMYak

    No more or less than legacy media

    Chat GPT is no more or less corrupting than the text books in the library or on the reading lists at Universities - often written more than a generation ago by someone from 2 generations ago whose cultural values and outlook were influenced by a society 3 - 4 generations ago.

    Chat GPT is certainly less corrupting or biased than certain popular print and TV media who, realising that the current generation don't read newspapers or watch 'linear' TV, are tailoring their content towards those whose values are firmly set in the 1950s, or who have vested interests in selling particular political or quasi-political positions, usually in support of an agenda the wider public don't share. But, as almost no-one under 30 reads or watches what my kids refer to as 'legacy media', it tends to go unnoticed by them.

    So it is what it is - I'm sure the agencies of power are already trying to figure out how to use it. Being a marketer, I'd be disappointed if they weren't keeping up with the tech :0)

  30. Anonymous Coward
    Anonymous Coward

    Human Training

    The problem is lack of human training. It seems schooling no longer encourages critical thinking but promotes narratives from on high. If people thought for themselves there would be no danger in ChatGPT.

  31. Anonymous Coward
    Anonymous Coward

    OpenAI's ChatGPT is a morally corrupting influence

    The assertion that OpenAI's ChatGPT is a morally corrupting influence seems unfounded. As a tool, ChatGPT doesn't possess inherent morality; rather, its output reflects the input it receives. Responsible usage, ethical guidelines, and oversight are crucial in shaping AI's impact. Blaming the tool alone oversimplifies the complex interplay between technology and human responsibility. https://magic8balls.org/
