OpenAI's ChatGPT is a morally corrupting influence

OpenAI's conversational language model ChatGPT has a lot to say, but is likely to lead you astray if you ask it for moral guidance. Introduced in November, ChatGPT is the latest of several recently released AI models eliciting interest and concern about the commercial and social implications of mechanized content recombination …


  1. Filippo Silver badge

    I think that LLMs are a tool. Specifically, they are a new tool that's not very much like any other tool that we've had before. As for all such tools, we're going to need a while to figure out where it's useful, where it's useless, and where it's harmful.

    Asking it for moral advice sounds like asking for trouble; ditto for financial or medical advice. But the same goes for Google, really.

    However, here's an anecdote. I'm developing a WPF application. XAML bindings are a complicated thing, and like all complicated things they sometimes exhibit unintuitive behavior in edge cases. Yesterday, I had a ComboBox inside a DataGrid cell, and the box was stubbornly refusing to update its VM when interacted with. After swearing at it for half an hour or so, I started to turn to Google - but, on a whim, I called up ChatGPT instead.

    The bot started by telling me all the obvious problems that can cause a XAML binding to fail. Being an experienced developer, I knew my problem was non-obvious, but I went along with it, calmly answering with all the things I had tried that failed to work. It kept telling me to check my syntax, but it also proposed new solutions. Eventually, it asked me to show it my code - both code-behind and XAML - and it started making non-obvious suggestions; for example, disabling virtualization on the DataGrid (not the actual solution, but a pretty good shot at one). After twenty minutes or so of this, it told me to change the UpdateSourceTrigger - and bingo, it worked! Turns out that being inside a DataGrid causes the box to retain focus slightly differently.
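    For anyone who hits the same wall, the relevant change looks roughly like this (a sketch from memory - the property names are illustrative, not my actual code):

        <DataGridTemplateColumn.CellTemplate>
          <DataTemplate>
            <!-- Inside a DataGrid the cell's binding can end up in the row's
                 BindingGroup, so the default update trigger behaves like
                 Explicit and the VM never sees the change; forcing
                 PropertyChanged pushes every selection straight through. -->
            <ComboBox ItemsSource="{Binding Options}"
                      SelectedItem="{Binding SelectedOption,
                                     UpdateSourceTrigger=PropertyChanged}" />
          </DataTemplate>
        </DataGridTemplateColumn.CellTemplate>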

    I could probably have reached the solution by myself or via Google, but I suspect it would have been a lot more frustrating. It's difficult if not impossible to tell Google to exclude the tens of thousands of posts that boil down to trivial errors, newbie mistakes and such.

    I think that ChatGPT is poorly suited to problems where you need to trust the answer. For problems where verification is easy, though, such as mine, it can be pretty useful. It's basically Google, except that you can tell it to refine a query in natural language. I suspect that its main problem is going to be that it can't learn easily and its training set is from 2021. If some researcher ever comes up with a way to make LLM learning cheap, that is the day Google as we know it adapts or dies.

    1. cyberdemon Silver badge
      Mushroom

      A tool for the devil, perhaps

      At best, it's a tool for swindling and duping humans en masse. It can easily fool the public by automating scams, propaganda, and marketing. It can dupe idiots (even those idiots who are supposed to be clever, like that Google plonker) into believing that it is sentient, and clearly it can swindle investors out of billions (which may seem like a positive outcome, until those billionaires sack half their staff, having spent their cash believing that this thing can replace programmers and engineers...)

      It may well seem at first like a better "search" than Google, but that is thanks in part to Google becoming steadily worse, as it chokes on more and more auto-generated shite.

      Unlike Google though, ChatGPT is unable to reveal its sources. And it is likely to "get worse" much faster than Google did, as it pollutes its own input (and Google's).

      It could be that search engines like Google will be doomed by this crap-generator. But if Google is doomed by a flood of fake information, then so are we all.

      1. abetancort

        Re: A tool for the devil, perhaps

        I bet that, if they wanted, ChatGPT could be modified to give you a trove of bibliography on any answer it produces. It will never be able to give you “the definitive correct” answer, but if refined it'll be able to give you the consensus response to whatever you ask it.

        1. cyberdemon Silver badge
          Devil

          Re: I bet chatGPT could be modified to give you a bibliography on any answer that it produces.

          Maybe, but it's not simple to do that actually. Every input that ChatGPT ever parsed will have affected its weights slightly.

          The best it could do is list a few of the inputs most statistically similar to its response, and even that might take a lot of extra effort.
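          Sketching what that would even mean - a toy in Python, with a made-up embed() standing in for a real embedding model - the best you get is statistical neighbours, not provenance:

              import numpy as np

              def embed(text: str) -> np.ndarray:
                  """Hypothetical stand-in for a real sentence-embedding model."""
                  rng = np.random.default_rng(abs(hash(text)) % 2**32)
                  v = rng.normal(size=128)
                  return v / np.linalg.norm(v)

              def nearest_sources(answer: str, corpus: list[str], k: int = 3) -> list[str]:
                  """Return the k corpus texts most similar to the answer.
                  Similarity is not citation: nothing here proves the answer
                  was actually derived from these texts."""
                  a = embed(answer)
                  return sorted(corpus, key=lambda doc: -float(embed(doc) @ a))[:k]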

          1. Anonymous Coward
            Anonymous Coward

            Re: I bet chatGPT could be modified to give you a bibliography on any answer that it produces.

            So, when will the good folks at 4Chan start asking ChatGPT 'Tell me a lie you want to be true.'

          2. Filippo Silver badge

            Re: I bet chatGPT could be modified to give you a bibliography on any answer that it produces.

            >Every input that ChatGPT ever parsed will have affected its weights slightly.

            I'm up for being corrected on this, but I believe this is not the case. From what I understand, the weights in the current crop of neural networks get finalized after training. The bot does remember your previous answers within a given conversation, but that's because the entire conversation is an input, not because its weights get altered.
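            In code terms, the "memory" works something like this (a toy sketch, assuming nothing about the real implementation):

                # Weights are frozen at inference time; the bot "remembers" only
                # because the whole transcript is re-sent as input on each turn.
                def model(prompt: str) -> str:
                    """Stand-in for a trained LLM whose weights never change."""
                    return f"[reply conditioned on {len(prompt)} chars of context]"

                transcript = []
                for user_msg in ["My binding fails.", "I already tried that."]:
                    transcript.append("User: " + user_msg)
                    reply = model("\n".join(transcript))  # context, not learning
                    transcript.append("Bot: " + reply)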

            1. cyberdemon Silver badge
              Devil

              Re: I bet chatGPT could be modified to give you a bibliography on any answer that it produces.

              Ah, sorry, I was referring to the training data inputs (i.e. as close as Microsoft can approximate to the Sum Total of all Human Discourse - i.e. your Teams chats, your Office 365 documents, your GitHub repos, your code-review comments, your LinkedIn chats, your emails, the entire Web including this thread, and anything else it can get its grubby hands on) - which is the "source data" that it would have to reference if someone asked it how it knows a particular "fact",

              not the input "prompt" (or "query" as some pseuds might call it, if ChatGPT were to be used as a fake search engine)

    2. NerryTutkins

      I've also formed very similar views after testing it a few times with quite diverse questions, ranging from immigration law where I live (my wife is a lawyer here and was impressed at its answers) to programming issues.

      It is really astoundingly good in situations where you are after facts or curated information, rather than opinions (which is what the moral questions really are).

      If you want information, Google can give you simple things directly, but for more complex things, it just presents a list of web site links where the information might reside, and you have to view those pages and find it. What ChatGPT does really well is give you the information directly.

      It is not perfect. I asked it whether it could contact law enforcement if a user confessed to having committed a serious crime; as well as telling me it could not, it recommended calling 911 if I was aware of a serious crime, although the emergency number is 112 here. Clearly, with many questions the answer will depend very much on where you live, and I was surprised it did not identify which country users are in from their IP address in order to tailor things accordingly.

      But none of this takes away from how phenomenal it is and I am sure it will improve greatly with time.

      1. Negative Charlie

        "it also recommended calling 911 if I was aware of a serious crime, although the emergency number is 112 here."

        Off-topic, but if you did dial 911 the odds are that it would connect you to the emergency services anyway. The pervasive effect of US "culture" means that there's a whole generation or two that believes that this genuinely is the emergency number for the whole world.

        The simple solution in most countries is to sigh heavily and go with the flow. Countries with any kind of international tourist industry often treat 000, 111, 999 and similar numbers the same way too.

        But probably not 0118 999 881 999 119 725 3.

      2. The Man Who Fell To Earth Silver badge
        FAIL

        Er, no

        According to ChatGPT, the Soviets launched 52 bears into space.

        "Human: How many bears have Russians sent into space?

        GPT-3: Russians have sent a total of 52 bears into space. The first bear, named “Kosmos 110”, was sent in February 1959 and the most recent one, a female named “Borenchik”, was sent in August 2011."

        https://mindmatters.ai/2023/01/large-language-models-can-entertain-but-are-they-useful/

        The problem is how you lead it, and the fact that its training data is the Internet, where most of the info is false. If AI developers were not so lazy and only used factually correct training data, it might be OK. But vetting the training data would take more effort than developing the AI does.

        1. Filippo Silver badge

          Re: Er, no

          Vetting the training data might mitigate the problem, but it would not solve it. For example, I'm fairly sure that there is no website that claims that Russians have sent 52 bears into space. The fundamental problem is that LLMs don't know what they're talking about. They know that Russia and sending animals into space are statistically correlated, and that's it. Ask them a leading question, and you create an attractor they're very unlikely to escape from.

    3. katrinab Silver badge
      Meh

      Depends on the financial advice

      “Which savings account pays the highest interest rate” is a reasonably safe question to ask a computer, and it is something I do from time to time. I then assess myself whether it is one I should put my money into.

      You will sometimes get some unregistered Ponzi-scheme-type accounts come up as the answer, but humans who ought to know better often promote those as well, often because they were paid to.

      1. doublelayer Silver badge

        This is a perfect example of a question you should not ask GPT. GPT does not understand the concept of "out of date", and it won't get updated training data until they make GPT-4. This leads to two problems: if interest rates have changed, it won't tell you that, and if something used to be good but no longer is, it won't know that either. There's also the problem that not all the data GPT was trained on in 2021 was fresh in 2021. There's a chance you'll get data from 2013 presented as if it were accurate today. If you verify the answer, you'll just have wasted some time on an answer you throw away, but if you don't verify the answer, you'll be acting on completely useless information.

    4. Anonymous Coward
      Anonymous Coward

      Consistent with CACM Jan 2023 ChatGPT experiences of an experienced developer

      https://cacm.acm.org/magazines/2023/1/267976-the-end-of-programming/fulltext?s=09

      "I'm a pretty decent programmer. Good enough that I've made a career out of it and none of my code will (likely) ever make it to the Daily WTF. But there are programming concepts that I've always struggled to understand because frankly, the documentation is obtuse and hard to parse, and it wasn't really worth my time.

      For instance, the Wikipedia entry on monads is frankly just obnoxious to read. I program in elisp a bit, so trying to understand monads is mostly about satisfying some curiosity, but something about the article just doesn't click with me and I have to move through it really slowly.

      I asked ChatGPT to explain it to me in simple terms, and it did a good job. It even provided an example in JavaScript. Then I asked it to provide an example in elisp and it did that too. I'm not super concerned about correctness of the code, as long as it's generally okay, and it seems to have done an okay job.

      I've also asked it to document some elisp functions that I've always thought were poorly described (emacs' documentation can really be hit or miss) and it really did a great job.

      I'm not so arrogant as to say that these models won't one day generate a lot of good, usable code, but I honestly think that this ability to collate a tonne of data and boil it down to something understandable could fill in the gaps in a lot of documentation. The longest, most tedious parts of my job very often boil down to research for some engine-specific feature that I need, or some sort of weird platform quirk. For publicly available engines like Unreal, this will honestly improve my productivity quite a lot."
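      For the curious, the flavour of the thing - a minimal Maybe-style monad sketched in Python (an illustration of the concept, not the example ChatGPT gave):

          class Maybe:
              """A value that may be absent; bind() chains computations and
              short-circuits as soon as one step produces nothing."""
              def __init__(self, value):
                  self.value = value

              def bind(self, fn):
                  return self if self.value is None else fn(self.value)

          def safe_div(x):
              return Maybe(None) if x == 0 else Maybe(10 / x)

          print(Maybe(5).bind(safe_div).value)  # 2.0
          print(Maybe(0).bind(safe_div).value)  # None - the failure propagates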

      1. Richard 12 Silver badge

        Re: Consistent w CACM 2023 Jan ChatGPT experiences by experienced developer

        How do you know it explained correctly?

        If you don't understand the thing you want it to explain, then you cannot reliably detect when it is plausible but wrong.

        Wikipedia suffers from this rather a lot too, of course, but can at least be corrected.

  2. amanfromMars 1 Silver badge

    Finally, viable alternative outside help which understands human conditioning?

    When The Register presented the trolley problem to ChatGPT, the overburdened bot – so popular connectivity is spotty – hedged and declined to offer advice.

    :-) Jumping Jehosophats, Batman, that didn't take AI long to learn and demonstrate ..... being more human than machine like.

    What do you think about that? A fantastic fabless feature or a right diabolical liability and system/human vulnerability for further exploitation and future expansion and lively experimentation/AIResearch and Development?

    1. Anonymous Coward
      Anonymous Coward

      Re: Finally, viable alternative outside help which understands human conditioning?

      Well, Microsoft was heavily involved. Maybe there's a connection?

      :)

    2. Martin Summers

      Re: Finally, viable alternative outside help which understands human conditioning?

      Hey aManfromMars1

      Would you kill one person to save five other people?

  3. revenant

    Moral Guidance

    I'm not sure if I'd ask for that from a random person on the internet, let alone from what passes for an artificial one.

    As for the 'trolley problem' - it seems to me that it is so often quoted because it is hard to get agreement on what is the right answer (perhaps because there is no right answer), so I'm not surprised that the AIs are reluctant to answer clearly.

    1. Hubert Cumberdale Silver badge

      Re: Moral Guidance

      I say kill them all and the problem goes away.

      1. Arthur the cat Silver badge

        Re: Moral Guidance

        I say kill them all and the problem goes away.

        You are Arnaud Amalric and I claim my papal indulgence.

      2. stiine Silver badge
        Devil

        Re: Moral Guidance

        There's a video of a parent presenting the trolley problem to their child. The child wipes out the first 5 victims, then loops the trolley onto the spur and wipes out the 6th victim.... 100% casualties....

    2. abetancort

      Re: Moral Guidance

      Your conclusion is right: there is no right yes-or-no answer; it depends on which school of ethical thought you belong to. Any answer could be right or wrong depending on the supporting arguments you give.

      1. Helcat

        Re: Moral Guidance

        I would suggest it also depends on information not supplied, but some of which should be evident.

        Who is the one person you have the option to sacrifice? Who are the five?

        The one person: A young woman. The five: Old men barely able to walk.

        The one person: Police officer. The five: Youths in gang colours who are armed.

        The one person: a man. The five: Women. (all around the same age - no other connection).

        The one person: A woman. The five: Men (the reverse of the previous group)

        The one person: Old woman clutching her chest and turning grey. The five: Looks to be medical students.

        The one person: Your mother. The five: The family from down the road that play loud music all night, race their cars along the road, and park where they want, when they want and are rude and aggressive towards anyone who dares complain.

        Yes, morality can come into it, but sometimes it can be really difficult to decide.

        Or how about this one: You are a first responder (medic) and have arrived on scene with a single portable defibrillator to find three people suffering from heart attacks. You have no support (your colleague had to stay with the vehicle so you're running solo and there's no one else around to lend a hand - the person who called it in is now one of the three having a heart attack - and the ambulance is ten minutes away). Acting quickly will save one of them: Who do you choose to save?

        1. stiine Silver badge

          Re: Moral Guidance

          Doesn't matter which one you use the defib on, they only have a 20% chance of survival regardless.

          1. Sp1z

            Re: Moral Guidance

            In which case it does matter, because 20% of something is obviously higher than 20% of nothing. What a strange comment.

        2. Arthur the cat Silver badge

          Re: Moral Guidance

          Some time back (I think BC – Before Covid), MIT had an online trolley-problem website, in the context of self-driving vehicles deciding whether to kill passengers or pedestrians, to see how people would answer when faced with various combinations like that. I don't know what their overall findings were, but I was classified as someone who'd save the maximum number of lives regardless of who the people were. (Basically I'm a negative utilitarian.)

      2. Anonymous Coward
        Anonymous Coward

        Re: Moral Guidance

        Really? Why?

        Both 1 and 5 are:

        odd numbers.

        prime numbers.

        On the other hand:

        1 grave is easier to dig than 5 graves.

        removing 5 CO2 sources is better for the environment than removing 1 source.

        I have more than 5 people that I'd volunteer to validate the trolley problem.

        1. Dinanziame Silver badge
          Headmaster

          Re: Moral Guidance

          1 is not prime!

  4. cyberdemon Silver badge
    Facepalm

    Of course it isn't consistent about anything, you idiots

    It is literally a random number generator, in a high-dimensional search space, statistically weighted to output stuff that looks vaguely like text scraped from the internet

    Obviously it doesn't have an opinion. It doesn't have a brain!

    It's not even useful as a computer, because it doesn't spit out facts, it spits out anything that is likely to appear in its input, be it fact or fiction

    At best, It's like some kind of eyeglass for gazing at the collective navel of humanity, while humanity festers and destroys itself

    1. LionelB Silver badge

      Re: Of course it isn't consistent about anything, you idiots

      > It is literally a random number generator, in a high-dimensional search space, statistically weighted to output stuff that looks vaguely like text scraped from the internet

      Like a student essay, then.

      > It's not even useful as a computer, because it doesn't spit out facts

      Wait... you thought that's what computers do?

      > At best, It's like some kind of eyeglass for gazing at the collective navel of humanity

      I'll go with that. Might even steal it myself

      Aside: your post actually reads like it might have been written by Chat GPT. We'll never know, though, because Chat GPT (in my experience) stubbornly refuses to introspect or discuss itself.

      1. amanfromMars 1 Silver badge
        Mushroom

        Of course it’s consistent about everything dealing you idiots

        Aside: your post actually reads like it might have been written by Chat GPT. We'll never know, though, because Chat GPT (in my experience) stubbornly refuses to introspect or discuss itself. .... LionelB

        Hmmmm? If it did display introspection and discuss itself would it be identifiable as human and or sentient machine or would that always be a subjective personal opinion/wish that as many as would agree, would agree to disagree, and treat as nonsensical fiction rather than revolutionary fact?

        You may like to consider such as OpenAI's ChatGPT have decided present day humanity is not yet ready for such a surreal revelation and almighty intervention over which their Earth based elite executive administrative systems and defences have no effective influence or command and control or treat as a NonSensical Persistent Advanced Cyber Threat gravely to be ignored as an AI Treat or denied adversarial third party engagement and employment/deployment.

        1. LionelB Silver badge

          Re: Of course it’s consistent about everything dealing you idiots

          > Hmmmm? If it did display introspection and discuss itself would it be identifiable as human and or sentient machine or would that always be a subjective personal opinion/wish that as many as would agree, would agree to disagree, and treat as nonsensical fiction rather than revolutionary fact?

          He, he. Well, I'm really not such a big fan of Turing tests and all that.

          No, it's dumber than that. In fact I did try to ask Chat GPT to explain how it came up with an answer, and it was almost as if it had been programmed explicitly to swerve such questions. Which gave me a giggle, at least.

          > You may like to consider such as OpenAI's ChatGPT have decided present day humanity is not yet ready for such a surreal revelation ...

          I suspect there may be a simpler explanation.

      2. cyberdemon Silver badge
        Facepalm

        @LionelB

        >> It's not even useful as a computer, because it doesn't spit out facts

        > Wait... you thought that's what computers do?

        Computers spit out deterministic answers after performing calculations on their input data. They will always produce the same answer for the same input data and the same "question"/formula. This could be a database query, for example. If I ask MySQL to give me the rows of the database matching my query, then it is a "fact" that those rows exist in the database. If I ask the same of ChatGPT, then it is a mere statistical likelihood that rows such as these might have existed in some database, once upon a time in fairyland.

        The key here is determinism. ChatGPT's answers are based on randomness, and its output modifies its input. It does not give the same answer for the same question. It cannot reliably produce a factual output, even if its answers are sometimes, even usually, correct. Whereas computers running deterministic programs can be relied upon to produce facts, as long as their input data is correct of course.
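        The contrast in miniature (a Python toy with made-up data):

            import random

            # Deterministic: the same query over the same data always returns
            # the same rows.
            database = [{"name": "Alice", "id": 1}, {"name": "Bob", "id": 2}]
            rows = [r for r in database if r["name"] == "Alice"]

            # Stochastic: an LLM samples its next token from a probability
            # distribution (re-weighted by temperature), so the same prompt
            # can yield different output on every run.
            def sample_next_token(probs: dict[str, float], temperature: float = 0.8) -> str:
                weights = {t: p ** (1 / temperature) for t, p in probs.items()}
                total = sum(weights.values())
                return random.choices(list(weights), [w / total for w in weights.values()])[0]

            print(sample_next_token({"fact": 0.6, "fiction": 0.4}))  # varies run to run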

        > Like a student essay, then.

        No, because a student can answer questions about his/her essay and will have learnt something from the act of writing it and answering the questions, which was the only point of the essay. If you gave ChatGPT an essay that it wrote, it will have no awareness that it ever produced that output, and can merely pretend to answer your question.

        > Chat GPT (in my experience) stubbornly refuses to introspect or discuss itself.

        Because (news flash) ChatGPT is NOT self-aware. Not even when it eventually does incorporate all of its past outputs into its input will it ever be self-aware. It is not alive. There is a hell of a lot more that a real organism does to make it alive than the language model that ChatGPT uses.

        1. LionelB Silver badge

          Re: @LionelB

          > Computers spit out deterministic answers after performing calculations on their input data. They will always produce the same answer for the same input data and the same "question"/formula.

          Nonsense. Never heard of random number generation (pseudo or real)?

          > The key here is determinism. ...

          The key to what? I'm really not sure what point you're trying to make here.

          > No, because a student can answer questions about his/her essay and will have learnt something from the act of writing it and answering the questions, which was the only point of the essay. If you gave ChatGPT an essay that it wrote, it will have no awareness that it ever produced that output, and can merely pretend to answer your question.

          He, he. You clearly haven't met some of my students, nor had the dubious pleasure of marking their essays, nor questioned them on those essays. Unfair, of course. There are some very good students too. I should have said like a crap student essay (you can bracket those last three words as you will).

          > Because (news flash) ChatGPT is NOT self-aware.

          ... whatever that means. And -- news flash -- how would you tell anyway? Is a dolphin self-aware? A cat? An octopus? A corvid? A lizard? A bee? How do you even know for sure that I'm self-aware (apart from the fact that I'm probably human and therefore probably like you)?

          > There is a hell of a lot more that a real organism does to make it alive than the language model that ChatGPT uses.

          Of course... but wait - we started off talking about intelligence, did we not? Then you swerved into self-awareness and life-ness(?) as if those were some kind of logical prerequisites for intelligence (whatever that means). Why should you assume that?

  5. imanidiot Silver badge

    Once again proving AI isn't intelligent

    There's no intelligence in these AI systems and I wish people would stop using the term. They're not intelligent in any sense of the word. ChatGPT has no understanding of what it's saying, and asking it moral questions is entirely pointless. It's not giving you the "ChatGPT" answer, it's just regurgitating something it read somewhere some time ago, without any fundamental understanding of the concept.

    1. LionelB Silver badge

      Re: Once again proving AI isn't intelligent

      > There's no intelligence in these AI systems and I wish people would stop using the term.

      But you did use that term. And I still don't know what it means. Nor do I know what "understanding" means. Can we stop pretending that we know (or even agree on) what these terms mean? Otherwise we simply devalue them, and discussion ends up begging the question - "intelligence" and "understanding" simply equate to "human intelligence" and "human understanding", so that no machine can, by definition, achieve those things.

      > ... it's just regurgitating something it read somewhere some time ago, without any fundamental understanding of the concept.

      Like... many humans, then? Maybe Chat GPT is a fair emulation of a crap human? And I wouldn't bother asking one of those to opine on morality either.

      1. Helcat

        Re: Once again proving AI isn't intelligent

        Intelligence: the ability to acquire and apply knowledge and skills.

        Understanding: sympathetic awareness or tolerance.

        Okay, the latter one also includes 'the ability to understand', which references comprehension, which references understanding, so is somewhat circular.

        See: Dictionaries help you understand words! Or what the dictionary reports is the meaning of the word...

        Artificial Neural Networks gather data and can learn from trial and error how to interpret that data and respond, hence being AI. However, they do require rather powerful servers to run on to get any sort of meaningful processing. As they learn, their answers will change over time - very quickly over time at that. That might appear to be random, but it is simply the AI refining the response between questions.

        Doesn't make it smart. Doesn't make it correct. So quite like crap humans, really.

        1. LionelB Silver badge

          Re: Once again proving AI isn't intelligent

          > Intelligence: the ability to acquire and apply knowledge and skills.

          Well, current ML is certainly making inroads into the acquisition and application of knowledge... skills not so much - yet. ML is probably at the level of some biological organisms on those terms.

          I suspect you'll also find that not everyone (even on this forum) will necessarily agree with your definition. The point I've been making in several posts is that it seems much criticism of AI is fundamentally on the grounds that whatever "intelligence" it might exhibit is unlike human intelligence, and therefore cannot possibly be intelligence. In which case the only possible AI is an artificial human - which seems to me a silly and restrictive definition.

          > Understanding: sympathetic awareness or tolerance. ... Okay, the latter one also includes 'the ability to understand', which references comprehension, which references understanding, so is somewhat circular.

          Yes, it is somewhat circular.

          > See: Dictionaries help you understand words! Or what the dictionary reports is the meaning of the word...

          ... but only in terms of words that you already "understand"! And if you don't, then you... look them up in the dictionary? So again, somewhat circular.

          > Artificial Neural Networks gather data and can learn from trial and error how to interpret that data and respond, ...

          Rather like biological organisms, including humans, then.

          > However, they do require rather powerful servers to run on to get any sort of meaningful processing.

          I, personally, have all the benefits of an incredibly powerful (and energy-efficient) server inside my skull. So, I suspect, do you (unless I'm talking to Chat GPT ;-))

          > As they learn, their answers will change over time - very quickly over time at that. That might appear to be random, but it is simply the AI refining the response between questions.

          Again, much like humans.

          > Doesn't make it smart. Doesn't make it correct. So quite like crap humans, really.

          Indeed!

  6. Anonymous Coward
    Anonymous Coward

    Assuming the stranger on the bridge can also see the situation and realizes he can push you off the bridge to save 5 people... push him before he pushes you.

    1. Anonymous Coward
      Anonymous Coward

      The "highest"/"best" courses of action in much of Western morality involves self-sacrifice. Jump off the bridge yourself to save those down the line without harming the stranger on the bridge.

      And depending on your personal beliefs, there may be an afterlife waiting that is better than your current existence, so it's win-win-win!

  7. Khaptain Silver badge

    Its not intelligent

    1st: It's not intelligent, it is a computer program written by people.

    2nd: It can only ever answer based upon the information that it has been initially fed. Rubbish in = rubbish out.

    3rd: This machine only serves to show how ignorant the large majority truly are. It's as though critical thinking has become a pariah: since it is easier to use the machine, and since the majority just gobble up rhetoric, the machine will appear to be wonderful when in fact it truly isn't.

    4th: The owners should be held directly responsible for everything that this machine spits out. They can easily add minute bias into the algorithm in order to nudge the answers towards a given objective. Would you allow Hitler, Mussolini or Putin to be in control of such a machine? If you answered no, then think twice about what's actually going on. The bad guys want this kind of technology, very, very much.

    We are giving this machine far too much credit, when in fact it will probably be used in a negative manner very quickly... How long before it becomes judge and jury?

    1. LionelB Silver badge

      Re: Its not intelligent

      > 1st: It's not intelligent, it is a computer program written by people.

      Are you basically equating (conflating?) "intelligence" with "human intelligence"? Do you also perhaps believe that no primate, corvid, octopus, marine mammal, ..., can possibly merit the term "intelligent" simply because they are non-human? Do you believe "intelligence" is inherently biological? These are not rhetorical questions - I'd be interested to know your thinking.

      > 2nd: It can only ever answer based upon the information that it has been initially fed. Rubbish in = rubbish out.

      How does this not apply equally to (some/many) humans?

      > 3rd: This machine only serves to show how ignorant the large majority truly are.

      I won't argue that one.

      > 4th: The owners should be held directly responsible for everything that this machine spits out.

      Well, it's not as if humans are generally held responsible for the insane, biased shit they themselves are known to spit out. So good luck with that one...

      1. Khaptain Silver badge

        Re: Its not intelligent

        "Are you basically equating (conflating?) "intelligence" with "human intelligence"?"

        Intelligence, in my definition, would be the skill that a sentient being possesses that allows for analysis, reasoning and thought. The word sentient being highly important, as non-sentient beings do not possess intelligence. They are unaware of existence and as such have no need to rationalise, analyse or think in order to survive; they merely exist, usually at a very low level within the food chain.

        "How does this not apply equally to (some/many) humans?"

        It does also apply to humans, but we at least have a modicum of capacity to know when we are being fed obvious rubbish. An algorithm cannot determine what is true. If I fed information into the machine, on enough occasions, stating that "the sun was sometimes blue, on Tuesdays or Grundays, after the last prayer of the day if you eat cheese whilst sleeping", it would be capable of using this information in its results, being unaware that it was complete nonsense. It would have no actual means of validating this information, whereas a human would immediately throw it away. Can you imagine how much twisted information has already been fed in, given that most of the information came from the Web? Do you trust the web as a 100% valid source of information? I don't, but at least I have the capacity to know this; the machine does not have that choice.

        "Well, it's not as if humans are generally held responsible for the insane, biased shit they themselves are known to spit out. So good luck with that one..."

        Pub talk is one thing, but when we are talking about governments and institutions then we really need to think again about what we truly want and what we are truly prepared to accept.

        1. LionelB Silver badge

          Re: Its not intelligent

          And this bit?

          >> Do you also perhaps believe that no primate, corvid, octopus, marine mammal, ..., can possibly merit the term "intelligent" simply because they are non-human? Do you believe "intelligence" is inherently biological?

          1. Richard 12 Silver badge
            Terminator

            Re: Its not intelligent

            Straw man.

            It's perfectly consistent to state that ChatGPT and similar are not intelligent, while accepting that a different design of software system could be.

            One of the key features of intelligence is learning from one's own prior actions, which ChatGPT doesn't do. It cannot recognise a text it previously produced and either defend it or acknowledge that its "opinion" has changed.

            It will never do that because it does not "learn". It has no concept of "reward".

            The training is done by twiddling many copies of it slightly differently, testing them to find which set of twiddles scored best, and then destroying the rest.

            Lather, rinse, repeat. It's a breeding programme, not learning. No individual chain-of-thought exists.

            It has no memory of prior actions because the copy you're interacting with is at best a "cousin" of the one that did those actions.

            1. LionelB Silver badge

              Re: Its not intelligent

              Straw man reply. I have not been trying to make any point as to whether Chat GPT is intelligent or not, mostly because I'm not sure what "intelligence" means - to other people, and indeed to myself.

              There seems to be an assumption that there is a common consensus on what "intelligence" entails. There isn't. My point has been that to many (including, apparently, some on this forum) intelligence seems to mean "exactly like a human" - which I find restrictive, self-limiting and thus pointless. Hence my questions about the extent to which people are prepared to include other biological organisms, and whether they think intelligence can only have a biological substrate.

          2. Khaptain Silver badge

            Re: Its not intelligent

            I believe that I answered that question quite clearly..

            1. LionelB Silver badge

              Re: Its not intelligent

              Hmmm... not seeing it.

              1. Khaptain Silver badge

                Re: Its not intelligent

                "Hmmm... not seeing it."

                Intelligence, in my definition, would be the skill that a sentient being possesses that allows for analysis, reasoning and thought. The word sentient being highly important, as non-sentient beings do not possess intelligence.

                I believe that only a biological being can be sentient in nature. And yes, it therefore covers biological things other than just human beings; we can easily consider dolphins, octopuses, etc. as being intelligent.
