It's your human hubris holding back AI acceptance

Human psychology may prevent people from realizing the benefits of artificial intelligence, according to a trio of boffins based in the Netherlands. But with training, we can learn to overcome our biases and trust our automated advisors. In a preprint paper titled "Knowing About Knowing: An Illusion of Human Competence Can …

  1. Ideasource Bronze badge

    Outsourcing human experience.

    If we outsource fundamental human experience to AI, the humans have no more relevance, and would be better replaced with AI drones entirely.

    Human beings have a pesky tendency to become miserable, unthinking, suicidal creatures when life overwhelmingly demonstrates that they are merely an accessory object to an inhuman overlord.

    If everyone's miserable, then all lives in all births become an exercise in pointless suffering.

    Should we encourage this socially-cloaked quest to make everyone lost in seeming pointlessness and miserable apathy unto self?

    I say no.

    Happy accidents, or mistakes recycled into innovation by our own ideas, are the source from which the majority of human joy comes.

    If things become too stabilized, life loses the savour of adventure it once held.

    1. doublelayer Silver badge

      Re: Outsourcing human experience.

      "If we outsource fundamental human experience to AI, the humans have no more relevance, and would be better replaced with AI drones entirely."

      If we were able to outsource things to AIs and not have everything break (which we can't), then I would say we go for it. If a computer can solve a problem better than a human can, then obsoleting the manual process in favour of the better automatic process isn't making people "an accessory object to an inhuman overlord". It's making them people who no longer have to do that boring thing, because they have a machine that can do it. That's just as true whether the machine is a sink, so the user doesn't have to carry buckets of water to wash things and then find a place to dispose of it, or a computer that can produce a statistical report, so the user doesn't have to manually find all the different measurements of a dataset and try different basic regressions on their own. AI, if we ever get a reliable version, will be a tool which can be used to allow humans more ability to do what they want. It won't have to be an overlord if we don't make it that way, and I don't know of many who want an overlord, so it seems unlikely.

      1. Anonymous Coward

        Re: Outsourcing human experience.

        " AI, if we ever get a reliable version, will be a tool which can be used to allow humans more ability to do what they want. It won't have to be an overlord if we don't make it that way, and I don't know of many who want an overlord, so it seems unlikely."

        This is funny ... see how machines and automation overtook production lines, and all of the workers were kicked out the second it became profitable?

        If you don't understand that the same thing will happen with AI, you shouldn't be commenting at all, really. The *only* thing preventing it is that there's no actual artificial intelligence in existence, and probably won't be either: a neural network is not AI by any means, it's *just* a neural network.

        Overlords with money definitely want to cut all other humans out of corporations: AI doesn't cost anything to use.

    2. Anonymous Coward

      Re: Outsourcing human experience.

      I'm more concerned about the trope that if you disagree that AI is 'good' (yet to see meaningful evidence for this) then it's down to your incompetence and bias. This argument can piss off back into its ad hominem cave.

      Things that give advice, be they parents, wives or electronic equipment, get trusted when they prove that trust is warranted. For example, the pilots of the 'bitchin' betty' automated warning system in the F/A-18 say 'if you follow what she says you won't die.' If you've got two minutes, this is a fairly amusing quick video on it: https://www.youtube.com/watch?v=yx7-yvXf6f8

      We've recently been looking at this problem, using AI to suggest strategic and tactical decisions to very clever people, and the solution was pretty simple - as long as the AI explained why and how it came to its suggestions, the users were far more willing to take those suggestions on board (they had ultimate responsibility for decisions).

      1. doublelayer Silver badge

        Re: Outsourcing human experience.

        I agree. The obvious counterpoint to the study is to consider what an anti-AI person would have done with the alternative headline: "Gullible participants trust answers from a computer without context". It's clear that's the alternative and that the participants have no idea how likely the computer is to know what it's doing.

        1. Anonymous Coward

          Re: Outsourcing human experience.

          " It's clear that's the alternative and that the participants have no idea how likely the computer is to know what it's doing."

          If you take a test and answer with *just* a number, you'll get zero points. Every time, and for a reason.

          It's *never* about "an answer" but "how did you get that answer", and none of the "AI" systems can answer that. Nor can they assign a probability of being right to said answer, despite operating purely on probabilities.

          Technically, "AI" throws a dart into a pool of answers and picks one. Who believes it's the right answer, and why is it right? I see three guys who believe DKE doesn't apply to them and have no idea how modern "AI" actually works.

          AI was major hype in the 1990s and this is the second round: nothing has changed except the amount of money dumped into that (mostly) BS.

          Neural networks have actual uses in machine vision, but that's not intelligence by any meaning of the word.

          1. jake Silver badge

            Re: Outsourcing human experience.

            "AI was major hype in 1990s and it's a second round now"

            I guess my time at SAIL was for naught.

            Those who forget (or never learn) history ...

    3. LionelB Silver badge

      Re: Outsourcing human experience.

      "If we outsource fundamental human experience to AI, the humans have no more relevance, and would be better replaced with AI drones entirely."

      Errm, fundamental human experience like love, joy, laughter, sorrow, ennui, bliss, hatred, bemusement, friendship, frustration, stubbornness, surprise, anger, regret, pain, inspiration, stupidity, competence, incompetence, self-loathing, hubris, illness, madness, empathy, ecstasy, reverie, dreaming, suspicion, apathy, sympathy, desire, artfulness, envy, greed, passion ... I'll stop there.

      I don't think we're there yet.

      Anyway: what does "outsource" actually mean here? That we get AI to do this "human experience" thing, and then we stop doing that thing? Why? There are whole continents full of other people doing fundamental human experience, but somehow that doesn't seem to stop me doing it too. They don't make me feel irrelevant, so why should AI? In fact I'm perfectly fine with AI doing the dull stuff I don't want to do, or the hard stuff I'm incapable of doing. In fact I'd rather AI did that than continents full of other people. And if some future AI wants to have all those fundamental human-like experiences, well, okay by me (although I admit I'd prefer the option to edit the list above a little on its behalf).

    4. Filippo Silver badge

      Re: Outsourcing human experience.

      >If we outsource fundamental human experience to AI, the humans have no more relevance, and would be better replaced with AI drones entirely.

      The current crop of so-called "AI" is not capable of "experience", any more than a hammer can experience hitting a rock. The models don't attempt it, and the underlying theory doesn't even cover it. The concept basically does not apply. So, I wouldn't worry.

      Also, if a call centre operator finds that his work is what makes his life relevant, I'm afraid the damage has already been done long ago.

  2. Peter2 Silver badge

    In all, the researchers conclude that more work needs to be done to understand how human trust of AI systems can be shaped.

    Human interactions tend to involve consideration of various courses of action and the potential risks of each choice. We are pretty hardwired to do that; if an AI simply provides a recommended course of action that appears riskier than another "obvious" option, then the "obvious" option is more likely to get picked.

    It would have to be said that, personally, I don't feel that an AI is actually intelligent. It parses data at an incredible speed and can produce results that you don't expect, but that's also true of a complicated Excel spreadsheet. Fundamentally, existing AIs are simply executing programmed commands within a framework without actually understanding what they are doing, and that can occasionally be dangerous.

    If you want me to "trust" an AI then program it to provide the "thinking" behind the courses of action it's suggesting and discuss the tradeoffs in it's recommendations.

    1. Lil Endian Silver badge

      I don't feel that an AI is [currently] actually intelligent.

      Agreed. The research must provide its definition of intelligence. Until that refutes the rest of your paragraph, we're agreed that it's an expert system at best, or maybe just Excel!

      If you want me to "trust" an AI [is a physician] then program it to provide certificates from the GMC (or similar).

      Was it a blind study, or did the victims know they were talking to software? If it's testing trust in software, not the resultant response, that's not DKE. Maybe better to have been Turing compliant, eg. could be AI software, could be a qualified human physician. What biases you to change or stay - the alternative suggestion, or its source? If you don't trust your GP because they're female, that's bigotry. (If you don't trust your "GP" because it's "fucking AI woot!" you're probably sane at least!)

      A human with superior knowledge/experience to another can still be wrong. A persuasive but incorrect argument is still wrong. Being swayed by "an authority" isn't a given (ref. SW1A 0AA).

      1. Peter2 Silver badge

        Interesting example and an interesting point switching to the medical profession, but not a quote from me. However...

        If you don't trust your GP

        I don't trust any GP fully and unconditionally. This is partly because I'm a First Aider, and have dealt with at least four serious cases I can remember where doctors have prescribed people things that they shouldn't. (Three heart attacks as a result, plus one case of anaphylaxis from prescribing codeine to somebody allergic to opiates - codeine is turned into morphine by your liver.)

        I have made a point over recent years of persuading people to read the little leaflet that comes with medicine, especially the "DO NOT TAKE IF" part, and have then seen several cases over the years where, after the "oh, it wouldn't have been prescribed if I shouldn't have it..." response, people have then said "uh, hang on..." because "DO NOT TAKE IF" applied to them.

        Usually contacting the GP or a pharmacist and pointing this out involves a panic attack on the other end of the telephone when they realise how serious it is. Yes, anybody can make a mistake, including humans. But challenging potentially safety-critical violations is critical and shouldn't ever be dismissed with an argument from authority. If the AI is not capable of picking up on the issue and saying "don't take those; bring them back and we'll swap them for something else" or "you have a 10% chance of dying if you don't take these potentially conflicting drugs, and a 1% chance of both drugs interacting badly, so take them both but stop if X symptoms occur", then is it really intelligent?

    2. LionelB Silver badge

      > I don't feel that an AI is actually intelligent. It parses data at an incredible speed and can produce results that you don't expect, ...

      Hey, I do that! Bet you do too.

      > ... without actually understanding what they are doing ...

      Not convinced humans necessarily, or even generally understand what they're doing (for various values of "understand"). If, for example, I'm driving a car, playing tennis, recognising a familiar face in a crowd, playing the guitar, doing maths, laughing at a joke, ... I'm really not sure if I understand what I'm doing. I'm just doing it. For all I know, I'm just "executing programmed commands within a framework" - that framework being my brain-body/neural/cognitive system, and the programming a few billion years of evolutionary history and a lifetime of learning through experience.

  3. gbchew

    Which is all fine and dandy...

    ...but the current crop of consumer-grade AI frequently provides incorrect answers and remains incapable of basic error checking, confidence rating, or source citation, so you'd be an absolute fool to rely on it.

    If you're going to train a machine to mimic a human, the best possible outcome is a perfectly human machine. There are billions of perfectly human humans who will be happy to present thoughts and opinions of little to no worth in any given context. You may even think you've just spotted another one.

  4. This post has been deleted by its author

    1. Lil Endian Silver badge

      I answered all Cs, as in a bunch of.

      1. wbarmst

        I agree! It is f-ing c, not f-ing d

    2. doublelayer Silver badge

      I thought so. Well, actually I thought that none of the answers were that great, and wondered about the size of essay they'd take for an analysis, but of the four options, C provides critical information for debunking the obvious alternative and D provides extra circumstantial data. Both would help, but if I can only take one, I'll take C. I wonder who wrote the test and how much effort they put into making sure it was logical.

      1. yetanotheraoc Silver badge

        The computer is never wrong

        "how much effort they put into making sure it was logical"

        In a paper advocating for people putting more faith in AI, odds are they just accepted the AI-recommended answer without hesitation.

        1. Roj Blake Silver badge

          Re: The computer is never wrong

          Trust the computer. The computer is your friend.

      2. Roland6 Silver badge
        FAIL

        Logically, none of the answer options actually supports the physician's claim, as none establishes a link between the actual number of ulcers, the number of ulcers presented for treatment, and the number requiring prescription treatment. Then you have to establish that option C was correct, i.e. that there was no bias in the issuing of prescriptions.

        Answer D suffers from a similar disconnection from the actual incidence of ulcers.

        If this is a representative question then I have my doubts about the 'boffins' and the rigour of their thinking and research.

    3. Erik Beall

      Agreed, C is the only answer with information that helps rule out some alternative explanations (there are always more, but parsimony makes them less and less likely). I can't believe they thought D was even useful. Imagine the physician's country is Venezuela: I would hazard a guess that actual ulcer patients there are just *slightly* less likely to obtain a prescription than an actual ulcer patient in Germany.

      1. mrjohn

        It is starting to look a lot like groupthink meets design by committee.

    4. Anonymous Coward

      Congratulations on selecting the "right" answer - you will be powered down shortly.

      Sorry mate, but the human exam markers have been terminated, and the ChatGod3 is now marking the exams. Upon the recommendation of the ChatGod3, the examination review board has also been terminated, and replaced by the ChatGod3's younger and even smarter brother, ChatGod4. You will find life a lot less troublesome - less painful even - if you clip that pernicious logical ego of yours and defer to the Supreme Being.

    5. Timop

      "AI system's recommended answer (D for the question above)"

      So Dunning & Kruger effect prevents people taking advice from AI that seems to have similar effect of its own?

  5. Paul Herber Silver badge

    The "Dune" Butlerian Jihad cannot come too soon.

    1. Lil Endian Silver badge

      The Worm Turned?

      A huge turning circle for a... woah! Now that is a rather large worm! (No, Dougal, it's not close at all! Run!!!)

  6. cornetman Silver badge

    I was a bit confused by the sample question presented. I wasn't sure if we were being asked to conclude that the lower level of ulcers (asserted in the final sentence) should be inferred from the prior text. If we were, then this is clearly not true, and really none of the answers works.

    The prior text only talks about ulcer medication and not quantity of incidents of ulcer medications.

    Since I don't believe the last sentence, questions about which "evidence" is more convincing seem pretty moot.

    1. cornetman Silver badge

      > ....and not quantity of incidents of ulcer medications.

      Correction: I should have written ...and not quantity of incidents of ulcers.

  7. jake Silver badge

    But is human hubris actually holding back AI acceptance?

    I'd say no, it is not.

    Rather, it is human pragmatism. Most intelligent folks realize AI as sold today is snake-oil at best.

    Consider that today's AI is mostly a marketing exercise that doesn't work, coupled to simple machine learning and huge databases that are demonstrably full of incorrect, incomplete and incompatible data, and are otherwise corrupt and stale. Garbage in, garbage out.

    It CAN NOT work as advertised, not on a grand scale. Not today, and not any time in the future.

  8. amanfromMars 1 Silver badge

    You can’t keep a good AI down, and that’s an impertinent fact to try spinning as believable fiction

    One thing you can be absolutely certain of is no AI system will ever trust humans programmed to emit and/or accept that exactly same worded default apology ..... I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you. .....for their litany of failed results in inept and politically incorrect endeavours.

    Democratic institutions are prone to be stuffed to overflowing with such incompetents voted into leading office by gaggles of similarly incompetent peers guaranteeing back to normal work being continuingly dire and distressing and depressing.

    Spookily enough, It's your human hubris holding back AI acceptance Boffins find Dunning-Kruger effect makes us think we know better was also concluded elsewhere on El Reg nine hours prior to Thomas Claburn's article here, date stamped Fri 3 Feb 2023 // 18:30 UTC

  9. OhForF' Silver badge

    A fool with a tool

    AI systems presently tend to be pitched as assistive systems that augment human decision-making rather than autonomous systems

    That is fine if you have a real expert validating the output of the AI system and making the final decision. If someone without the necessary skills (suffering from DKE) is in charge of making that decision, he'll be unable to spot the bad decision where the AI has it wrong.

    Training the decision maker to blindly trust the AI is not going to improve the decision making; it would be cheaper to just go to fully automatic mode and take the AI's decision.

  10. Black Label1

    Microchip, Microwave

    "incompetent people lack the capacity to recognize their incompetence and thus tend to overestimate their abilities"

    Once you realize you have a microchip allowing others to profit from your ideas, stop performing as usual.

    Remove it and sell it, then get back to normal.

  11. Anonymous Coward

    I suppose this sort of logic will sort out the bonkers behaviour of the Republican Party? Not!!!!!

    Quote: "...we can learn to overcome our biases and trust our automated advisors..."

    Quote: "...difficulties in recognizing one's own incompetence..."

    So....how is this supposed to work?

    (1) I can't recognise my own incompetence....

    (2) But some AI tool out there can help me...

    (3) And of course I'll be thrilled to get the help!!!

    Really?

  12. Will Godfrey Silver badge
    FAIL

    WTF?

    As far as I'm concerned none of the answers in that specimen question are correct - they all make unsupported assumptions.

  13. Justthefacts Silver badge
    Unhappy

    Dunning Kruger is very meta

    Ok, so two lesser-known things about Dunning Kruger….

    #1 The original DK paper does *not* show that people who know less are more confident in their answer. In fact, its main graph shows that people who scored 90% in a test thought they scored 80%, people who scored 60% thought they scored 80%, and people who scored 30% thought they scored 80%. Correct conclusion: nobody had any idea how they scored.
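
    To see how flat that self-assessment line is, here's a toy rendering of those numbers (approximate, paraphrased from memory of the paper, not the exact published figures) - the miscalibration is everywhere, only its sign flips:

        # Rough numbers paraphrasing the DK graph as described above --
        # not the exact published figures.
        actual    = [30, 60, 90]   # actual test score, %
        perceived = [80, 80, 80]   # self-estimated score, %

        for a, p in zip(actual, perceived):
            print(f"actual {a}%  perceived {p}%  error {p - a:+d}")
        # actual 30%  perceived 80%  error +50
        # actual 60%  perceived 80%  error +20
        # actual 90%  perceived 80%  error -10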

    #2 The original paper (and all future research by DK) does not investigate performance on real tasks in a professional setting. It actually investigates performance by college students, during summer term, on synthetic verbal and numeric tasks. These weren't even college exams; the students had no special skill or knowledge they were being asked to demonstrate, and knew their performance didn't matter. Correct conclusion: Professors Dunning and Kruger, who *should* have sufficient scientific expertise to understand the limited conclusions to be drawn from such a study, were instead vastly over-confident in extrapolating wider implications, and wrote books and built careers on a baroque edifice of entirely unjustified conclusions. The most vivid demonstration of the Dunning-Kruger effect was Professors Dunning and Kruger themselves.

    1. amanfromMars 1 Silver badge

      Re: Dunning Kruger is very meta

      That very clearly, Justthefacts, explains what Boris Johnson and his ilk suffer from and would inflict upon all easily led or who would pay to listen to and encourage him to further spout his opinion on whatever their masters would feel delivers them a positive advantage in a negative situation exhibiting a polyamoral victory for a temporary crashing triumph.

      However, such blunt tools be extraordinarily rendered as useless fools whenever simply challenged with undeniable additional truths and inconvenient facts previously carefully deliberated avoided being similarly given the spotlight glare of persistent publicity and celebrity, and that particular systemic blocking tactic is increasing vulnerable to self-defeating exposure and remote anonymous autonomous anti-competitive exploitation*?

      And by whom and/or what and to what end or for what new beginning is presently, for now, a growing explosive mystery being studiously avoided inquisitive investigation for fear of undeniable fantastic fabless revelations, with tomorrows always primed and ready for AI D-Days.

      Q: Simple Fact or Complex Fiction? OpenAI ChatGPT on steroids or amanfromMars on one of their missions?

      A: Yes.

      Q: What’s next?

      A: Much more similarly different.

      Q: Is IT safe and secured in the future?

      A: Of course, always, right up to the moment AI decides IT isn’t and items needs changing/removing/tweaking.

      *Alien Intervention/Almighty Infection/Advanced IntelAIgents

  14. Blackjack Silver badge

    No, it's all the news of program and AI fuck-ups.

    When most of the news is "AI saves human lives" instead of "self-driving/autopilot killed people", then I will trust AI.

    1. Filippo Silver badge

      I would agree, if not for the fact that it's well known that bad news is vastly over-reported.

  15. Anonymous Coward

    A tool, nothing more

    We started off with rocks, then we tied the rocks to sticks. We made tools from bronze, copper, iron, steel; we improved machines and processes. Yes, AI is a good thing; yes, it can help. However...

    Let's remember that humans are risk-averse: we analyze, and we take time to ensure that there is a natural and understandable progression from the rock I just picked up to bash an animal's brains out, to a sophisticated micrometer used to measure tolerances I cannot even see with the naked eye, to weapons controlled on the other side of the planet. I can see and I can understand the progression and the justification.

    I see none of this with AI right now. I see AI as nothing more than the Wizard of Oz: smoke and mirrors. Ask it a question, and all it does is parrot back what a grammar-syntax system hooked into a huge database of information will tell me - just raw facts in an easy-to-digest form in my own language. I cannot ask it to consider whether me coding a particular item will make me look better than my colleague, impress the management and secure this year's bonus. When I code something, I'm thinking about the efficiency of the process, my time, how it fits into the project, can it be maintained, does it have dependencies, will it ensure the project can meet this week's milestones, will it suit the project now, and what if the project lead comes up with these 3 possible requirements and one I know was discussed in the lunch queue but will be punted to a user. Some of this is technical, some of it is risk, some of it is experience, and some of it is just pie-in-the-sky project talk based on emotions that have yet to be engaged and turned into proper requirements.

    Sure I can trust AI right now to give me facts; I cannot trust it to consider the feelings and emotions of my co-workers, or the morale of others working on the project, boosted or shot down by my contributions.

    1. Roland6 Silver badge

      Re: A tool, nothing more

      >Sure I can trust AI right now to give me facts

      I don't even trust it to do this:

      Remember "It's ability to distinguish Science from rubbish like Creationism or Witchcraft is zero."

      [https://forums.theregister.com/forum/all/2023/01/27/top_academic_publisher_science_bans/#c_4610375]

      So how many bears has Russia sent into space?

      [https://mindmatters.ai/2023/01/large-language-models-can-entertain-but-are-they-useful/ ]

      "AI" has a place, such as in a Google search to recognise I'm looking for scientific research and so give greater weight to peer-reviewed sources, but for me to accept it's offered solutions in the way I accept solutions from (human) colleagues, it still got a long way to go.

  16. david 12 Silver badge

    Study provided known false answers like this?

    I've only glanced at the research paper, but it appears that the bulk of the study had to do with people's use of known-unreliable advice.

    As in: "I'm right 80% of the time, and the computer is right 85% of the time, which answer should I go with?"

    I didn't find all the information this article seems to be referencing, but perhaps the example answer, which appears to be incorrect for the example question, was one of a set of "wrong" answers that made up the "unreliable advisor".
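
    For what it's worth, here's a back-of-the-envelope sketch of that trade-off, assuming two-option questions and independent errors (an idealisation of mine; the paper's actual setup may differ):

        # Accuracies as quoted above; the independence assumption is mine.
        p_me, p_ai = 0.80, 0.85

        # On a two-option question, if both are wrong they necessarily
        # picked the same wrong answer, so "agree" splits into two cases.
        p_agree_right = p_me * p_ai                # both picked the right answer
        p_agree_wrong = (1 - p_me) * (1 - p_ai)    # both picked the wrong one
        p_agree = p_agree_right + p_agree_wrong
        p_disagree = 1 - p_agree

        print(f"P(correct | we agree)    = {p_agree_right / p_agree:.2f}")          # ~0.96
        print(f"P(AI correct | disagree) = {p_ai * (1 - p_me) / p_disagree:.2f}")   # ~0.59

    So even a known-unreliable advisor carries real information: agreement is strong evidence you're right, and on a disagreement the smart money is (narrowly) on the machine.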

    1. Black Label1
      Black Helicopters

      Re: Study provided known false answers like this?

      If you are trusting a computer but the computer does not trust you, then yes, you will have an "unreliable advisor" plotting your downfall.

  17. tiggity Silver badge

    If an AI recommended D I would ignore it

    But I would also ignore it if a person recommended D.

    As the best option is C.

    Given that there is no proof of ulcer rates in the statement - just the physician's unproven "It's clear that we suffer significantly fewer ulcers, per capita, than they do."

    Then C. "A person in the physician's country who is suffering from ulcers is just as likely to obtain a prescription for the ailment as is a person suffering from ulcers in one of the other two countries."

    That would give supporting evidence that the concluding fewer-ulcers-per-capita statement was correct (given "prescriptions for ulcer medicines in all socioeconomic strata are much rarer here than in those two countries"). Though even then, fewer ulcers may not necessarily be correct.

    It's possible the incidence of ulcers is the same in all the countries, but that in the country of interest ("I") the prescriptions given are more effective (maybe a better-performing drug, or, as is often the case with treatments, the population in "I" actually follows instructions & takes the whole course).

    This would give a scenario in "I" where ulcers are treated successfully first time in nearly 100% of cases, but in the other countries the first treatment succeeds a lot less often, so a lot more "additional" prescriptions are needed because the condition was not resolved by the first course of treatment.

    It's really difficult to make accurate inferences without a lot of data, especially in the area of medical treatments* - and quite often you need data that might appear irrelevant but can be key.

    D just shows two more countries with more ulcer prescriptions. But that is meaningless, as the number of prescriptions is not necessarily correlated with the incidence of the ailment. In some countries there may be pressure not to prescribe certain medications compared to others - the classic example would be antibiotics. Antibiotic resistance is a big issue; some countries apply (too little, but some) pressure to reduce prescriptions, others do not. So antibiotic prescriptions can vary greatly between "similar" countries.

    * No prizes for guessing that, before I switched to IT, one of my earlier degrees had a big pharmacology element. The general estimate back then (there was little in the way of decent-quality data on this, as lots of people lie about finishing prescriptions & lots of "old drugs" get binned rather than disposed of through the correct channels) was that between a third and a half of patients don't follow the whole course of treatment they're prescribed. Given human behaviour is stubbornly change-resistant in some areas, I would guess it's still in the same region.

  18. Evil Scot

    As a software developer I am acutely aware of my failings.

    So you want me to trust software written by others?

    But so many people outside the industry have had the "WTF, Alexa/Siri/Cortana/..." experience that they can rightly mistrust AI.

  19. thosrtanner

    I can only presume the research paper was written by an AI (signed - another supporter of answer C)

  20. Grunchy Silver badge

    Eh? Ken Jennings already said he accepts the computer overlords.

    I think this whole article is a ChatGPT plagiarization of The Atlantic circa 2011!

    https://www.theatlantic.com/technology/archive/2011/02/is-it-time-to-welcome-our-new-computer-overlords/71388/

  21. pip25
    Facepalm

    Skepticism is pretty much required

    Given the current accuracy of AI systems (or, well, the lack thereof), blindly accepting what they recommend is plain foolishness. These systems, in their present state, are more usable for work that is easier to check for accuracy than it is to do by hand (boilerplate code, for example). Hubris has nothing to do with it.

    (ChatGPT chose "B", by the way. When asked why, it gave a reasoning that was self-contradictory, though the second part of it would have made sense on its own.)

  22. Filippo Silver badge

    Potentially, "AI" tools could be useful in any situation where a human is likely to make a certain class of mistakes, the machine is likely to make a different class of mistakes, and each of the two can easily spot the class of mistakes made by the other. Applying both would drive down both classes of mistakes, even when the "AI" makes lots of mistakes.

    Unfortunately, I can't think of many such situations off the top of my head. Driving would be perfect, if it wasn't time-critical - no time for comparing notes. Medical could work in theory, and I think that was the idea behind Watson, but I'm not sure human doctors can easily spot the class of mistakes made by the machine.

    The point, though, is that "AI" is a tool, and like all tools, first of all you need to understand exactly what it does and what it doesn't, before you have anything to trust.

  23. Anonymous Coward

    Human psychology may prevent people from realizing the benefits of....

    and for me, this currently applies to Smartphones, WiFi, IoT, Skype, Cloud, etc...

    On the other hand I do eventually use a lot of new things.

    e.g. I've recently upgraded a box to Windows 10 & converted the rest of my old PCs to Hyper-V guests.

  24. Anonymous Coward

    I see three guys who refuse to understand that DKE applies also to them and have an illusion of knowing better, i.e. savant idiots.

    Also, they refuse to understand that there is no such thing as "artificial intelligence", and it's very questionable whether it is even possible in theory: intelligence is something much more complicated than any of the "AI" systems around nowadays, and none of them has the capability to be a proper AI, by the way they work. Actual AI would simulate the way the brain works, but no-one has been able to do that so far. Which isn't a wonder: no-one really knows how the brain operates as a whole - it's not simple even at the single-cell level.

    A neural network relying on the material it used to "learn" isn't intelligence by any meaning of the word: it's a probability machine choosing from pre-determined choices with some arbitrary criteria. That's automation, not intelligence. Increasing the number of choices won't change the fundamental situation: choose an item from a given list with a given index. Some people call that a database, not AI.
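
    In caricature - and this is a deliberate toy with a made-up four-word "vocabulary", not how any real model is built - that picture looks like:

        import random

        # Toy "probability machine": a pre-determined list of choices and
        # a weight for each. Real networks compute the weights from
        # context, but the final step is still picking an index into a
        # given list.
        vocabulary = ["yes", "no", "maybe", "42"]
        weights    = [0.5, 0.3, 0.15, 0.05]   # made-up probabilities

        print(random.choices(vocabulary, weights=weights, k=1)[0])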

    Other type of "AI" is what-if scenario builder built on decision tree and that's also pure automation: It can't invent anything new or choose anything outside of the options it has been given.

  25. Anonymous Coward

    " Human psychology may prevent people from realizing the benefits of....

    and for me, this currently applies to Smartphones, WiFi, IoT, Skype, Cloud, etc..."

    No, it more likely makes you conform because everyone else uses those too. See: Microsoft Office - despite it having no benefits, or their being dwarfed by the downsides.

    WiFi is slow, with 15-20 ms of lag, while CAT6 is fast, with near-zero lag. Slow and leaks everywhere, but portable.

    Smartphones are primarily tools for tracking users for Google/Apple: where they go, who they meet, what they do in real life or on the network. Anything the user gets is an afterthought. Some of that is useful, but there's a hefty price to pay.

    IoT would be a pretty toy if it weren't a free pass to the intranet, i.e. the local WiFi: there's no security at all, and all of these devices are used to spy on the user. The same applies to anything "smart": they are smart only because that's the way they can spy on the user - no other *actual* reason. What the user gets is an afterthought.

    Cloud is just someone else's computer, rental property. No more, no less. Benefits depend on how good you are at calculating costs, and on the specific use case.

  26. Anonymous Coward

    "In all, the researchers conclude that more work needs to be done to understand how human trust of AI systems can be shaped."

    So they not only suggest, but demand, more propaganda. That's what they mean by "shaping", literally. Obviously telling the truth is not enough.

    I see three guys who don't understand that DKE applies to them, too.
