
'It looks sexy but it's wrong' – the problem with AI in biology and medicine

Biomedical visualization specialists haven't come to terms with how or whether to use generative AI tools when creating images for health and science applications. But there's an urgent need to develop guidelines and best practices because incorrect illustrations of anatomy and related subject matter could cause harm in clinical …

  1. Anonymous Coward
    Anonymous Coward

    It's worse when AI Slop pretends to have medical knowledge

    It's one thing if AI makes up capital cities of countries or messes up basic arithmetic, it's quite another when it creates chaos in the medical space.

    "You should eat at least one small rock per day as rocks are a vital source of minerals and vitamins”

    AI doesn't know 'no' – and that's a huge problem for medical bots

    FDA’s artificial intelligence is supposed to revolutionize drug approvals. It’s making up studies

    1. Version 1.0
      Meh

      Re: It's worse when AI Slop pretends to have medical knowledge

I've worked fixing humans in the technical medical world for about 40 years now. I don't see AI as making very different decisions from people's, but when people make a decision in virtually any technical field, human experts review it and verify that it's accurate and correct rather than just assuming it is - whereas AI seems to normally just assume it's correct. When people do that we see it as a serious failure ... for example, a patient being told "Take these pills to make you feel better" and an expert saying the next day in hospital, "Oh, he took 20 pills, but you only need one to fix the problem!"

      AI errors often match people errors.

      1. Anonymous Coward
        Anonymous Coward

        Re: It's worse when AI Slop pretends to have medical knowledge

Agree - the biggest problem with the use* of AI is the assumption that it's correct. We often accept a statement made by someone acknowledged as an expert (say, when your GP offers advice on a medical concern) but not when the same advice comes from a total stranger. Mind you, blind acceptance of whatever comes over the computer screen isn't new (it's rarely checked against an independent or verified source), so accepting incorrect information from AI is just one more step in our evolution from "intelligent" beings, able to think, to little more than bacteria responding to environmental stimuli.

        * I say "use" as the mad rush to build bigger models, using more resources, to make bigger profits (which are at the expense of those unable to make said profits) may well turn out to be a bigger problem for life on this planet. Again, though, life will adapt to whatever mess we make of it, even though we may not survive to see it.

      2. Philo T Farnsworth Silver badge

        Re: It's worse when AI Slop pretends to have medical knowledge

        > AI seems to normally just assume that it's correct.

        Actually, as many others have pointed out, AI doesn't "assume" anything, since, at least at this point in its development, it can't.

        Just call it what it is -- a computer program. It may have a giant, ravenous resource consuming, environment destroying database/training set, but it's still just a computer program, no more capable of assuming anything than a Tamagotchi or a desk calculator can be.

Just because it is programmed to refer to itself in the first person doesn't make it one.

        1. David 132 Silver badge
          Happy

          Re: It's worse when AI Slop pretends to have medical knowledge

          Absolutely. Stop anthropomorphizing them.

          They hate it.

    2. Anonymous Coward
      Anonymous Coward

      Re: It's worse when AI Slop pretends to have medical knowledge

Medic here. The scary thing is not the AI. But looking at the people around me, from professors high up to juniors at the bedside, you don't want to know how many are guzzling the Kool-Aid - and can't be convinced to stop and show the bloody academic doubt they were supposed to be educated with. From what I see, it won't be long before the first class action comes along...

      1. Wang Cores Silver badge

        Re: It's worse when AI Slop pretends to have medical knowledge

        You say this but would a society dependent on it even know to sue without being told by the chatbot?

I have to say I quite badly want the AI world they promise, because it conspires to make the 15 people with actual skills a new elite - especially in the United States, where Gen Z has just re-discovered "water-based cooking" on TikTok (what we know as making soup).

        Imagine the gawping consuming masses watching a writer actually develop their project with the same slack-jawed awe as you would get demonstrating fire to cavemen.

        "woahhh you wrote that without claude/grok? woawwwwww."

        1. Rafael #872397
          Unhappy

          Re: "woahhh you wrote that without claude/grok? woawwwwww."

          That would be preferable to what we have now in some circles: "No way you produced that yourself without using AI!". To the highly suspicious and inefficient "AI detectors"!

          1. Wang Cores Silver badge

            Re: "woahhh you wrote that without claude/grok? woawwwwww."

            >That would be preferable to what we have now in some circles: "No way you produced that yourself without using AI!". To the highly suspicious and inefficient "AI detectors"!

            There's no greater backhanded praise than being accused of being a hacker when you're playing CoD and rolling over the opposing team. I don't think it's the same for writing, but that might be because I'm a closeted english nerd wot' went into electronics and IT to feed myself.

            1. O'Reg Inalsin Silver badge

              Re: "woahhh you wrote that without claude/grok? woawwwwww."

You are assuming the accused won't be punished for it.

        2. Ken Hagan Gold badge

          Re: It's worse when AI Slop pretends to have medical knowledge

          "You say this but would a society dependent on it even know to sue without being told by the chatbot?"

          Sadly that is the wrong question.

The correct question is "Will they know when to sue?" and the answer is "Probably not. They'll wait until the body count is large enough."

      2. Doctor Syntax Silver badge

        Re: It's worse when AI Slop pretends to have medical knowledge

        "From what I see, it won't be long before the first class action will come along..."

It'll more likely be the first malpractice cases that eventually put the brakes on.

        1. Sherrie Ludwig

          Re: It's worse when AI Slop pretends to have medical knowledge

          The legal knowledge ones have already been well roasted by several judges when they hallucinate citations.

          Sanctions with a side of possible disbarment:

Mata v. Avianca: A 2023 US case where lawyers submitted a brief containing fake case citations generated by ChatGPT, according to The Conversation.

Michael Cohen case: In 2023, Michael Cohen's attorney submitted a motion with fabricated case citations generated by AI, according to NPR.

Anthropic case: In a 2023 copyright lawsuit against AI developer Anthropic, a data scientist cited a made-up academic report, which was later traced back to an AI hallucination, says Epstein Becker Green.

Georgia divorce case: A Georgia appellate court overturned a divorce judgment after discovering that the trial court's order relied on fictitious case law from a brief prepared by one party's attorney, according to Farella Braun + Martel LLP.

Mike Lindell case: In a 2025 case, a judge fined attorneys representing Mike Lindell for submitting a filing riddled with AI-generated mistakes, including fabricated cases, reports NPR.

UK High Court case: The UK High Court has warned lawyers to stop misusing AI after fake case law citations were identified, reported The Guardian.

          1. Anonymous Coward
            Anonymous Coward

            Re: It's worse when AI Slop pretends to have medical knowledge

OP here. I agree, but with respect... It is one thing for some lazy lawyer to produce more bullshit paper crap (except the invoice at the end, no doubt). It's another to amputate your breasts because AI said so (and we can process everything much faster, so revenue goes up). Oops, hallucination, so sorry, please keep calm and let's move on...

  2. CorwinX Silver badge

    A fundamental problem...

... with AI is that the results are always presented authoritatively, confidently and often convincingly.

    Even when being 100% wrong or sometimes even making things up and actively lying.

    When a human is asked a question we hopefully have the self awareness to sometimes say "don't know", "not sure, sorry".

    But these things don't have any sense of self-doubt or sense of their own limitations and are programmed to spew out stuff anyway.

    There are things AI can sometimes do well at - especially analysing images and spotting something that may have been missed - but nothing they say should be taken as gospel without verification.

    But on a more positive note - the likes of Google Gemini using voice input is not half bad for simple things. As long as you verify.

    1. Anonymous Coward
      Anonymous Coward

      Re: A fundamental problem...

      The thing with Large Language Models is that they are not Liars, they are Bullshitters. Liars know the truth but usually have a plan that requires lying, Bullshitters don't care about the truth, all they care about is whether they come across as convincing.

I suspect part of this is down to technical reasons - algorithmic limitations, for instance. A big part will also be commercial. LLMs are convincing about subjects that the human talking to them knows little about. That's where confidence comes in: it helps convince investors that LLMs are actually intelligent if they are designed to come across as very confident. Doubt doesn't sell to the gullible.

      1. ITMA Silver badge
        Devil

        Re: A fundamental problem...

        "Bullshitters don't care about the truth, all they care about is whether they come across as convincing."

        Sounds like politicians and sales people....

        1. Anonymous Coward
          Anonymous Coward

          Re: A fundamental problem...

          I disagree. Few politicians are Bullshitters (Boris Johnson comes to mind), the dubious ones (there are many non-dubious politicians) are usually Liars because they know the truth but lying politicians are usually more popular, hence they knowingly distort the truth.

        2. Anonymous Coward
          Anonymous Coward

          Re: A fundamental problem...

          That is what he said .....

          (Basically ANYONE who wants YOUR Money in THEIR pocket ... and doesn't care how that happens !!!)

          :)

      2. Filippo Silver badge

        Re: A fundamental problem...

        >I suspect part of this is because of technical reasons, algorithmic limitations for instance.

        Yup. Let's say you have a training set for an engine for suggesting holiday destinations in Italy, with 3 elements. The first is a professional confidently recommending Florence to an art lover. The second is a professional confidently recommending the Dolomites to a hiking enthusiast. The third is a professional confidently recommending Sardinia to a beach vacationer.

        The elements are all different, but they have one thing in common: they are all confidently presented. Which makes sense, because they are all mostly correct and created by a professional.

So, no matter what result the stochastic parrot spits out, it will be confidently presented, because there's literally no doubtful example in the training set. Even if it then randomly proceeds to recommend Florence to a beach vacationer. Please don't try to swim in the Renaissance fountains.

        You could "fix" this by adding more doubtful examples to the training set, but because doubtful examples are much more likely to be wrong, you would also be adding wrong information, which would make the model more likely to make mistakes even as it's more likely to admit to them.

        Basically, this problem is unfixable, as it's a fundamental property of how LLMs work.
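Filippo's point can be sketched in a few lines. In this toy "parrot" (the destinations and wording are invented for illustration, not from any real model), every training example is confidently phrased, so every output is confidently phrased too, no matter how wrong the match is:

```python
import random

# Toy "training set": every example pairs an interest with a
# confidently-phrased recommendation. None expresses doubt.
training_set = [
    ("art", "Florence is the perfect destination for you."),
    ("hiking", "The Dolomites are the perfect destination for you."),
    ("beach", "Sardinia is the perfect destination for you."),
]

def parrot(query):
    """A 'stochastic parrot': picks an answer at random, but can only
    reuse the confident phrasing, because that's all the training
    data contains - there is no 'not sure' to sample from."""
    _, answer = random.choice(training_set)
    return answer

# Every reply sounds equally authoritative, even when the beach
# vacationer gets sent to Florence.
for q in ("art", "beach", "beach"):
    print(q, "->", parrot(q))
```

Adding doubtful phrasings to the training data would let it hedge, but, as noted above, at the cost of also adding more wrong answers to sample from.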

        1. jake Silver badge

          Re: A fundamental problem...

          "as it's a fundamental property of how LLMs work."

          Shirley you meant to type "don't work"?

    2. Dr Paul Taylor

      Re: A fundamental problem...

      results are always presented authoritatively, confidently and often convincingly.

      Same happens with every advance in technology.

      Student essays that are word-processed probably get better marks than those produced on a typewriter, which probably got better marks than handwritten ones.

      We have the word "scripture" meaning religious stuff. It somehow acquired authority, even though the word only means "written".

      Socrates famously hated books, because you can't argue with them face-to-face.

      1. ITMA Silver badge
        Devil

        Re: A fundamental problem...

        "Socrates famously hated books, because you can't argue with them face-to-face."

        You can use books in a face-to-face argument. Just make sure you get one of those weighty leather bound tomes to wield.

        1. jake Silver badge

          Re: A fundamental problem...

          "Just make sure you get one of those weighty leather bound tomes to wield."

          Personally, I've found a simple message on a PostIt with just the right signature to be far more effective.

        2. Ken G Silver badge

          Re: A fundamental problem...

          I'm trying to remember what Pratchett wrote about the massive book "How to Kill Spiders" being the most used but least read book in the library of UU.

    3. LionelB Silver badge

      Re: A fundamental problem...

"Bullshit", "Liar", ... are anthropomorphisms. LLMs are doing exactly what they were designed to do: generate plausibly human-like responses to prompts, based on massive volumes of (mostly¹ human-generated) training data. That is what they do; that is all they do - and they do it rather (too) well.

Unfortunately, that is not how they're sold to the public, and consequently that is not the public perception of LLMs.

¹ Modulo pollution generated by... LLMs.

      1. teebie

        Re: A fundamental problem...

        "generate plausibly human-like responses to prompts"

        So... they are auto-bullshitters

        1. LionelB Silver badge

          Re: A fundamental problem...

          Hmph. Speak for yourself.

  3. Rol

    Tower of babble

In the same way that the tower, had it been constructed in ancient times, would have sucked up every resource and brought on the demise of humanity, we are hurtling toward that now.

How much time will the great minds of our age waste flicking through the pages of this new-age Tower of Babel before they eventually go mad and throw themselves into the abyss?

  4. Tron Silver badge

    Three points.

    Does anyone trace the sources of these inaccuracies? A new field is born! AI forensics.

    quote: This illusion of accuracy can lead people to make important decisions based on fundamentally flawed representations.

    Just like in politics. They wear suits, they pretend to know what they are doing, they make promises, you vote for them, you get screwed.

    Rats with big dicks? Reminded me of the noble Japanese artform of shunga (erotic books), which always gave the blokes supersized willies. Generations of Japanese women may have suffered from the offline harm of acute disappointment in consequence. Random example: https://www.maynardsfineart.com/auction-lot/kiokawa-shozan-japanese-1821-1907-a-pair-of_AE61080E76

  5. Anonymous Coward
    Anonymous Coward

    online misinformation

    Picture generators lower the bar for disseminating visual bs, but drawing and Photoshop skills were not that rare anyway.

  6. Frank Zuiderduin

    Not really AI

    The Dutch 'toeslagenaffaire', referenced in a link in the article, wasn't really caused by an "AI-based risk-scoring system". It's true a risk-scoring system was involved, but that could hardly be called 'AI'. It was a set of manually created/entered rules. If you call that AI, then every computer system ever invented deserves that qualification.

    1. chivo243 Silver badge
      Stop

      Re: Not really AI

      AI = Almost Intelligent. Close but no cigar! Close only counts in Horseshoes and Hand grenades and of course Atomic Bombs LOL

      1. dlc.usa

        Re: Not really AI

        Apparently Intelligent is better (except for that minority that can tell the output is Actually Unintelligent).

  7. Anonymous Coward
    Anonymous Coward

    Had to giggle ...

    The cut kiwi fruit (aka Chinese gooseberry) in the article's screenshot illustration (C) of a cell is hilarious.

    I am reassured that I have retained sufficient of my marbles that my recall of my discarded human anatomy studies of five decades ago can instantly determine that the illustration (B) of the knee joint is utter nonsense. ;)

Illustration (A) must be a Sontaran or Klingon skull - it appears the sella turcica is missing, there are a few extra foramina (holes), and the "big hole" (foramen magnum) seems too far towards the rear.

    1. Giles C Silver badge

      Re: Had to giggle ...

The cell drawing is more like a round cake box (or maybe a slice of overstuffed wrap) full of strange snacks, although I think there is also a sausage and a very small peach (or the kiwi fruit is massive).

      No biologist here but I am sure cells don’t look like that….

      1. HuBo Silver badge
        Alien

        Re: Had to giggle ...

Well, I for one am glad the genAI included a fresh kiwi in that compartmentalized TV-dinner-like human cell bento dish. It'd have been overly rich in fats, sugars, and salts without it, even with its many nucleoli.

        The genAI clearly values our health with this nutritionally balanced concoction, which I bet is delicious to top it all off, and will be a great help to extraterrestrial MDs striving to adapt their gastronomic practice to new alien worlds, in the distinctly parallel universes of extra yumminess!

    2. tfewster
      Facepalm

      Re: Had to giggle ...

      I know just enough to say that the patella is commonly known as the knee-cap, not the calf muscle that the diagram indicates...

From that, I'd assume the rest is rubbish too. But I had to study it, as it looks similar to the correct picture. And that's the problem with LLM BS: you waste so much time checking it that it's almost always easier to do it yourself.

    3. Benegesserict Cumbersomberbatch Silver badge

      Re: Had to giggle ...

      Also a big clue to AI generation - the total lack of activity in the frontal cortex.

  8. Gene Cash Silver badge

    That'll fit right in with American doctors

    They don't give a shit as long as your insurance pays up.

    In 45 years the only two competent doctors I've encountered are my dentist, who is Spanish, and my ophthalmology surgeon, who is Ukrainian.

    1. Anonymous Coward
      Anonymous Coward

      Re: That'll fit right in with American doctors

      Strongly disagree. Most of my doctors have been American, and most have been quite competent. One major exception was a Brahmin Indian, who seemed to think he knew everything even when shown otherwise. (Kinda like "AI".) Oh, and the American who was going through a messy divorce, whose bedside manner was awful; he soon was "no longer with the practice".

      1. Anonymous Coward
        Anonymous Coward

        Re: That'll fit right in with American doctors

Bearing in mind that 40% of UK medical students are creationists (so much for drug resistance) and, according to the BMA, 80% of doctors have learning disabilities (that sweet, sweet extra time in exams), I take anything a doctor tells me with a large pinch of salt.

        1. Anonymous Coward
          Anonymous Coward

          Re: That'll fit right in with American doctors

          Source?

          1. Anonymous Coward
            Anonymous Coward

            Re: That'll fit right in with American doctors

            "Source?"

            AI, obvs.

  9. Anonymous Coward
    Anonymous Coward

Quelle Surprise .... not !!!! ... how many times do we need to be told 'AI' is DROSS, a fiction !!!

    Over and over in numerous articles we are seeing the same issue !!!

    'AI' is being pushed into areas that it is NOT suitable for ... (Side issue ... What is 'AI' actually good & useful for ???)

    'AI' is a clever pattern matcher that 'knows' nothing BUT will give you a pattern for your efforts.

    The pattern is not 'THE' answer BUT 'A' answer (including made up answers) ... it is your problem to verify IF it is the answer you want !!!

    IF you have to verify everything you get from an 'AI' ... what is the use and purpose of 'AI' ???

    The lack of confidence in the output of an 'AI' is simply being ignored ... get 'AI' out there and HOPE the accuracy issue is solved later.

    WE the future users of this DROSS need to stand-up NOW ... before this nonsense is integrated into everything !!!

    Soon this will be in vital systems that you & I will depend on to survive ... the 'Computer says NO !!!' trope will NOT be a joke anymore.

    Ignore the greed and need for power & influence driving this at your peril !!!

    :)

    1. Anonymous Coward
      Anonymous Coward

      Re: Quelle Surprise .... not !!!! ...

AI - or better, Large Language Models (as they are not Artificial Intelligence, and most likely a dead end on the route to AI) - are not just a mediocre technology, they are also an economic failure. It's a massive money pit with no path to break even (let alone profit).

You'll like this thorough (and long) read about how the "AI" industry is essentially about one thing, losing money to pay NVIDIA: The Hater's Guide To The AI Bubble

To be honest, I'm grateful that I'm not in charge of running NVIDIA, because that is not a desirable job at all. They are having a few freak good quarters, which is nice for them, but it only means investors will expect continuous growth from an illogical high. And that is unsustainable. Look at what happened to once-skyrocketing Peloton as the pandemic lockdowns ended. NVIDIA should dedicate a lot of resources to preparing for a major collapse in revenue, make sure they are not starving the parts of the business that will keep it afloat in case they have zero sales of LLM GPUs, and probably keep a lot of cash on hand to weather the storm.

  10. SVD_NL Silver badge

    Childcare benefit "fraud" in the Netherlands

    >"this case involving an AI-based risk-scoring system to detect fraud and wrongfully accused (primarily foreign parents) of childcare benefits fraud in the Netherlands."

I'd like to highlight that a lot of these parents still haven't received their restitution (it's been 5 years since it was supposed to be paid), and beyond the monetary issues, some parents had their children taken away by child protective services.

    Do not trust AI when the consequences are real.

    1. Benegesserict Cumbersomberbatch Silver badge
      Mushroom

      Re: Childcare benefit "fraud" in the Netherlands

      I read today that DOGE's next assignment, after implementing its AI-generated list of whom to fire (which included those who maintain the thermonuclear arsenal), will be an AI-generated list of laws to repeal.

      I've got money on it including the one that bans private ownership of nuclear weapons, as that breaches the second amendment.

  11. Phil O'Sophical Silver badge
    WTF?

    Justified criticism

"Satirical criticism by such public figures (that people may tend to trust more than ‘legitimate’ news sources) can throw into question the legitimacy of the scientific research community at large, and the public can come to distrust (even more) or not take seriously what they hear coming out of the scientific research community,"

    That's not due to the entirely reasonable satirical criticism, it's down to the scientific research community publishing results without checking them, and it is perfectly understandable that such behaviour results in distrust. The community needs to fix itself, not shoot the messenger.

    1. HuBo Silver badge
      Alien

      Re: Justified criticism

      Quite! And journals that publish the groundbreaking slop (after pear-review by expert fruitcakes) should be promoted to sub-field specialized issues of either the The Journal of Irreproducible Results, or the Annals of Improbable Research, based on a competitive bidding process focused on merit and exclusivity of clams ... in my humble extraterrestrial opinion (imheo).

      Thought-provoking research of such poignant insight that delves meticulously into commendably intricate showcases of pivotal phenomenonces should certainly not ever be just plain laid to waste!? ... </mockery>

    2. Sam not the Viking Silver badge

      Re: Justified criticism

I agree with your conclusion, but worry that there is an element in our society that is happy to muddy scientific waters in favour of their absolute certainty. 'Keeping it simple' is all very well, but actually, life is complex. It's only simple to simple minds.

  12. Norfolk N Chance
    Coat

    It's hard to see why we should be surprised at the outcome here, after overcoming the disbelief that anyone with a shred of common sense (never mind medicine!) would try this at all.

    The worst part of this isn't the immediate risk of damage - it's the insidious pollution of general human knowledge.

    I suggest collecting anything published before this nonsense existed - bizarrely charity shops are full of this stuff for next to nothing.

  13. JamesTGrant

    Probably one of the most documented diagrams in Computer Science since 1981.

‘Draw a diagram of the 7 layer OSI model and show the location of the IP address sections of an IP header’

    Resulting images are very pretty and complete garbage.

    1. Simon Harris Silver badge

      Something that Copilot never fails to get wrong in multiple ways:

      'Draw a circuit diagram to show me how to create an astable oscillator running at 1000Hz using an NE555 timer. The duty cycle must be 50%'

Possibly one of the most-drawn schematics on the internet, yet Copilot comes up with various incorrect schematics, incorrect component values (even though it does manage to find the formula to calculate them) and various incorrect breadboard versions of the circuit.
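For what it's worth, the textbook relations Copilot keeps fumbling are trivial to sanity-check by hand. A minimal sketch, assuming the standard two-resistor NE555 astable (the component values below are illustrative, not a finished design; the basic circuit's duty cycle is always above 50%, so a small R1 only gets close - an exact 50% normally needs a diode across R2 or the 50%-duty variant):

```python
# Standard NE555 astable approximations:
#   f    = 1.44 / ((R1 + 2*R2) * C)
#   duty = (R1 + R2) / (R1 + 2*R2)   -- fraction of the period spent high

def astable(r1, r2, c):
    """Return (frequency in Hz, high-side duty cycle) for the basic
    two-resistor NE555 astable. duty is always > 50%, so keeping R1
    small relative to R2 approaches a square wave."""
    freq = 1.44 / ((r1 + 2 * r2) * c)
    duty = (r1 + r2) / (r1 + 2 * r2)
    return freq, duty

# Illustrative values: R1 = 100 ohm, R2 = 7.2 kohm, C = 100 nF
f, d = astable(r1=100, r2=7200, c=100e-9)
print(f"{f:.0f} Hz, duty {d:.1%}")  # ~993 Hz at ~50.3% duty
```

Copilot finding the formula and then botching the component values is exactly the kind of error this two-liner would catch.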

  14. BebopWeBop Silver badge
    Facepalm

Despite experience of it, it never fails to surprise me how the combination of laziness and the wish to cut costs by removing experts from the equation leads to the types of behaviour observed here.

  15. Herring`

    Make it stop

    Discussing these things in the context of writing code, my point was:

    - Yes, these tools can help expert developers as experts can tell wrong from right

    - If it replaces junior developers, where are the future expert developers going to come from?

    Also I am not convinced that automating the process of making shit up on the internet is a worthy goal.

  16. Blackjack Silver badge

    If something gives you consistent bad results and makes up things, then do not use it for anything that needs accuracy.

    Lies shouldn't be tolerated in medicine and science because that literally harms and kills people.

    1. FeRDNYC Bronze badge

      I'm concerned about the implication that there are contexts where lies should be tolerated.

      In what contexts, actually, should we accept being lied to without complaint?

      The obvious answer, or at least one obvious answer, is: "When being told a story" -- in other words, in any creative-fiction context.

      But that's just a bad definition of 'lie'. A fictional story isn't a lie, it's fiction. And even fiction has to be internally consistent, so a better definition of "lying" in the context of creative output is to violate the internal framework of the fiction. If a work of fiction contradicts itself, that's lying, and it collapses the fictional reality.

And guess what? LLMs do that ALL THE TIME. Even when using them to generate pure fiction, their output has to be carefully checked over for plot discontinuities, loss of narrative threads, and internal logical inconsistencies. So even in the contexts where it's "ok" to make stuff up, AIs still can't do it in a reliable fashion.

  17. Simon Harris Silver badge

    The enshittification of Google since AI

    Here’s a weird if useless thing Google’s done.

    Today I was thinking of a ghost story I’d read as a kid in one of the Sunday papers. Something in the style of M R James, but I have no idea who actually wrote it.

    So I typed a brief description into Google hoping it might find it. The AI overview returned a quite detailed description of the story which was more or less as I remembered it (it was about 50 years ago, so I was hazy on the details) and I thought *bingo* it’s found it!

    However there were no bibliographic details.

    I refreshed and got a similar, but not identical story, refreshed, another similar story.

    It turns out Google is just hallucinating stories based on my query rather than doing what I wanted (and what it would try to do before AI) which was to find the original story I was describing. The AI overview is completely useless to me!

    I wonder how many people just read the AI overview now and assume it’s real information and not a hallucination.

(Yes, I realise I'm anthropomorphising with ‘hallucination’ - substitute ‘made-up bollocks’ if you prefer)

  18. FeRDNYC Bronze badge

    The survey takers also sometimes used text-to-text models for captions and descriptive assistance

    ..."image-to-text", perhaps? Or was it really referring to respondents who use LLMs to punch up bland description texts?
