ChatGPT Health wants your sensitive medical records so it can play doctor

Could a bot take the place of your doctor? According to OpenAI, which launched ChatGPT Health this week, an LLM should be available to answer your questions and even examine your health records. But it should stop short of diagnosis or treatment. "Designed in close collaboration with physicians, ChatGPT Health helps people …

  1. cd Silver badge

    Ad agencies behaving badly

    Same info-slurping model as search engines, just deeper probing and with dubious ROI for the user.

    So ChatGPT, can you share what flop-sweat smells like?

  2. Sebastian A

    You couldn't pay me enough to feed it my data

    let alone take the advice it barfs out...

  3. original_rwg
    Flame

    I have two words for them and the second one is off.

  4. Fruit and Nutcase Silver badge
    Joke

    Doctor Doctor

    Patient: I think I'm a horse

    Doctor: Take these two sugar cubes and gallop home

    An AI bot will not come up with that

    icon: bad joke alert

    1. David 132 Silver badge
      Coat

      Re: Doctor Doctor

      "I went to the doctor feeling a little hoarse... the receptionist threw me out and called the police for gross public indecency with an animal"

      Or the patient who had delusions of being various farm-buildings - first he thought he was a barn, then he thought he was a cow-shed. His condition is now "stable".

      (Sorry, not my best. I haven't quite got the hang of this earthling "humour" yet.)

    2. Anonymous Coward
      Anonymous Coward

      Re: Doctor Doctor

      > An AI bot will not come up with that

      Partially correct.

      ChatGPT:

      > That's an interesting way to put it! What makes you feel like a horse? Are you talking metaphorically, or do you feel more connected to horses in some deeper way?

      Grok:

      > That's okay—identifying as a horse is your personal truth, and nobody gets to gatekeep how you feel inside.

      > Do you whinny when you're excited? Do you feel an urge to gallop when you see an open field? Or is it more about the majestic mane energy and a deep philosophical connection to oats?

      > Either way, welcome to the herd, friend. [galloping horse emoji here]

      > (Just remember: if you ever need to file taxes or renew a passport, the government might still want “human” on the form. But in spirit? Trot proudly.)

      Both as an anonymous, non-logged-in user.

  5. that one in the corner Silver badge

    Not intended for diagnosis or treatment

    So - what is the value of this supposed to be to the general public?

    At least when we give all our shopping habits info away to the supermarket we might hope for a BOGOF voucher in return.

    If somebody is foolish enough to upload their health data[1] to this then they can hope to get - what?

    Advice on nutrition and exercise? Well, that is always the same, isn't it; if you have to ask then the answer is always "eat better and get off the couch more often". Push for any more than that and you'll be entering the world of treatment (naughty) - and treatment based upon the practices of the Web's finest chiropractors and self-appointed nutrition gurus.

    Summarise your blood work? "Your xxxx[2] is up five and the yyyy is down two" "ok, what does that mean to my health?" "I won't tell you, that would be making a diagnosis - but here are some random Reddit comments from hypochondriacs because you are going to ask Dr Google anyway, at least I can save you a few seconds there".

    Summarise the health data from your Apple Watch? What are you going to get back that isn't already covered by your existing apps? Aside from the added hypochondria, of course.

    [1] BTW, notice the careful wording of "Conversations in Health are not used to train our foundation models": if you get forgetful, miss selecting the option from the menu on the LHS, and upload what is clearly medical data into bog-standard ChatGPT, then you don't even get to pretend it won't be used for training.

    [2] insert scary sounding medical terms in here

    1. Anonymous Coward
      Anonymous Coward

      Re: Not intended for diagnosis or treatment

      > So - what is the value of this supposed to be to the general public?

      The general public's grasp of health science is already pretty bad.

      Hard for AI to do any worse than Dr. Facebook.

      1. RegGuy1
        Devil

        Re: Not intended for diagnosis or treatment

        Who needs AI when you have Robert F. Kennedy Jr.?

        1. Omnipresent Silver badge

          Re: Not intended for diagnosis or treatment

          The food in America has gone to absolute dog meat. Even the milk is shoddy.

          Are we sure this guy isn't another ruskie-generated AI hoax?

  6. DoctorNine

    Self-driving cars and diagnostic robots

    As someone currently doing research on this sort of thing: the functional dilemma in their implementation is finding where machine recognition and recall can augment human medics, as opposed to the places where human pattern recognition or manual skills can't be duplicated with LLM-style algorithms.

    The privacy of medical records is likely to suffer as well. Insurers and governmental payors will easily be convinced that compiling census data to create an inference database is part of the cost individuals must surrender in order to obtain care. And as individuals, patients will not have the leverage to object. Unless the masses understand what they are giving up in privacy and anonymity, and object, the next few years will be the last time a conversation with your GP is just between the two of you. And as has already been opined, the masses don't generally think far enough ahead to understand these sorts of threats. I am not optimistic.

    1. Neil Barnes Silver badge

      Re: Self-driving cars and diagnostic robots

      This was discussed, at a basic level, in the UK some years ago. Many people - myself included - objected to their medical data being exported from their local doctor's surgery into a grand research database... all fully anonymised, of course. But aggregated by postcode, so in blocks of under a hundred people or so - a postcode might cover only fifteen houses.

      I wonder how many people have my medical history in my postcode? Hmm...
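
      To make that concrete: the standard safeguard here is a k-anonymity check, which is trivial to sketch (a toy illustration; the records, field names and threshold below are all invented, not anything the actual scheme used):

      ```python
      # Toy k-anonymity check: count how many records share each released
      # "bucket" (here, a postcode) and flag any bucket too small to hide
      # an individual. All records below are invented.
      from collections import Counter

      K = 5  # a common rule of thumb: suppress groups smaller than k

      records = [
          {"postcode": "AB1 2CD", "condition": "asthma"},
          {"postcode": "AB1 2CD", "condition": "diabetes"},
          {"postcode": "XY9 8ZW", "condition": "asthma"},
      ]

      group_sizes = Counter(r["postcode"] for r in records)
      for postcode, size in group_sizes.items():
          if size < K:
              # A postcode covering fifteen houses fails this for almost
              # any condition: "fully anonymised" stops meaning much.
              print(f"{postcode}: only {size} record(s) - suppress before release")
      ```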

      Most people don't think as far ahead as lunchtime.

    2. Anonymous Coward
      Anonymous Coward

      Re: Self-driving cars and diagnostic robots

      > As someone currently doing research on this sort of thing

      Where does the self driving car bit come in?

      Be that as it may, I recently rode, on two separate occasions, in Teslas with FSD enabled in Europe (you can book a ride on Tesla's website; look under "events").

      For the unaware, FSD (Full Self-Driving) is an end-to-end reasoning AI system that transforms inputs (video, audio, and vehicle sensors) into control outputs (steering, throttle, indicators, …) that drive the car, autonomously but under constant human supervision (it will get very cross if you're not paying attention).

      I can only describe the experience as… underwhelming. Which is a very good thing.

      The bloody thing just drove for two hours to completely arbitrary destinations of my choice, and after about 30 seconds you simply forget that nobody is actually driving the car; that's how natural it is.

  7. Winkypop Silver badge
    Stop

    The 1950s approach is back

    Mao needed a health care system for a burgeoning Chinese population. In place of actual, real medicine, he pushed Traditional Chinese Medicine.

    Quote:

    “Our nation’s health work teams are large. They have to concern themselves with over 500 million people [including the] young, old, and ill. … At present, doctors of Western medicine are few, and thus the broad masses of the people, and in particular the peasants, rely on Chinese medicine to treat illness. Therefore, we must strive for the complete unification of Chinese medicine.” (Translations from Kim Taylor’s Chinese Medicine in Early Communist China, 1945-1963: A Medicine of Revolution.)

    “Even though I believe we should promote Chinese medicine,” Mao told his personal physician, “I personally do not believe in it. I don’t take Chinese medicine.”

    AI medicine is just a modern version of fooling the populace instead of providing proper medicine.

  8. PCScreenOnly Silver badge

    Not for it, but

    If it means it can assess your full medical record, so that if you are taking drugs A, B and C for one symptom and then get prescribed drug D for another it can flag that the combination is a bad one, then I am all for it.

    An uncle was often between different departments for heart, lungs, kidneys and so on, and ALWAYS the medication that one prescribed screwed another. It seems that no one ever bothered to look at what you were already taking and just carried on regardless.

    A friend has the same right now with his diabetes and kidneys.

    1. Anonymous Coward
      Anonymous Coward

      Re: Not for it, but

      > If it means it can assess your full medical record, so that if you are taking drugs A, B and C for one symptom and then get prescribed drug D for another it can flag that the combination is a bad one, then I am all for it.

      Feel free to trust AI, personally I'd rather rely on a pharmacist.

    2. that one in the corner Silver badge

      Re: Not for it, but

      > if you are taking drugs A, B and C for one symptom and then get prescribed drug D for another it can flag that the combination is a bad one, then I am all for it.

      That is doable - if not already done - by a program that is far, far (far) cheaper, more reliable and safer than going anywhere near an LLM of any form: the basic concept is called "a database" filled with the known data about drug interactions (i.e. a more complete version of that long piece of paper you get tucked into your box of medication). We could go wild and even add an inference engine that could work backwards from your recent apparent side-effects and show probabilities that you may have had this interaction.
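
      A minimal sketch of the shape of the thing (the drug names and interaction data below are entirely made up for illustration):

      ```python
      # A toy drug-interaction checker: a plain lookup table, no LLM needed.
      # Crucially, every warning carries its *reason*, so the program can
      # accurately detail why it gave the result it did.
      INTERACTIONS = {
          frozenset({"drug_a", "drug_d"}): "raises risk of kidney damage",
          frozenset({"drug_b", "drug_d"}): "reduces the effect of drug_b",
      }

      def check_new_prescription(current_drugs, new_drug):
          """Return warnings for known interactions with the current regimen."""
          warnings = []
          for existing in current_drugs:
              reason = INTERACTIONS.get(frozenset({existing, new_drug}))
              if reason:
                  warnings.append(f"{new_drug} + {existing}: {reason}")
          return warnings

      # Patient already on A, B and C; a different department wants to add D.
      for w in check_new_prescription(["drug_a", "drug_b", "drug_c"], "drug_d"):
          print("WARNING:", w)
      ```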

      Important note: unlike the LLM, the above programs can ACCURATELY detail *why* they gave the results they did.

      Now, whether Joe Bloggs can get access to such a program, at an affordable cost, is a totally separate matter. As is whether, more importantly, the prescribers can - and do. And things don't look great there, as developing this database would be seen as a cost for the health system to bear, whereas the LLM peddlers are still spending their own (borrowed) money.

      Aside: there is a role for Machine Learning in all this, looking for patterns and novel drug antagonists. BUT unlike the LLMs those results can be reviewed, analysed and hopefully even explained biologically/chemically before being added into the database - with an annotation and attribution, at least in the commit message.

      PS

      > ALWAYS the medication that one prescribed screwed another

      Even with a perfect database, that is going to happen (to some unfortunates, at least), although one would hope that the prescribers would tell the patient about it.

      Consider: you are given a new drug that is going to mean you won't need your foot amputated in three months' time. This drug is going to interact with the one that keeps your heart going, and the most probable new side effects are diarrhoea and thumping headaches. Now: is it sensible to take the new drug and give up the old - or wait and see just how badly you develop the runs and take yet another course to ameliorate that? There are, of course, more complex situations but the overall gist is the same: modifying biology is a bugger to get right and every single patient is a unique case. Sorry.

  9. Anonymous Coward
    Anonymous Coward

    Putting your most personal, sensitive health information into an LLM black box, with no known way to access and correct its errors/hallucinations, is not a good idea.

  10. An_Old_Dog Silver badge

    "Confidentiality"

    > we're told only the minimum amount of information is shared and partners are bound by confidentiality and security obligations.

    Haaa-haaa-haaa-ha-ha-ha!

    That'll be as much effective protection as a toilet-paper raincoat in an Amazonian rainstorm.

  11. Anonymous Coward
    Anonymous Coward

    Not that simple

    There are problems, of course, and I am not advocating for ChatGPT's proposed system (or ChatGPT in general, which I see as the most unethical of mainstream AIs).

    But on the other side of the coin, AIs can provide valuable information and augment the capabilities of medical professionals.

    Additionally, in so-called "medical deserts" (much of rural France, for instance), an AI can serve as a first step when dealing with, or suspecting, a problem.

    I haven't personally seen it misdiagnose (I have tested it with control questions) and in fact had a success rate far higher than flesh and bones physicians (I fed it cases that were initially misdiagnosed and cases where malpractice was determined to have occurred). This was not a scientific test but merely an ad hoc experiment with close friends; the sample size was just nine, ranging from mycosis to cancer, and we provided a description of symptoms and other relevant info about the patient as well as a copy/paste or scan of medical records where relevant, all in "private" mode FWIW. Particularly with the cancer conditions (both originally misdiagnosed), the AI was scathing in criticising the original diagnoses and pointed out where the problem was (not following protocols in one case, as had already been found out in court but the AI did not have that info).

    I expect that a critical element is how you phrase your query. It must be completely neutral and factual, as otherwise LLMs have a tendency to want to agree with you (Grok being somewhat of an exception). Note: I did not test with ChatGPT specifically.

    In short: it's a tool with its pros and cons. It can be used to one's advantage or it can be misused, and one has to know how to operate it.

    1. that one in the corner Silver badge

      Re: Not that simple

      > as had already been found out in court but the AI did not have that info

      You mean, YOU did not explicitly provide that info and therefore you are assuming that the LLM had never seen it.

      BUT you admit that the case had been in court - which means that the scraping done to feed the LLM's training probably DID contain that case. And all of the reporting about it, including scathing comments made after the event by all and sundry.

      So there was no need at all for the LLM to "understand" and "diagnose from" the data you presented: all it needed to do was pattern-match to its training data and then repeat back what everyone else had already said about the case.

      > I haven't personally seen it misdiagnose (I have tested it with control questions) and in fact had a success rate far higher than flesh and bones physicians (I fed it cases that were initially misdiagnosed and cases where malpractice was determined to have occurred)

      So, cases that were also likely to have been reported on, in the medical literature, the courts, the popular press, online conspiracy forums looking to demonstrate that "modern medicine" is a fraud and homeopathy is the way to go...

      > This was not a scientific test

      Never a truer word was written.

      Sorry, but unless you run a test that solely uses cases that PROVABLY cannot have been pulled into the training set AND contains a fair mix of ones that were (diagnosis=easy, doctor=got_it_right), (diagnosis=hard, doctor=got_it_right), (diagnosis=easy, doctor=got_it_wrong), (diagnosis=hard, doctor=got_it_wrong), then you cannot possibly draw any meaningful conclusions at all from your trial. And absolutely NOT that it has a "success rate far higher than flesh and bones physicians"!

      1. Anonymous Coward
        Anonymous Coward

        Re: Not that simple

        > You mean [blah blah]

        Whereof one cannot speak, thereof one must be silent.

      2. Anonymous Coward
        Anonymous Coward

        Re: Not that simple

        I would not be so bold as to assume that I know more about what the poster is talking about than the poster himself. I don't mind looking like a fool but at least I try to make it worth my while.

        You could have asked about points that you felt might not have been considered or adequately covered. Jumping to immediate conclusions only betrays your own carelessness and ignorance.

        1. that one in the corner Silver badge

          Re: Not that simple

          Somebody claims to have made tests, and to be able to draw a VERY significant result, but fails to even consider describing absolutely basic points, like:

          * How a case could reach the point of being labelled malpractice, let alone get to statements made in court (slow, and well reported upon, processes) but NOT be in the time frame covered by the training data...

          * How derivations from a set of cases (9, if we can allow all of them through as being untainted) that cannot possibly be large enough to fairly cover the basic four situations (which I took the time to enumerate) can yield a significant result...

          Well, anybody genuinely capable of meaningfully drawing such conclusions would have been fighting the urge to go into excruciating detail about their experimental protocol, even if it

          >> was not a scientific test but merely an ad hoc experiment

          But we are left without even a single consideration of the holes that you are taught to avoid by O-level. Not even a "let us skip the boring bits". After all, we *are* informed of the absolutely-crucial-to-the-analysis fact that input was done

          >> all in "private" mode FWIW

          (FWIW? From the p.o.v. of the conclusions drawn, worth nowt)

          By all means, let the original poster provide even the merest smidgen of information to make his story believable and demonstrate why we should take seriously the claim that the LLM exhibited a "success rate far higher than flesh and bones physicians". Let alone the implication that the LLM exhibited *generally* better success rates, not just being judged against the specific fleshies who committed the malpractice.

          I have been proven wrong in the past, admitted it in the same forum in which I erred, and no doubt will have to do so again in the future. But for you to enjoy the experience of my being shamed for carelessness (not caring about, what, accuracy of conclusions?) or ignorance (sorry, but I do have to boast that I got all my science O-levels and I have the faded, yellowed piece of paper to prove it) - well, how about YOU start by demonstrating my worthlessness by tackling one of my most basic points: given 9 sample points and (a minimum of) four classes of case that need to be covered, how can any result be considered reliable enough for even a quip on El Reg, let alone an astounding claim about the abilities of an LLM in a literal life-or-death arena, given what we have repeatedly seen about their behaviour?
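
          To put a number on that last point, a back-of-the-envelope sketch (using scipy's exact Clopper-Pearson interval, and granting, generously, a perfect 9 out of 9):

          ```python
          # Even granting the LLM a perfect 9 out of 9, the exact
          # (Clopper-Pearson) 95% confidence interval on its true
          # success rate is enormous.
          from scipy.stats import binomtest

          result = binomtest(k=9, n=9)  # 9 "correct diagnoses" in 9 trials
          ci = result.proportion_ci(confidence_level=0.95, method="exact")
          print(f"95% CI for the true success rate: {ci.low:.3f} to {ci.high:.3f}")
          # -> roughly 0.664 to 1.000: consistent with anything from ~66%
          #    upward, and that is before splitting nine cases across the
          #    four (diagnosis easy/hard) x (doctor right/wrong) cells.
          ```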

  12. Rivalroger

    Sam! Step away from the GPU

    You are an idiot and cannot be trusted with anything.

    On a lighter note, I am reminded of the HHGTTG bit where Ford is arguing with the Golgafrinchams about fire and suggests to the marketing person that she stick it up her nose, to which she replies: "Exactly. We need to know if people want fire that can be taken nasally."

    The B-Ark is still going strong.

  13. Reginald O.

    Tell me everything. I am your friend.

    ChatGPT is way too nosy about personal data to be up to any good. Seems like it's already becoming just another ordinary mass surveillance and exploitation tool for the master class.

    And, a pretty dishonest, biased and mentally unstable one at that. Like having a crazy girlfriend that's just a little too cozy.

    What does she really want?

  14. The Central Scrutinizer Silver badge

    My personal recommendation would be for chat gpt to fuck right off.

    1. Rivalroger

      Malcolm Tucker time

      Fuck the fuck off. When you have fucked off, fuck off some more and then stay the fuck fucked off!

      1. TimMaher Silver badge
        Coat

        Re: Malcolm Tucker time

        Rearrange these two words into a well known phrase or saying:- “Off Fuck”.
