Google helps develop AI-driven lab machine to diagnose Parkinson's

A robotic system armed with AI-powered cameras can grow and image skin cells from test tubes to diagnose Parkinson's disease with minimal human help, according to researchers from Google and the New York Stem Cell Foundation. Parkinson's disease is estimated to affect 2 to 3 percent of the population over the age of 65. Nerve …

  1. RyokuMas
    Big Brother

    Would you trust it?

    "Engineers at Google Research pushed the system one step further by installing AI software trained to diagnose Parkinson's disease from skin cells..."

    ... and send the subject's genome back to Google for profiling - without their knowledge or consent of course.

  2. l8gravely

    What are the false positive and false negative rates?

    Fascinating stuff, but 79% is just too damn precise a number. And what is the false positive rate of the diagnosis? And the false negative?

    I love leveraging computers for doing the boring repetitive stuff, but we also need to understand the limitations.
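The distinction the commenter is asking about can be made concrete. A sketch with made-up counts (the study did not publish a confusion matrix; the numbers below are invented, though they total 91 samples and round to 79% accuracy) — the point being that a single accuracy figure is consistent with many different false positive/false negative splits:

```python
# Hypothetical confusion-matrix counts - invented for illustration only.
def rates(tp, fn, fp, tn):
    return {
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
        "false_positive_rate": fp / (fp + tn),  # healthy flagged as PD
        "false_negative_rate": fn / (fn + tp),  # PD cases missed
    }

r = rates(tp=30, fn=8, fp=11, tn=42)  # 91 samples, accuracy ~79%
print(r)
```

Here accuracy comes out at ~79% while the false positive and false negative rates (~21% each) could just as easily have been, say, 5% and 40% under a different split — which is exactly why quoting accuracy alone tells you so little.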

    1. badflorist

      Re: What are the false positive and false negative rates?

If you take the numbers at face value, the trained model gets it wrong about 21% of the time.

    2. ya fishy user name

      Understanding the limitations

      Perhaps you need to start with the limitations of the current "system". I know someone with Parkinson's who was told by two doctors (not PD specialists, though one was a neurologist) that they did NOT have PD. (Try getting an appointment with a PD specialist after that - neither the PD specialists nor the insurance companies make it easy.) It took more than six years to get a diagnosis and begin treatment, by which time they were very sick indeed.

      Don't like 79%? I think it beats the guesswork of the current process.

      Worried about false positive/negative? Go for a 3-test routine.
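The arithmetic behind a "3-test routine" is worth spelling out. A sketch assuming, optimistically, three independent tests that each err 21% of the time, combined by majority vote (in reality, repeated tests on the same cell line would be correlated, so the real gain would be smaller):

```python
# Per-test error rate, taken from the quoted 79% accuracy figure.
p_err = 0.21

# A majority-of-3 vote is wrong when at least 2 of the 3 tests are wrong:
# exactly two wrong (3 ways), or all three wrong.
p_majority_wrong = 3 * p_err**2 * (1 - p_err) + p_err**3
print(f"single test error: {p_err:.1%}, majority-of-3 error: {p_majority_wrong:.1%}")
```

Under that (generous) independence assumption, the error rate drops from 21% to roughly 11% - better, but nowhere near enough to rescue a low-prevalence screening test on its own.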

      1. Alan Hope

        Re: Understanding the limitations

        I presume the presentation was significantly atypical in some way. The clinical diagnosis and management of PD is long-established medicine and familiar even at student level.

      2. Manolo

        Re: Understanding the limitations

        PD is always a "diagnosis of probability".

        The only way to be sure someone had PD is a post-mortem brain analysis.

        We have plenty of criteria to rule PD out, but no test to unequivocally rule it in.

        So that will be a limitation of this system as well: it will be trained on clinically made diagnoses.

        And we do know that in nursing homes around a third of PD diagnoses later turn out to be incorrect.

  3. Korev Silver badge


    Other than Google doing it, what makes this so special? Many industry and academic labs all over the world do this kind of experiment every day...

  4. Anonymous Coward
    Anonymous Coward

    Hope this works better than their other medical projects, which get hyped up and then vanish. Always lots of noise to start with, then silently dropped - but only after feeding all the personal data into the Google machine.

  5. msobkow Silver badge

    Colour me sceptical after a decade of fanfare-laden medical AI announcements that were debunked or worse, the worst being the whole Watson effort. I hope they're on to something, but I expect that down the road we'll find the model was dangerously flawed yet again.

  6. Alan Hope

    The assessment of Parkinson's Disease based on the pill-rolling tremor, a festinant gait, muscle stiffness etc is perfectly adequate for the diagnosis and to guide the medical management of this awful and incurable disease.

    Clever, but changes little for sufferers.

    1. Manolo

      "perfectly adequate for the diagnosis"

      No, it is not.

      See my other post.

      There are more extrapyramidal movement disorders with similar symptoms, but of different etiology.

    2. CrazyOldCatMan Silver badge

      Parkinson's Disease based on the pill-rolling tremor, a festinant gait, muscle stiffness

      Sadly - no. My father had PD (eventually died from PD complications) and, at least initially, had none of those symptoms. He discovered he had it in follow-up tests after a TIA.

      He was an industrial pharmacist and spent the next 20 years putting his skills to use in PD management (especially the drug regime and timings) - he ended up doing lectures for PD specialist nurses so, hopefully, his testing (mostly on himself!) will continue to bear fruit.

      Another elderly gent that I knew also had PD - in his case it was very, very mild and only progressed very slowly. As others have said - the presentation can be extremely variable and the methods used to diagnose one can't necessarily be used to diagnose another.

  7. Anonymous Coward
    Anonymous Coward

    Seems even the researchers noticed that there might be a risk of overfitting using a deep model analysing such a large feature set with a teensy training set of 37 samples.

  8. Stuart Dole

    Interesting, hopeful...

    Well, like "fishy" said, diagnosis is hit-and-miss. My wife went to several specialists until we finally got a "movement disorder specialist" who was able to nail it. Those who say "no big deal" haven't been there, I'm guessing. Each case of PD is different - and as of yet there aren't any reliable biomarkers. Once we get a good biomarker - a lab test of some sort - then that can become a way to dig deeper into the mechanisms of the disease. It's really complicated...

    (Sherlock, because it's complicated.)

  9. andrewj

    The description here, which is straight out of the press release, does a poor job of representing the study. What they actually did was take skin-derived fibroblasts from Parkinson's patients with one of two very specific gene mutations, grow them up in a dish, and train a neural net to tell the difference from microscopy images. So the takeaway is that a) it's detecting the influence/presence of a mutation, not Parkinson's itself - and in skin cells, not even brain cells; b) while interesting, it will never be used clinically - 79% accuracy is not useful for screening, given the false positives. Also, with only 91 samples and internal cross-validation (no external validation data), the model is likely overfit.
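The screening objection above can be put in numbers. A sketch using the article's own prevalence figure (2-3% of over-65s) and assuming, for illustration, that both sensitivity and specificity equal the quoted 0.79:

```python
# Illustrative assumption: sensitivity = specificity = 0.79;
# prevalence of 2% taken from the article's 2-3% figure for over-65s.
sens, spec, prev = 0.79, 0.79, 0.02

# Positive predictive value via Bayes' rule: of those flagged positive,
# what fraction actually has the condition?
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
print(f"PPV at {prev:.0%} prevalence: {ppv:.1%}")
```

Under these assumptions the PPV comes out around 7% - roughly 13 out of every 14 positive results would be false alarms, which is the base-rate problem that sinks low-accuracy screening tests for rare conditions.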
