Aw, how sweet: Google Brain claims to clock diabetic eye diseases just like a proper doc

Machines can detect diabetic retinopathy – a leading cause of blindness from diabetes – from retinal scans to the same degree of accuracy as ophthalmologists, according to a paper published by Google Brain. Recent advances in machine learning and computer vision have boosted the potential for machines to aid in medical …

  1. bazza Silver badge


    The problem as always with something like this is clinical responsibility. If you make something that can tell whether or not you have some condition or other, it is saying either "yes, you have it", or "no you don't".

    The problem faced is that if your box is even slightly subjective (such as a neural network output), you cannot really afford to have it say anything substantive. So the answers it has to give have to be toned down to "maybe, go see a doctor", or "maybe not, go see a doctor". In which case, what's the bloody point of not just going to see a real, qualified human doctor in the first place?

    Not toning the answer down and giving a straight yes/no answer means you're accepting clinical responsibility for the accuracy of the answer, and the resultant liability for those occasions where your box's answer turns out to have been wrong. A false positive upsets patients, and may lead to damaging and inappropriate medical intervention. A false negative may kill them. If you've made claims of complete reliability, you take the blame for that.

    So whilst their system might have a strong performance from a statistical point of view, it doesn't amount to anything practically useful at all unless Google are actually willing to accept the liability for the system's performance. I can't see them doing that.

    The same's true for a human doctor, but they can get insurance cover.

    1. Pen-y-gors

      Re: Welll...

      Fair point, and there have been instances where screening programs have got it wrong.

      But rather than a yes/no or maybe/maybe-not answer, if the result is expressed as a probability, then results close-ish to the cutoff point, and those which are 'yes', can be referred to a human for interpretation and confirmation. That would at least cut out the human work on the "absolutely not a hint of it" tests. And if you're paranoid, do spot checks on the machine ones as well...
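      The triage scheme described above could be sketched in a few lines. The cutoffs here are purely illustrative assumptions, not operating points from the paper:

      ```python
      # A minimal sketch of probability-based triage: hypothetical cutoffs,
      # not the paper's actual thresholds.
      def triage(p_disease, clear_below=0.05, refer_above=0.5):
          """Route a scan based on the model's probability output."""
          if p_disease < clear_below:
              return "auto-clear"           # "absolutely not a hint of it"
          elif p_disease >= refer_above:
              return "refer to specialist"  # likely positive, human confirms
          else:
              return "human review"         # close-ish to the cutoff

      print(triage(0.01))  # auto-clear
      print(triage(0.30))  # human review
      print(triage(0.90))  # refer to specialist
      ```

      Only the middle and upper bands go to a human, which is where the claimed time saving would come from.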

      And to be honest, I suspect that the scans are actually assessed by some specialist technician in an office, rather than a 'proper doc'.

    2. Fruit and Nutcase Silver badge

      Re: Welll...

      As someone who's had these tests done for a few years, I'd say the initial image acquisition leaves a lot to be desired. Sometimes the images have been far from sharp, and, whilst I am not a trained specialist, I doubt early stages will be picked up. The only time I've been satisfied by the images was the last time - and what a difference it was to what I had been accustomed to. No surprise there. The amount of adjustments and repeats she did when taking the photographs was much more than at any of the prior occasions with different opticians and also at the regular NHS outsourced service. Part of the issue is not blinking when the flash fires, and the patience of the person doing the procedure. I shall be going back there...

      Hopefully the AI system can also be trained to detect substandard images, and that will also feed back into the system for (re)training the image acquisition technicians - and that is what they will be, not an optometrist or other ophthalmic professional.

    3. Anonymous Coward
      Anonymous Coward

      Re: Welll...

      If doctors can get insurance against bad diagnoses, why can't machines? The main problem is false negatives - they can probably drive the false-negative rate arbitrarily low if they are willing to accept a higher number of false positives. No problem, as I'm sure all positives would go to a doctor for review before the patient was told anyway, and having to review only a fraction of cases instead of all of them would still save the doctor's time.
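      That trade-off is just a matter of where you put the decision threshold. A toy sketch, with made-up scores and labels purely for illustration:

      ```python
      # Toy demonstration: lowering the threshold raises sensitivity
      # (fewer false negatives) at the cost of specificity (more false
      # positives). Scores and labels are invented for illustration.
      scores = [0.05, 0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.85, 0.9, 0.95]
      labels = [0,    0,   0,   0,    1,   0,   1,   1,    1,   1   ]

      def sens_spec(threshold):
          tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
          fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
          tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
          fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
          return tp / (tp + fn), tn / (tn + fp)

      for t in (0.3, 0.5, 0.8):
          sens, spec = sens_spec(t)
          print(f"threshold {t}: sensitivity {sens:.0%}, specificity {spec:.0%}")
      # threshold 0.3: sensitivity 100%, specificity 60%
      # threshold 0.5: sensitivity 80%, specificity 80%
      # threshold 0.8: sensitivity 60%, specificity 100%
      ```

      Setting the threshold low enough to catch nearly everyone with the condition inevitably sweeps in more healthy people for review - which is exactly the "doctor reviews the positives" arrangement suggested above.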

      Doing it this way would give time for the software to prove itself and be further improved, so that it could eventually do the diagnoses itself (i.e. maybe someday there will be "an app for that").

  2. Pen-y-gors

    Tasteless (tasty?) headline

    Kudos for the 'Aw, sweet' headline - up to El Reg's usual mildly tasteless standards!

    1. Anonymous Coward
      Anonymous Coward

      Re: Tasteless (tasty?) headline

      You think they have a diabetical sense of humour?

  3. Anonymous Coward
    Anonymous Coward

    It would be interesting to see how the AI would handle my case.

    My first diabetes retinal scan gave me an all-clear for retinopathy - but they detected the onset of a "branch retinal vein occlusion". That soon haemorrhaged and had to have laser treatment.

    My occlusion is apparently caused by hardening of the veins. Where they cross over each other, one gets its blood supply cut off by the contact pressure.

    Hardening of the veins has many possible causes over your life: age; diabetes; (passive) smoking.

    My doctor and one consultant said it was unrelated to my diabetes - another consultant said it definitely was a side-effect of the diabetes.

    Subsequent annual scans remark on the laser scarring - but still give a clean bill for retinopathy.

  4. graeme leggett Silver badge


    At 97% sensitivity - it will miss 3 cases in 100, giving those people false negatives.

    At 94% specificity - it will flag up 6 cases in 100 as having the condition when they haven't: false positives.

    Which is probably why, before those figures, the article quotes: "the algorithm had 90.3% and 87.0% sensitivity and 98.1% and 98.5% specificity for detecting referable diabetic retinopathy". A tuning better suited to ruling out those without the condition, with those possibly with it being seen for confirmation.

    Though it would have been good for the article to give the positive and negative predictive values (PPV and NPV), which would have been a better measure of the method's efficacy in practice.
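    For what it's worth, PPV and NPV can be derived from sensitivity, specificity and prevalence. The prevalence figure below is an assumption for illustration only - it is not from the paper:

    ```python
    # PPV = P(disease | positive test), NPV = P(no disease | negative test),
    # derived from sensitivity, specificity and prevalence via Bayes.
    def ppv_npv(sens, spec, prev):
        tp = sens * prev                 # true positives (per unit population)
        fp = (1 - spec) * (1 - prev)     # false positives
        fn = (1 - sens) * prev           # false negatives
        tn = spec * (1 - prev)           # true negatives
        return tp / (tp + fp), tn / (tn + fn)

    # Using the quoted 90.3% sensitivity / 98.1% specificity and an
    # assumed 30% prevalence of referable retinopathy among those screened:
    ppv, npv = ppv_npv(0.903, 0.981, 0.30)
    print(f"PPV {ppv:.1%}, NPV {npv:.1%}")  # PPV 95.3%, NPV 95.9%
    ```

    Note how strongly both values depend on prevalence: screen a population where the condition is rarer and the PPV drops sharply even with the same sensitivity and specificity.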

  5. Diogenes


    Given that the scans were analysed by humans 3-7 times, what was the divergence among the humans?
