
What about...?
Diagnosis of cancer is ultimately left up to the medics. Paige Prostate only helps them better visualize potentially cancerous cells.
Well, that is all very good and very nice, especially since "increased efficiency" isn't phasing out the final check of flagged positive samples by a human. But...
What about the false negatives? Might be nice to hear something about that, right? I look at this from the clinical side, but I assume you techies over here might have something to say about training this on 30,000 images, which looks... pretty limited for a super-duper automated system. And what kind of images? Like with "facial recognition", which also seems to show a "particular bias"?
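To make the false-negative worry concrete: a sketch with entirely hypothetical numbers (nothing here comes from Paige's actual validation data), just to show how a headline sensitivity figure translates into missed cancers when samples pass through screening.

```python
# Hypothetical screening numbers, purely illustrative --
# not Paige Prostate's reported performance.
total_slides = 1000        # slides screened
true_positives_present = 50  # slides that actually contain cancer
flagged = 46               # of those, how many the model flags

false_negatives = true_positives_present - flagged

sensitivity = flagged / true_positives_present
fn_rate = false_negatives / true_positives_present

print(f"sensitivity: {sensitivity:.2f}")        # 0.92
print(f"false negative rate: {fn_rate:.2f}")    # 0.08
print(f"cancers missed per {total_slides} slides: {false_negatives}")  # 4
```

The point being: a human only re-checks the *flagged* positives, so those 4 missed slides never reach the second pair of eyes. That is exactly why a sensitivity number without a false-negative discussion tells you very little.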
I mean, image recognition in medicine isn't that new (± 20 years), and neither is image classification. So it all comes down to how "smart" the AI is? Or how wise people are to put their faith in it?