Machine learning the hard way: IBM Watson's fatal misdiagnosis

Anonymous Coward

Re: started in Jeopardy

You're absolutely right, but this is a subtle and contextual thing. For one thing, there are many areas where we actually can explain what an ML model is doing and why. So-called "explainable" ML (or XAI for short - XML was taken) is a hot topic of research, because in many areas it is a hard requirement that we be able to say why a decision was taken.

This is particularly relevant in areas that directly impact people's lives: say, why a loan was approved, why a fraud investigation was triggered, or why a clinical care decision was taken. Explainability is a hard requirement there, and that requirement is built into legal frameworks like GDPR.
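To make "explainable by construction" concrete, here's a toy sketch of the loan example. Everything here is invented for illustration (the feature names, weights, and thresholds are not from any real system): a simple logistic model whose per-feature contributions can be read off directly, which is exactly the property regulators care about.

```python
import math

# Hypothetical loan-scoring model: a logistic regression with hand-picked
# weights, purely for illustration. Linear models are explainable by
# construction: each feature's contribution to the score is just
# weight * value, so we can state exactly why a decision was taken.
WEIGHTS = {"income_k": 0.04, "debt_ratio": -3.0, "missed_payments": -0.8}
BIAS = -1.0

def score(applicant):
    """Return approval probability plus per-feature contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-logit))
    return prob, contributions

prob, why = score({"income_k": 60, "debt_ratio": 0.2, "missed_payments": 1})

# The "explanation" is just the contributions ranked by magnitude:
# which features pushed the decision up or down, and by how much.
explanation = sorted(why.items(), key=lambda kv: -abs(kv[1]))
```

The hard research problem XAI tackles is getting this kind of readable attribution out of models that *aren't* linear, where the contributions can't simply be read off the weights.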

But in other areas it matters a lot less. Take drug discovery again. Do we really care how a model developed its understanding of which drugs *might* work? We already have robust tests for establishing which drugs *do* work, and we have a long and illustrious history of using drugs without a full understanding of *how* they work. So does it ultimately matter, when the model can apply itself hundreds of thousands of times faster than any human?

The answer is still yes, but it's a very different kind of explainability that's needed: one concerned more with informing subsequent manufacturing pathways and outcome prioritisation than with justifying a decision.

But hey, shouldn't be that hard to explain, right? After all, it's just a bunch of pattern matching and brute force - or so El Reg's enlightened commentards have so assuredly told us!
