
Not enough data...
"... especially in domains such as medicine, where there is a large and increasing body of factual information."
In short, they did not illegally/inappropriately obtain more data?
"OK, the error rate is terrible, but it's Artificial Intelligence – so it can only improve!" Of course. AI is always "improving" – as much is implied by the cleverly anthropomorphic phrase, "machine learning". Learning systems don't get dumber. But what if they don't actually improve? The caveat accompanies almost any …
"... especially in domains such as medicine, where there is a large and increasing body of factual information."
In short, they did not illegallyinappropriately obtain more data?
Late favourite no. 2 son had a very rare metabolic disorder in that he had only 47% of an enzyme critical in the Krebs Cycle. Working through the standard differential diagnosis/tests process took 5 years to finally get a confirmed diagnosis. He was the first child in Sydney diagnosed, and the first seen in Brisbane with that condition when we moved there. After a year or so, other kids were diagnosed with what the boy had, and a lot faster in both Sydney and Brisbane. Why? Because Drs and nurses say, hmm, I have seen something like this before, and short-circuit the differential diagnosis. Try encoding that in an expert system!
Why? Because Drs and nurses say, hmm, I have seen something like this before, and short-circuit the differential diagnosis. Try encoding that in an expert system!
That would be easy to encode in an expert system - if the system designers were able to pin down what "something like this" actually meant. And that's the Achilles' Heel of expert systems: identification (and encoding) of the explosion of edge cases and hard-to-articulate intuitions that constitutes the deep knowledge of an experienced human expert. This is why knowledge-based systems hit a brick wall. We've known about this for a long time.
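In sketch form, that "seen something like this before" shortcut is just retrieval of the nearest stored case. A toy version (feature names, cases and diagnoses invented purely for illustration) shows where the difficulty actually lives - in choosing the features and the similarity score, not in the mechanics:

```python
# Toy sketch of "I have seen something like this before" as nearest-case retrieval.
# Feature names, past cases and diagnoses are invented purely for illustration.

past_cases = [
    ({"lethargy": 1, "acidosis": 1, "failure_to_thrive": 1}, "metabolic disorder X"),
    ({"lethargy": 1, "acidosis": 0, "failure_to_thrive": 0}, "condition Y"),
]

def similarity(a, b):
    """Count the features on which two presentations agree."""
    return sum(1 for key in a if a[key] == b.get(key))

def closest_diagnosis(presentation):
    """Return the diagnosis attached to the most similar stored case."""
    features, diagnosis = max(past_cases, key=lambda case: similarity(presentation, case[0]))
    return diagnosis

print(closest_diagnosis({"lethargy": 1, "acidosis": 1, "failure_to_thrive": 1}))
```

With a handful of cases and obvious features this is trivial; with real patients, deciding what counts as a feature and what counts as "close" is exactly the part nobody could pin down.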
If Drs feel threatened by the system, what's their motivation to use it? If they disagree, what will happen in court if something goes wrong? Will the people producing the system sue critics or gag people who buy their stuff? In the long run I can see this being a big help (as long as it's fast, accurate and easy to use), but we are a long way off on all three points.
In the 1970s management prevailed upon me to write an "imprinter"*** guide to diagnosing problems in a new O/S for which proven support staff were in short supply.
Before I started, I told management that I thought good dump crackers were a relatively rare mixture of ability and wide experience. An unfiltered tranche of new graduates from various disciplines was unlikely to cut it. Nor were development programmers who had been arbitrarily shifted to support roles.
Basically it was a set of IF-THEN rules and observations for navigating crash results. As I had a good track record in dump cracking on that O/S and others, it should have had some success. It transpired that most people lacked the background and ability to learn to use it to help structure their diagnoses. They expected the answer to drop out automatically if they just followed the rules slavishly. There was also a factor that they didn't want to change the way they currently worked.
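For flavour, the guide amounted to something like the sketch below - the abend codes, symptom names and advice here are invented stand-ins, not the original O/S-specific rules:

```python
# Toy flavour of an IF-THEN dump-diagnosis guide. The abend codes, symptom names
# and advice are invented stand-ins, not the original O/S-specific rules.

RULES = [
    (lambda s: s.get("abend_code") == "0C4",
     "Addressing exception: check the failing address against the load map."),
    (lambda s: s.get("in_wait_state") and not s.get("io_outstanding"),
     "Wait with no outstanding I/O: suspect a lost interrupt, inspect the queues."),
    (lambda s: s.get("loop_suspected"),
     "Possible loop: compare register contents across successive snapshots."),
]

def advise(symptoms):
    """Return every observation/next-step whose condition matches the dump symptoms."""
    return [advice for condition, advice in RULES if condition(symptoms)]

print(advise({"abend_code": "0C4", "loop_suspected": True}))
```

Each rule is an observation plus a suggested next step; it still required the reader to know what to do with the suggestion rather than expect the diagnosis to drop out.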
Some developers still wanted to use a raw core dump rather than start with a tool that attempted to structure the relationships in the data blocks. Another tool showed how layers of merged patches overlaid each other on the original source code; that also gathered dust. They also wanted to go home at 5pm rather than stay and follow a thread through the problem.
Dump cracking remained a black art rather than a science.
***"imprinter" because I had recently read the sci-fi story "Profession" by Isaac Asimov.
I looked at this technology when it was expected to run on an IBM XT. It had some functionality and could make limited predictions for what we were looking at, which was predicting equipment failures by analyzing used lubrication oils. We were surprised that the spruikers of the technology were using medicine as their showcase, as we thought it likely that medicos would not want to give up their expertise. Apparently the technology would be viable because they had some data suggesting that many people preferred giving yes/no responses to a computer over consulting their GP for embarrassing or serious conditions.
Actually, having been involved with "AI" and Expert Systems in the 1980s, I've thought for the last year that all these new systems are just "Expert Systems Mk.1" with "big data", often inappropriately obtained and with little third party validation.
I don't see the big improvement in "Speech Recognition."
Also why is today's spelling checkers and grammar checkers not noticeably better to programs I was using 30 years ago on DOS and CP/M. Frankly rubbish. Proof reading still needs expert humans.
>Also why is today's spelling checkers and grammar checkers not noticeably better to programs I was using 30 years ago on DOS and CP/M. Frankly rubbish. Proof reading still needs expert humans.
Heh. Muphry's Law strikes again...
https://en.wikipedia.org/wiki/Muphry%27s_law
.
(
* is -> are
* better to -> better than OR superior to
)
I don't see the big improvement in "Speech Recognition."
I see remarkable demonstrations of this in many locations, but when I phone my bank the stupid automated system is utterly useless at speech recognition - and I'm talking yes/no and phonetic letters (alpha, bravo, Charlie, ...), which ought to be piss-easy.
https://www.healthnewsreview.org/2017/02/md-anderson-cancer-centers-ibm-watson-project-fails-journalism-related/
$62M down the drain, no competitive bids, bypassing the IT department, not integrating with the Center's EMR system, and on and on. And the worst part - no improved outcomes.
Journalism played a role in all the hype too, so good on you Andrew for bringing some attention to this subject.
Many of those who tried to build the first airplanes failed because they made aircraft with flapping wings, like birds have. But it turns out that even though birds fly very well, imitating them is not the best way to build a machine that can fly. Same thing with the horseless carriage -- our mechanical horses don't have legs like real horses do. And submarines don't swim like fish do. Based on this history, I suspect that if we ever succeed in building a machine that thinks, it will not do so using the same techniques that humans use.
As an SRA working in a university CS department I got to go to the UK conference on VLSI and 5th generation technologies during the early 80s. (The dodgy memory doesn't recall exactly which year or what the true conference title was.) The stars of that conference were the lead researchers from the Japanese 5th Gen project which was heavily hyped at the time. I was far too lowly to be introduced to them, but got to eat and drink with some of the minions they'd brought along as an entourage. I asked them if they thought they'd actually succeed. The answer boiled down to "probably not, but we've got 5 years guaranteed funding and we only need to meet a few goals to get another 5 years". When it's fashionable, AI is great for funding.
This is the Big Problem with Big Data: it assumes that if you have enough data, you can make predictions. But there's no a priori reason to believe that's true... and it turns out, most of the time it isn't.
One crude example: one of the largest cloud providers hoped that looking at SMART stats from its huge number of disk drives would allow it to predict failure and replace ailing drives before they died. It turned out that none of the statistics was usefully correlated with failure. The only good predictor was that if two disks came from the same manufacturing batch, their failures would be strongly correlated.
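The check itself is easy enough to sketch. With invented attribute names and simulated noise telemetry, each per-attribute correlation with failure comes out near zero - which is roughly the result described above:

```python
# Toy version of the SMART-vs-failure check. Attribute names are invented and the
# telemetry is simulated noise, so every correlation with failure comes out near zero.
import random

random.seed(0)
ATTRIBUTES = ["reallocated_sectors", "seek_error_rate", "power_on_hours"]

# Simulated fleet: attribute readings are random noise, unrelated to failure.
drives = [
    {**{a: random.random() for a in ATTRIBUTES}, "failed": random.random() < 0.05}
    for _ in range(10_000)
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

failed = [1.0 if d["failed"] else 0.0 for d in drives]
for attr in ATTRIBUTES:
    values = [d[attr] for d in drives]
    print(f"{attr}: r = {pearson(values, failed):+.3f}")
```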
And most big data is, sadly, like that: the data is just a lot of white noise, not predictive of anything.