F@ck you, cancer
That is all. Go Watson!
IBM has found yet another use for its Jeopardy-winning Watson supercomputer, launching a new system called Watson Discovery Advisor to analyse scientific and medical research. The new cloud-based system can figure out scientific language well enough to know how chemical compounds interact and can also understand the nuances of …
Erm, no.
Most research papers are closely guarded by academic publishers, who are pretty much the most disagreeable sort of parasites, apart from politicians.
Give them an arm, a leg and the defence budget of a small dictatorship, and you can read to your heart's content.
What gets me is that at a public university, taxpayers pay the authors' salaries for the papers produced. Then if John Q. wants one of these papers he paid for, he must pay again. And the cost is not for printing, etc.
Papers produced at taxpayer expense belong in the public domain...
> What gets me is that at a public university, taxpayers pay the authors' salaries for the papers produced. Then if John Q. wants one of these papers he paid for, he must pay again. And the cost is not for printing, etc.
I also helped pay for dozens of tanks, missiles, etc., but does the government let me play with any of those toys? Does it hell!
"A highly influential paper by Dr John Ioannidis at Stanford University called "Why most published research findings are false" argues that fewer than half of scientific papers can be believed, and that the hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. He even showed that of the 49 most highly cited medical papers, only 34 had been retested and of them 41 per cent had been convincingly shown to be wrong. And yet they were still being cited."
(BBC Radio 4, Listen Again)
Then there's Retraction Watch
See their FAQ: "So why write a blog on retractions?"
> Until Watson learns to filter out fabricated data, badly designed experiments, or incorrectly computed statistics
Actually, I think it might well be capable of this naturally. Given the depth of information it will have, it should be able to judge when a conclusion is an outlier when viewed in the grand scheme of things, or when there's a logical fallacy. Certainly, it should be far better at it than a human.
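Something like this toy consistency check is roughly what I have in mind: pool the effect sizes that different papers report for the same claim and flag anything sitting miles away from the consensus. The numbers and threshold below are made up, and it's obviously not how Watson actually works, just the shape of the idea:

    from statistics import median

    def flag_outliers(effect_sizes, k=3.0):
        """Return indices of reported effects sitting far from the field's median."""
        med = median(effect_sizes)
        mad = median(abs(x - med) for x in effect_sizes) or 1e-9  # guard against zero spread
        return [i for i, x in enumerate(effect_sizes)
                if abs(x - med) / mad > k]

    reported = [0.31, 0.28, 0.35, 0.30, 1.9, 0.27]  # made-up effect sizes for one claim
    print(flag_outliers(reported))                   # -> [4], the 1.9 outlier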
And, judging by some I've seen, most are of the 'DR' Gillian McKeith variety.
There seem to be a hell of a lot of people on the 'Alternative' circuit who release 'papers' and do a lot of lectures (PR jobbies) to the already converted.
They are published in journals that have a coverage that must reach, oooh, a dozen. Journals that are run by people who also host 'conferences' that are also PR jobbies.
The difference between these and the 'Doris'* circuit is waffer thin.
*Doris - collective name for mediums.
Allowing for the valid comments above about the quality of research papers:
From my (limited) understanding, Watson works by ingesting vast amounts of raw data and then drawing inferences (joining the dots?) between items, thereby giving pointers to new avenues of research.
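To illustrate the "joining the dots" bit (loosely what's called Swanson-style literature-based discovery): if term A appears alongside B in some papers, and B alongside C in others, but A and C are never mentioned together, then A-C is a candidate lead worth a human look. The toy "abstracts" below are invented, and this is a sketch of the idea rather than IBM's actual algorithm:

    from collections import defaultdict
    from itertools import combinations

    # Each 'abstract' reduced to the set of terms it mentions.
    # (Toy data echoing Swanson's classic fish-oil/Raynaud's example.)
    abstracts = [
        {"fish oil", "blood viscosity"},
        {"blood viscosity", "Raynaud's syndrome"},
        {"fish oil", "platelet aggregation"},
    ]

    seen_together = set()
    neighbours = defaultdict(set)
    for terms in abstracts:
        for a, b in combinations(sorted(terms), 2):
            seen_together.add((a, b))
            neighbours[a].add(b)
            neighbours[b].add(a)

    # Propose A-C pairs that share a bridging term B but never co-occur directly.
    for bridge, linked in sorted(neighbours.items()):
        for a, c in combinations(sorted(linked), 2):
            if (a, c) not in seen_together:
                print(f"candidate link: {a} <-> {c} (via {bridge})")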
As pointed out in the main article, given the sheer volume of papers, even if only 25% were solid-gold research, could one individual take on board that volume of information and work on it in any meaningful way?
Overall +1 for Watson.
What Watson does is a fundamentally different type of "thinking" compared to what a human does; as a result, we can reasonably expect it to miss a lot of things that a human would catch, but also to catch at least some things that a human would miss. That's where the value is.
The machine will sometimes make mistakes that are obvious to a human (like ignoring real-world knowledge that humans deem too self-evident to put in the paper), but humans sometimes make mistakes that are obvious to a machine (like overestimating the correctness of a paper that shows a result you desire). A human using a machine has a good chance of spotting both types.
Exactly. The story here is not "Watson is doing a better job (than a human would do) of analyzing this corpus". It's "Watson is doing a different job". The ramifications, and value, of that are pretty self-evident to anyone who has experience with both scientific research and natural language processing.
The machine processes the corpus and finds hidden correlations. Human judges then examine that result set to determine which ones are worth further investigation. The machine isn't replacing human researchers, and this form of research doesn't replace existing forms. It's supplemental.
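A rough sketch of that division of labour, under obviously simplified assumptions (hypothesis names, scores and the review step are all invented; this is the workflow in miniature, not IBM's actual pipeline):

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        hypothesis: str
        machine_score: float  # how strongly the corpus appears to support the link

    def triage(candidates, human_review, top_n=10):
        """Rank the machine's guesses, show the best to a human, keep what they accept."""
        ranked = sorted(candidates, key=lambda c: c.machine_score, reverse=True)
        return [c for c in ranked[:top_n] if human_review(c)]

    queue = [Candidate("kinase X phosphorylates p53", 0.92),
             Candidate("compound Y crosses the blood-brain barrier", 0.40)]
    # The lambda stands in for a human judge clicking accept/reject.
    approved = triage(queue, human_review=lambda c: c.machine_score > 0.5)
    print([c.hypothesis for c in approved])  # -> ['kinase X phosphorylates p53']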
So expect to see this sort of research leading to a number of new and useful treatments. It's particularly likely to be useful for rarer forms that have relatively long morbidities, because those are under-studied (since pharmaceutical money follows the big case-loads) and there's more opportunity for detection and intervention.
That said, "cure cancer" unfortunately is a fantasy. It's not even a particularly meaningful phrase. "Cancer" is a catch-all for a vast array of conditions where for any of a large number of complicated reasons cytogenesis outstrips apoptosis. Cancer is a symptom of a body out of equilibrium; it's a fall from a thermodynamic state of grace. When it happens we try to push and prod things back into position, sometimes with great success; but short of magical technology (e.g. nanobots that continuously monitor and intelligently intervene in cellular processes) it won't be "cured" in any general sense.
But hey, like most everyone else I'm happy to see any progress on this front.
So if Watson plus human gives the optimal result, all is good, yeah?
Not when you consider GIGO: garbage in, garbage out. It's a common computing phrase which says that, regardless of the correctness of the program's logic, no answer can be valid if the input is erroneous.
Of the most cited papers (49% of the total, almost half), only one third are checked for correctness, and 41% of those checked are actually incorrect. The figures will only be worse for the remaining, less-cited 51% of papers: the check rate will probably decrease and the error rate will probably increase. Reverse those rates and you get 59% correct and two thirds not checked.
If close to half of your data is incorrect and two thirds of it is unverified, then neither human nor computer is going to produce good data. Until the check rate increases dramatically, and verifiably wrong papers are either completely excluded or retested until correct, this process is likely to lead to almost 50% error in the output data. However, we will get that 50%-erroneous conclusion that much quicker.
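For what it's worth, here is the back-of-the-envelope arithmetic behind those figures, treating the rates as rough fractions of the whole corpus (a simplification for illustration, not the original study's counts):

    # Rates as read above, treated as rough fractions of all papers.
    checked = 1 / 3            # fraction of papers assumed to be retested
    wrong_if_checked = 0.41    # fraction of retested papers shown to be wrong

    unchecked = 1 - checked                  # ~67% -> "two thirds not checked"
    right_if_checked = 1 - wrong_if_checked  # 0.59 -> "59% correct"

    # Share of the whole corpus that is both checked and correct:
    known_good = checked * right_if_checked
    print(f"unchecked: {unchecked:.0%}, correct among checked: {right_if_checked:.0%}")
    print(f"known-good share of the corpus: {known_good:.0%}")  # about 20%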
As long as the testing is being paid for and/or passed as legitimate by vested interests, their stooges, or brain-dead believers in magic, the results have to be taken with at least half a pinch of salt.