Aren't all people a little schizophrenic?
Especially those who have to telepathically contact their invisible friend a couple of times a day?
Scientists have had a crack at using simple machine-learning software to make psychiatry a little more objective. Why, you ask? Well, rather than rely on a professional opinion from a human expert, as we have for years, why not ask a computer for a cold, logical diagnosis? One benefit of using code as opposed to a …
I have come to believe that "perfectly normal" can be replaced by "acceptably crazy" without any prejudice whatsoever.
Sadly, the way it actually works is money.
Homeless, ranting on a street corner about the machines taking over?
Lock him up, he's a danger to society!
Rich, ranting on twitter about the machines taking over?
Sign him up for a lecture tour, he's humanity's saviour!
The elephant in the room is false positives, which they didn't address. If it were truly 75% accurate, it would diagnose 25% of non-schizophrenic people as schizophrenic. It is probably better than that, but even if it were 99% accurate (optimistic!) at not falsely diagnosing healthy people, it would still produce 10 false diagnoses out of 1000 scans of healthy people. And if the rate of schizophrenia in the general population is 1/1000 (probably less), then false positives would significantly outnumber true positives. This is the conundrum of Positive Predictive Value that makes medical screening dangerous.
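The back-of-envelope arithmetic above can be checked in a few lines of Python. Every number here (99% specificity, 75% sensitivity, 1-in-1000 prevalence) is an illustrative assumption from this comment, not a figure from the paper:

```python
# Positive Predictive Value of a screening test, via Bayes' rule.
# All numbers here are illustrative assumptions, not study figures.

def ppv(prevalence, sensitivity, specificity):
    """P(actually ill | positive test) when screening a population."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Even the optimistic case: 99% specificity, 1/1000 prevalence.
p = ppv(prevalence=0.001, sensitivity=0.75, specificity=0.99)
print(f"PPV = {p:.1%}")   # about 7%: most positive results are false
```

Even with an optimistically specific test, fewer than one in ten positives would be genuine, which is exactly the screening conundrum described above.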
The failure to mention false positives and the very small training set were also the things that leapt out at me. Given the relatively low incidence of schizophrenia in the general population, as it stands this is worse than useless. If the false positive rate is anywhere near 25%, it would diagnose schizophrenia incorrectly far more often than correctly. Realistically, a test that achieves 74% accuracy in a patient group in which roughly half have the condition is very poor indeed. It is more interesting for what it says about the physical aspects/associations/causes of the disease than it is useful in any practical way.
The other question not addressed is how humans perform at this task, that is, interpreting the results of this particular imaging protocol, perhaps with some specific processing and visualisation. Do they do better or worse, or can they not do it at all?
"false positives would significantly outnumber true positives. The conundrum of Positive Predictive Value that makes medical screening dangerous."
That is a big problem in a population screening program but less so if the diagnostic test is applied to people who have presented with problematic symptoms and a doctor is looking for decision support in deciding on a course of action. Of course the prediction algorithm will simply have encoded the subjective diagnoses used to classify the training subject cohort but there may be some room for improving some diagnoses.
This type of screening shouldn't be used as a diagnosis, but it could be used as a filter to identify people who should get some additional screening.
Well, in theory at least... I doubt giving everyone a brain scan in college (when symptoms of schizophrenia tend to show up) will become part of a normal physical, but who knows? If it could be made more accurate and could be shown to predict schizophrenia before symptoms show up, it might help, as at-risk patients and their doctors could be more prepared and begin treatment at an earlier stage.
Certainly worth more of a look, as a larger training set might increase accuracy, and a longer term study could show whether it is predictive or can only diagnose after symptoms have presented.
1. I have nowhere seen an indication that the errors were all false positives. They might just as well have been all false negatives, although a split into false positives and negatives is the more likely variant. Assuming a 50:50 split between them, that would put the false positive rate at around 12 per cent, and it would "miss" the other 12 per cent.
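The 12 per cent figure follows from simple arithmetic. Everything in this sketch (the balanced cohort, the 50:50 error split) is an assumption, since only overall accuracy was reported:

```python
# What an even split of errors would imply, assuming a balanced cohort.
# The paper reports only overall accuracy; the 50:50 split is a guess.

accuracy = 0.75               # roughly the reported overall accuracy
patients = controls = 0.5     # fractions of the whole study group

errors = 1 - accuracy                 # 25% of subjects misclassified
false_pos = false_neg = errors / 2    # assumed 50:50 split

print(f"false positives: {false_pos:.1%} of all subjects")       # 12.5%
specificity = 1 - false_pos / controls
sensitivity = 1 - false_neg / patients
print(f"implied specificity = sensitivity = {specificity:.0%}")  # 75%
```

Under these assumptions, both sensitivity and specificity simply equal the overall accuracy, which is why the per-class error rates end up around 12 per cent of the whole cohort each.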
2. You're sort of assuming that this kind of brain scan is going to be mandatory for everyone. I doubt that anyone advocates this kind of method becoming even a routine screening in hospitals. It is just another diagnostic tool to be used by medical professionals.
I can't help but be reminded of an early machine learning project that the US military had. They showed the system photos of NATO hardware and Soviet hardware (which shows you how old this story is). In the end the system was very reliable, until someone realised what had happened: the system had actually 'learned' to tell the difference between the good-quality photos of NATO hardware and the poorer-quality shots of Soviet hardware. The contents of the photos were irrelevant.
Then there was the one that was trained to spot tanks hiding under cover.
It was shown pictures of the same areas, with and without tanks, and appeared to be very successful until they tried it in the field.
It was worse than useless.
It turned out that the day they took the pictures without tanks was sunny, and the day with tanks, cloudy.
It was spotting cloudy skies instead of tanks. ☺
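That failure mode is easy to reproduce in miniature. The sketch below uses entirely made-up data: tiny synthetic 'photos' whose overall brightness encodes the weather, with the tank itself contributing almost nothing. A brightness-threshold 'classifier' aces the confounded training set and then collapses in the field:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, tank, sunny):
    """8x8 'photos': brightness set by weather, a faint blob if a tank."""
    base = 0.8 if sunny else 0.3
    imgs = rng.normal(base, 0.05, size=(n, 8, 8))
    if tank:
        imgs[:, 3:5, 3:5] += 0.1     # the actual signal is tiny
    return imgs

# Training data: all tank photos taken on cloudy days, all
# no-tank photos on sunny days -- the confound from the story.
train_x = np.concatenate([make_images(50, tank=True,  sunny=False),
                          make_images(50, tank=False, sunny=True)])
train_y = np.array([1] * 50 + [0] * 50)

# 'Learn' a threshold on mean brightness (what the net latched onto).
threshold = train_x.mean(axis=(1, 2)).mean()
predict = lambda x: (x.mean(axis=(1, 2)) < threshold).astype(int)

print("train accuracy:", (predict(train_x) == train_y).mean())  # 1.0

# Field test: the weather no longer correlates with the tanks.
test_x = np.concatenate([make_images(50, tank=True,  sunny=True),
                         make_images(50, tank=False, sunny=False)])
test_y = np.array([1] * 50 + [0] * 50)
print("field accuracy:", (predict(test_x) == test_y).mean())    # ~0.0
```

The 'model' never looks at the tank at all, which is precisely what the anecdote describes.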
fMRI is best described as "how do you want the result?" It is a case where a small dataset is not sufficient.
The 2012 Ig Nobel Prize for neuroscience proves this rather drastically.
Anything based on fMRI with a test base lower than a few million and a significance below five sigma is tarot reading with an expensive toy. fMRI is used by medical people, but the necessary evaluations have to be done on the raw data by professionals in mathematics, statistics and computer science, not just by someone who can use SPSS.
Not quite as dark as other methods, though. Twenty years ago I briefly worked on an attempt to use evoked potentials (brain waves) to do the same thing. I never found out whether the team as a whole finally 'succeeded' in their own terms, but the principle was fundamentally flawed due to individual variation masking the common factor of interest. However, that didn't deter them from the apparent ultimate objective of a diagnostic helmet with two lights on it: green for sane and red for mad.
This was my thought. How certain are we that the 46 people all do actually have schizophrenia?
It's not clear from scanning the paper whether the error was mostly false positives or false negatives. If it is predominantly false negatives, that could just mean the computer is 100% accurate at detecting schizophrenia and the remaining patients have a different brain condition that presents the same set of symptoms, and have therefore been misdiagnosed.
This is a moot point: as the original diagnoses are purely subjective, it is always possible, perhaps even probable, that any individual diagnosis was incorrect. This makes the AI results, as a comparative measure, equally subjective and prone to "inaccuracy", i.e. not reaching the same conclusion as a psychologist.
In any such "interpretive" field, do we accept an AI's computational results, where the results are by definition subjective, as equal to the "correct", human-derived conclusion?
27 000 is 30^3. Not exactly down to the neuron level, is it?
This is indeed "machine learning" in a very limited, highly mathematical way.
"Artificially Intelligent?" I don't think so.
Apart from false positives (or negatives), we also have the question of whether the same results are part of other mental illnesses or disorders (illnesses are treatable, disorders have to be managed), of which there are a lot.
So it's a start, but there's a long way to go.
BTW, all joking aside, in most cases of schizophrenia the "split" is between the patient's idea of reality and actual reality, often with the symptom of hearing voices.
The question is: even if they managed to cure schizophrenia, how long would it take to get rid of the horribly popular misconception that schizophrenia is... Multiple Personality Disorder?
(When people say that being in two minds about something is 'schizophrenic'.)
When one person suffers from a delusion, we call it a mental illness.
When millions of people suffer from the same delusion, we call it a religion.
When one person hears voices in his head telling him what to do, we call it schizophrenia.
When millions of people hear the voice of JHVH/Allah/Jebus in their head telling them what to do, I still call it a mental illness.
YMMV, depending upon what the voices in your head tell you to think.
"One benefit of using code as opposed to a psychiatrist is that its results should be consistent across all patients"
One little problem. Patients are not consistent across all patients. A good friend of mine is a forensic shrink. He deals with serial killers and similar. The only consistent thing there is that they are very good actors.
The human brain follows chaos theory, same as the weather. One can predict the weather with 75% accuracy by saying it will be the same as today.
When my wife complains that I never hear what she says, I ask how I am supposed to differentiate her voice from all the others in my head. Get my attention first, then speak.
The most fascinating aspect of the disease for me is that if you live in a third world country without drug treatment - you may get better.