Role of humans in a semi-automated detection system
If you build a semi-automated cancer detection system, you need to pay careful attention to the job left to the humans; otherwise you can render them useless and possibly kill more people than if you had not used the system at all.
This seems counter-intuitive - how is it possible? Apologies for the long explanation:
Imagine a haystack that contains 1) some shiny needles, 2) some not-so-shiny needles, 3) some needles that could be mistaken for straw, 4) quite a lot of straw that looks like needles, and 5) a massive load of straw that is obviously straw.
Any needle that is not found in time will go rusty.
Up till now you have had humans looking for the needles - a mind-numbing job, but one that requires highly qualified people. They've been able to do it, kind of, so far, but it would be great if we could improve the situation.
Now you get a machine to find and remove the shiniest needles, and the not so shiny needles.
You are left with needles that look like straw, straw that looks like needles, and a load of stuff that is obviously straw.
This leaves the humans with the job of sorting through a load of straw to find a few needles that look like straw.
Those needles will probably not be found, and will go rusty. And the highly qualified humans whose mind-numbing job has just got far worse will go mad or leave.
Better for the automated system to remove what is obviously straw and leave the interesting cases for the highly qualified humans.
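The trade-off can be sketched in a toy simulation. Everything here is an invented assumption for illustration - the case counts, and especially the "vigilance" model where human sensitivity drops as the queue fills with straw - not clinical data:

```python
import random

random.seed(0)

# Toy haystack: (label, is_needle). All counts are invented for illustration.
CASES = (
    [("shiny_needle", True)] * 50          # easy positives
    + [("dull_needle", True)] * 30         # moderate positives
    + [("needle_like_straw", True)] * 20   # hard positives
    + [("straw_like_needle", False)] * 200 # hard negatives
    + [("obvious_straw", False)] * 9700    # easy negatives
)

def human_review(cases, base_sensitivity=0.9):
    """Assumed vigilance model: humans miss more positives when the
    queue is almost all straw (low prevalence dulls attention)."""
    if not cases:
        return 0
    prevalence = sum(is_needle for _, is_needle in cases) / len(cases)
    sensitivity = base_sensitivity * min(1.0, 10 * prevalence)
    return sum(1 for _, is_needle in cases
               if is_needle and random.random() < sensitivity)

def machine_takes_needles(cases):
    """Machine removes the easy finds; humans sift the demoralising rest."""
    easy = {"shiny_needle", "dull_needle"}
    auto_found = sum(1 for label, _ in cases if label in easy)
    return auto_found + human_review([c for c in cases if c[0] not in easy])

def machine_removes_straw(cases):
    """Machine discards the obvious straw; humans get an enriched queue."""
    return human_review([c for c in cases if c[0] != "obvious_straw"])

total = sum(1 for _, is_needle in CASES if is_needle)
found_a = machine_takes_needles(CASES)
found_b = machine_removes_straw(CASES)
print(f"needles: {total}, machine-takes-needles: {found_a}, "
      f"machine-removes-straw: {found_b}")
```

Under these assumptions the "remove the straw" strategy finds more needles overall, even though the "take the needles" strategy gets the automated part right, because the hard positives drown in a near-zero-prevalence queue in the latter case.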
The proof is in the measuring: does the rate of cancer deaths go down when system X is used?
We would normally measure this for a fully automated system anyway, but my point is that it's even more necessary to measure it for a semi-automated system, given the temptation to assume that adding flashy computers is bound to be better than using humans alone (and also given the high staff turnover such a semi-automated system can cause).
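As a sketch of what "measure it" means in practice: comparing death rates between a screened cohort with system X and one without is a two-proportion comparison. The code below is a crude pooled z-test with entirely hypothetical numbers; a real trial would pre-register a proper statistical analysis:

```python
from math import sqrt, erfc

def two_proportion_pvalue(deaths_a, n_a, deaths_b, n_b):
    """Two-sided pooled z-test for a difference in death rates.
    A back-of-the-envelope sketch, not a substitute for a real trial design."""
    p_a, p_b = deaths_a / n_a, deaths_b / n_b
    pooled = (deaths_a + deaths_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal tail

# Hypothetical numbers: 120 deaths in 10,000 screened without system X
# versus 90 deaths in 10,000 screened with it.
p = two_proportion_pvalue(120, 10_000, 90, 10_000)
print(f"p-value: {p:.3f}")
```

The point is not the arithmetic but the discipline: if nobody computes something like this, "the computers must be helping" remains an assumption, not a result.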
</rant>