I really wish this surprised me, but it doesn't.
80 years of truly staggering amounts of soft power. Pissed away in just a few months. It would be hilarious if it wasn't so stupid and terrifying.
While people think the obvious winner here is Airbus, given their backlog Airbus probably haven't seen much of a benefit.
COMAC and their C919 however? I'd love to see what their pre-sales team is up to right now. They must be grinning ear to ear. Especially if COMAC can get certification from regulatory agencies like EASA and the FAA.
Yes, it's voluntary. But passengers will also have it explained that no one will see the number in a way that can be tied to a person, to the point that when weighing the person the number will not be displayed to anyone present.
Between that and the context that this is for passenger safety, I would expect that will allay most people's fears.
Good luck to them, they'll need it. Automated cancer detection from mammograms is viciously difficult, especially as you can't diagnose from an image. The images just help you identify people who will need a biopsy, which is the actual diagnostic method.
I worked for a company in the field and even with millions of mammograms available for research, detection of cancer was in the 'active research ongoing' bucket.
The two biggest things I would want to know about Google's model are:
* Does it use the FOR PROCESSING images or the FOR PRESENTATION ones?
* How does it perform on images from BAME ethnic groups?
The first question refers to how the mammogram and tomogram images are stored and transmitted. By default manufacturers apply their own 'secret sauce' to adjust the raw image to highlight certain structures for radiologists. Not only do these algorithms differ between manufacturers, they differ between models within a manufacturer. These would be the FOR PRESENTATION images. FOR PROCESSING images are generally larger and about as close as you can get to the raw image. They contain the energy detected by the detector with only the most basic processing applied, such as removing the halo effect and other x-ray weirdness.
If the Google models were trained on FOR PRESENTATION images then it will fail, as you would have to retrain the models for every single manufacturer and model out there.
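In DICOM terms this distinction is recorded in the Presentation Intent Type attribute (tag 0008,0068), so a training pipeline can at least filter on it before ingesting anything. A minimal sketch, assuming the attribute has already been read into plain dicts (real code would pull it from each file with a DICOM library such as pydicom; the record layout here is hypothetical):

```python
# Sketch: triage a mammography dataset by DICOM Presentation Intent Type
# (tag 0008,0068). The dict records below are a stand-in for values read
# from real files; the structure is an assumption for illustration.

FOR_PROCESSING = "FOR PROCESSING"
FOR_PRESENTATION = "FOR PRESENTATION"

def split_by_intent(records):
    """Separate near-raw detector images from vendor-post-processed ones."""
    raw, presentation = [], []
    for rec in records:
        intent = rec.get("PresentationIntentType", "")
        if intent == FOR_PROCESSING:
            raw.append(rec)           # close to raw detector data
        elif intent == FOR_PRESENTATION:
            presentation.append(rec)  # vendor 'secret sauce' applied
    return raw, presentation

dataset = [
    {"file": "a.dcm", "PresentationIntentType": "FOR PROCESSING"},
    {"file": "b.dcm", "PresentationIntentType": "FOR PRESENTATION"},
]
raw, presentation = split_by_intent(dataset)
print(len(raw), len(presentation))  # → 1 1
```

Training only on the FOR PROCESSING subset is what avoids baking one vendor's current post-processing into the model.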
The second question is important because so much research like this is done on people of white European ethnicity. There are differences in breast composition between ethnic groups, such as the ratio of dense to fatty breast tissue. This has an impact on how easy it is to identify suspicious structures in mammograms and tomograms. For example, increased dense breast tissue not only increases the chance of breast cancer, it also masks cancerous structures in mammograms, making possible cancers harder to spot.
So yeah, I wish them luck. They'll need it. It will be cool if they can make it work.
The worst bit is when they train their algorithms on the imaging presentation data instead of the raw data. Every manufacturer applies their secret sauce to the raw data before it's shown to a physician, trying to highlight important structures while keeping out the noise. This means an algorithm trained on one manufacturer's images will only work for that manufacturer AND only at that time, as the manufacturers adjust this post-processing all the time.
You can get the raw data from the machines but that generally requires extra setup steps and those images are generally not stored in PACS as they're so much larger.
Whenever I see a study or press release about some magical new AI/ML algorithm and they say they trained on the presentation images I basically ignore it as the algorithm is useless in the real world.
I'm not saying physicians are perfect. Far from it. But in general they have a better understanding of the patient's culture and situation than an algorithm looking at raw data.
As much as it sucks it's important to find a physician that you work well with. I know it's not often possible for people to try out physicians until they find the one they click with.
An interesting lesson I learnt from my work is about a measurement in mammography that's useful for identifying women who will benefit from extra screening. This measurement involves placing the woman into one of four groups based on breast density. The problem is that currently the measurement is done visually, and thus by the individual judgement of the radiologist. This means any two radiologists will only agree on the categorization of breast density about 60-70 percent of the time. Even the same radiologist will disagree with themselves if they categorize the same image a couple of years apart. With modern image processing we can build a repeatable, physics-based measure of breast density. But you'll still get physicians who disagree with the measure because they feel it misses some important part, like the thickness of the measured breast. SaMD is a really interesting field.
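As a toy illustration of what a repeatable measure looks like compared with a visual read, here's a sketch that computes the percent dense area from an already-segmented mammogram and maps it to one of four categories. The segmentation inputs and the 25/50/75 percent cut-offs are assumptions for illustration only, not any official category definitions or a real product's algorithm:

```python
# Sketch: a repeatable density categorization from a segmented mammogram.
# Inputs: a 2D mask of breast pixels and a 2D mask of dense-tissue pixels.
# The threshold values are illustrative assumptions, not clinical cut-offs.

def percent_density(breast_mask, dense_mask):
    """Fraction of breast area classified as dense tissue (0..1)."""
    breast_px = sum(sum(row) for row in breast_mask)
    dense_px = sum(sum(row) for row in dense_mask)
    if breast_px == 0:
        raise ValueError("empty breast mask")
    return dense_px / breast_px

def density_category(pct):
    """Map percent density to one of four groups (thresholds assumed)."""
    if pct < 0.25:
        return "almost entirely fatty"
    if pct < 0.50:
        return "scattered dense tissue"
    if pct < 0.75:
        return "heterogeneously dense"
    return "extremely dense"

breast = [[1, 1, 1, 1]] * 4      # 16 breast pixels
dense  = [[1, 1, 0, 0]] * 4      # 8 of them dense -> 50% density
pct = percent_density(breast, dense)
print(density_category(pct))     # → heterogeneously dense
```

Unlike a visual read, the same inputs always produce the same category here, which is the whole point: the disagreement moves from the measurement itself to whether the measure captures the right things.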
Nope. Most physicians are under absolutely disgusting workloads so would love to have a tool that can lighten that load. However it has to basically work out of the box as they do not have the time to baby the system or completely relearn the entire workflow overnight.
Yes, there are some fears, generally from older physicians, and that's understandable. But the current and near-future systems are so limited that replacement won't happen. And even in the long term you'll still need someone who can translate the output of these systems into something that patients understand and accept.
Just as code automation and high-level languages have not replaced the need for software developers, AI/ML SaMD will not replace physicians.
In the company I work in Watson is generally hated for making our lives needlessly difficult.
AI/ML in medical software has a role and if done properly is amazing to see as it performs as good or even better than humans without getting tired. BUT, it is no panacea and has very important restrictions to keep in mind at all times. As long as you stay within those restrictions it's brilliant.
Also, there are basically no serious companies with products out right now that advertise as being diagnostic. Diagnosis is disgustingly complex even before you start thinking about liability. The company I work for makes it clear as part of our regulatory approval that we are only advisory. We can help highlight pieces of information that are likely to be useful, but the physician must use their training, experience, and knowledge of the patient and their situation/culture to make the final set of judgements.
The biggest area where Software as a Medical Device (SaMD) is going to be useful in the short to medium term is helping identify interesting areas or data for physicians, acting as a second pair of eyes to catch things they may have missed due to fatigue. That means they can spend more time on the hard stuff and get more people the care they need, especially in the sub-specialties.
Since starting my current role I've learnt a shocking amount about medical software. When done properly it truly is world changing. Watson was not done properly.