Good luck to them, they'll need it. Automated cancer detection from mammograms is viciously difficult, especially as you can't diagnose from an image alone. The images just help you identify the people who will need a biopsy, which is the actual diagnostic method.
I worked for a company in the field and even with millions of mammograms available for research, detection of cancer was in the 'active research ongoing' bucket.
The two biggest things I would want to know about Google's model are:

* Does it use FOR PROCESSING images or FOR PRESENTATION images?
* How does it perform on images from BAME ethnic groups?
The first question refers to how the mammogram and tomogram images are stored and transmitted. By default, manufacturers apply their own 'secret sauce' to adjust the raw image and highlight certain structures for radiologists. Not only do these algorithms differ between manufacturers, they differ between models from the same manufacturer. These are the FOR PRESENTATION images. FOR PROCESSING images are generally larger and about as close as you can get to the raw image: they contain the energy detected by the detector with only the most basic processing applied, such as removing the halo effect and other x-ray weirdness.
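For what it's worth, DICOM records this distinction in the Presentation Intent Type attribute, tag (0008,0068), which holds either "FOR PROCESSING" or "FOR PRESENTATION". Here's a minimal sketch of pulling it out of a byte stream, assuming explicit VR little endian encoding; the byte fragment is synthetic, and a real pipeline should use a proper DICOM library like pydicom rather than scanning for tags by hand:

```python
# Sketch: find the Presentation Intent Type attribute (0008,0068) in a DICOM
# byte stream. Assumes explicit VR little endian; illustration only.
import struct
from typing import Optional

INTENT_TAG = b"\x08\x00\x68\x00"  # group 0008, element 0068, little endian


def presentation_intent(dicom_bytes: bytes) -> Optional[str]:
    """Return 'FOR PROCESSING' or 'FOR PRESENTATION' if the tag is present."""
    i = dicom_bytes.find(INTENT_TAG)
    if i == -1:
        return None
    vr = dicom_bytes[i + 4:i + 6]
    if vr != b"CS":  # Code String, the VR this attribute uses
        return None
    (length,) = struct.unpack("<H", dicom_bytes[i + 6:i + 8])
    value = dicom_bytes[i + 8:i + 8 + length]
    return value.decode("ascii").strip()


# Synthetic fragment standing in for a real file (not a valid DICOM file):
fragment = INTENT_TAG + b"CS" + struct.pack("<H", 14) + b"FOR PROCESSING"
print(presentation_intent(fragment))  # FOR PROCESSING
```

A model trained without checking this attribute could silently mix both image types in its training set.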
If the Google models were trained on FOR PRESENTATION images then they will generalise poorly: you would have to retrain the models for every single manufacturer and model out there.
The second question is important because so much research like this is done on people of white European ethnicity. There are differences in breast composition between ethnic groups, such as the ratio of dense to fatty breast tissue, and this affects how easy it is to identify suspicious structures in mammograms and tomograms. For example, increased dense breast tissue not only increases the risk of breast cancer but also masks cancerous structures in mammograms, making possible cancers harder to spot.
So yeah, I wish them luck. They'll need it. It will be cool if they can make it work.