
Google and ethics
Bwahahahahahahahaahaa! Who do they think will be impressed?
Google's controversial DeepMind has created an ethics unit to "explore and understand" the real-world impacts of Artificial Intelligence. The DeepMind Ethics & Society (DMES) group will be composed of both full-time DeepMind employees and external fellows. It will be headed by technology consultant Sean Legassick and former …
under meaningful human control, and be used for socially beneficial purposes"
Google are on the bleeding edge of big-data AI applications because they see it as the best way to further monetise the simply enormous amount of personal information they collect from everyone.
The deductions AI could make from the huge amount of low-quality data they collect are scary, much scarier than the idea that computers and AI will become our masters. We need to be protected from Google, not AI.
"socially beneficial" my arse.
Because there's not enough money to develop anything for the benefit of people who won't be paying for the service.
The real question is why the NHS doesn't have the expertise to do this in-house, to which the answer is obvious: not enough budget to retain staff who are competent enough.
The day the NHS has the budget to advertise for a top-qualified Big Data technician at a salary that beats private industry is the day you'll be loudly complaining to your MP that healthcare is costing way too much out of your monthly salary.
There's really no way out of this.
@ Shadmeister
"A few questions from this - why isn't the NHS or another health service designing, implementing, testing developing etc., and AI doctor ?"
The NHS isn't doing it because it wouldn't work if they tried. This is a (dis)organisation that vacuums up money for PR so it can just about keep some people believing it is the best health service in the world. Its technological capacity is somewhat amusing if you know someone on the inside (or even try to make an appointment).
And while it might suck for us patients, it must also suck for the doctors and nurses saddled with the monster.
There was a recent article in El Reg:
https://www.theregister.co.uk/2017/09/25/ai_in_medicine/
- about why this doesn't really work. One of the main reasons is that experts are really bad at explaining why they came to a particular decision. So if your expert system's training is no good, the system makes incorrect or low-confidence decisions, which is not something you want for a medical diagnosis.
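To illustrate the point, here's a minimal sketch (the rules, confidence scores and the 0.75 threshold are all invented) of what poor training buys you: once the learned confidences are weak, the only safe thing a diagnostic system can do is punt back to a human, which rather defeats the object.

# A toy "expert system": symptom sets mapped to a diagnosis and a
# confidence supposedly learned from human experts. All values invented.
RULES = {
    frozenset({"fever", "cough"}): ("flu", 0.62),
    frozenset({"chest pain", "breathlessness"}): ("cardiac event", 0.91),
    frozenset({"headache"}): ("tension headache", 0.48),
}

CONFIDENCE_FLOOR = 0.75  # below this, don't trust the machine

def diagnose(symptoms):
    # Pick the rule that best overlaps the reported symptoms.
    rule, (diagnosis, confidence) = max(
        RULES.items(), key=lambda kv: len(kv[0] & symptoms)
    )
    if not rule & symptoms:
        return "no matching rule -- refer to a human"
    if confidence < CONFIDENCE_FLOOR:
        # A low-confidence decision: exactly what you don't want in medicine.
        return "unsure (best guess: " + diagnosis + ") -- refer to a human"
    return diagnosis

print(diagnose({"fever", "cough"}))               # punts back to a human
print(diagnose({"chest pain", "breathlessness"})) # cardiac event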
Phi.
An AI GP is easy (full implementation sketched below). Just a random number generator picking between three options: "come back and see me if it doesn't get better", "it's a virus, go home and rest" or "call an ambulance".
What we need is AI hospital receptionists, then we can eliminate GPs and get patients in front of competent health professionals (be they nurses, physios or specialist doctors).
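For the avoidance of doubt, here is the complete implementation (a joke sketch, obviously; the response list is straight from the post above):

import random

# The entire clinical knowledge base of the AI GP.
AI_GP_RESPONSES = [
    "come back and see me if it doesn't get better",
    "it's a virus, go home and rest",
    "call an ambulance",
]

def ai_gp(symptoms):
    # State-of-the-art diagnosis: ignore the symptoms entirely.
    return random.choice(AI_GP_RESPONSES)

print(ai_gp("persistent cough and mild fever"))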
For a legal department this might be a synonym for "learn to drive a coach and horses through". Call me cynical, but for an ethics unit like this, it could mean "shape the public debate". Whatever becomes the established consensus on what is and is not acceptable in AI ethics could clearly have a big effect on Google's future profitability. So why wouldn't they want to set up a well-funded big hitter that could nudge the goalposts in the direction they want?