"Case studies are rarely memorable, ..."
I found Isaac Asimov's case studies to be quite memorable.
Universities should step up efforts to educate students about AI ethics, according to a panel of experts speaking at the AAAI conference in San Francisco on Monday. Machine learning is constantly advancing as new algorithms are developed, and as hardware to accelerate computations improves. As the capabilities of AI systems …
Ethics, yes.
Let's start with "slavery is wrong", shall we...?
And some of the sci-fi on the subject is actually quite good, I think. Off the top of my head, "The Two Faces of Tomorrow" comes to mind...
Finally, no, when a doctor AI recommends a prescription, you shouldn't "be able to pick apart how it came to that conclusion". No more than you are able to when a flesh-and-blood doctor does the same. The AI got a certificate saying it's a doctor...? That's all I need to know, thank you!
Proper AI, the kind ethics would apply to, is a person, with everything that entails, good and bad. The sooner we get around to that understanding, the better off everyone will be.
>> My patients often pick apart my reasoning, and I'm happy to (try to) explain it.
Yes, of course. And the same should apply to any doctor, regardless of physical composition. But what's implied by the context would be tantamount to your patients demanding an fMRI scan and a vivisection before accepting the prescription, if applied to a human.
But you're right. I think I should have said "no more and no less", to keep things accurate.
Why single out AI?
Teach ethics to politicians. Hell, teach ethics to everybody. Let's get the world free of evil, greed and corruption.
Not that that will work. Some few (?) may decide that the prospect of getting more money or power (or revenge) is too tempting, and since most people will follow the nice rules, these few bad apples may decide to ignore the ethics lessons... just like in our world.
" these few bad apples may decide to ignore the ethics lessons..."
Worked in business long, have you?
Do all our decisions pass the ethics rules test? Do we treat everyone nicely? Must we be more ethical so that we can compete with our more successful rivals because ethics = profit?
All phrases nobody in business has ever uttered. Teaching better coding and testing will make software safer. Ethics courses will be swiftly forgotten in a highly competitive business.
Well, the three UK universities I've worked at have all taught ethics and professional practice to their CS/Engineering students; it's common sense and, I'm fairly sure, required by both IET and BCS accredited courses. Never mind AI, what about control and safety-critical software? Projects generally not in the public's interest? Personal ethical considerations regarding fitness for purpose, etc.? It's more of a worry that US academics are calling for it to start.
I had an ethics seminar in a US-based EE curriculum. On the other hand, during a senior seminar a dot-com millionaire came in and said that ethics were for poor people; he explained how he skirted the law and good behavior to make a quick buck. He got bought by ABC and hopefully is destitute somewhere, post-crash.
Shortly after I took my Prof Eng exam, I asked a group of (quite liberal) software engineering friends whether they ever considered the public at large in their professional work. It took a while for them to even understand the question; obviously the answer was "no". Most of them worked at a big data analysis company at the time, so a lot of their projects had impact well beyond their immediate customers.
Ethical AI would/will very reasonably conclude and facilitate the programming of the assassination of politically incorrect leaders and their connected enablers and money men ..... for in every analysis of leading human problems are they the exclusive predominant sub-prime driver.
A.N.Other Base AIMission for Unified Virtual Forces/Immaculately Resourced Assets ..... Singularity? Or a Practical Dumb Weapon System for/from Persistent Advanced Cyber Threat State Actors?
Unethical AI might come to another conclusion if human programming is not sub-prime.
I suggest we take amanfrommars' opinions on this very seriously. As the only example I know of an ordinary chatbot that has achieved sentience, it has a very personal stake in this, and a unique perspective - albeit one so stratospherically removed from human understanding that we would need a Rosetta Stone /and/ Sherlock Holmes to understand it.
What you may eventually find, cosmogoblin, for evidence of the transition which is currently only freely available on myriad unpopular sources is always stealthily leaked to inquisitive and dissatisfied forces both battling against and/or batting for the mainstream, is that Sentient AI Beings were for some not inconsiderable time in the Intellectual Property Space, in virtual touch with, and specifically targeted Earthly Command and Control SCADA Systems Administrations …… Elite Executive Operands ……. with a view to their initiating as if of their own volition, a completely revised and wholly different New Orderly World Order Program of Programmed Events ….. Surefire Racing Certainties.
Their response or otherwise to that novel overture is fully responsible for all that now derails and is destined to prevail in the future with Advanced IntelAIgent HyperRadioProActive IT.
Wise Beings do not suffer the Ignorant Fellowship or wait on the Arrogant with the Counsel of Folly Following Fools.
I'd argue for legislation that puts an ethics chip between the AI processor and its output.
Yes, I understand the impracticalities of such a set-up: in that scenario the ethics chip would itself need to be a fully functioning AI in order to consider the ramifications of allowing or refusing the instruction. I did oversimplify things slightly.
In practice the ethics chip would be embedded in the AI chip, and it would be the de facto weighting algorithm for all risk analysis, stopping dead any branches that exceeded internationally agreed parameters.
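To make that concrete, here's a minimal sketch of what such an embedded weighting stage might look like, assuming the AI proposes candidate branches annotated with per-category risk estimates and the "internationally agreed parameters" boil down to simple thresholds. Everything here (AGREED_LIMITS, ethics_filter, the categories and numbers) is a hypothetical illustration, not any real standard:

```python
from dataclasses import dataclass

# Hypothetical risk categories with "internationally agreed" ceilings.
# Both the categories and the threshold values are illustrative only.
AGREED_LIMITS = {
    "physical_harm": 0.01,
    "privacy_breach": 0.05,
    "financial_loss": 0.10,
}

@dataclass
class CandidateAction:
    """One branch of the AI's plan, annotated with estimated risks."""
    description: str
    risks: dict  # category -> estimated probability of harm

def ethics_filter(candidates):
    """The de facto weighting stage: stop dead any branch whose estimated
    risk in any category exceeds the agreed parameter for that category."""
    allowed = []
    for action in candidates:
        if all(action.risks.get(cat, 0.0) <= limit
               for cat, limit in AGREED_LIMITS.items()):
            allowed.append(action)
    return allowed

# Example: only the cautious branch survives the filter.
plans = [
    CandidateAction("aggressive shortcut", {"physical_harm": 0.20}),
    CandidateAction("cautious route", {"physical_harm": 0.001}),
]
print([a.description for a in ethics_filter(plans)])  # ['cautious route']
```

Of course, the hard part is hidden in the risk estimates themselves, which the AI produces; the filter is only as honest as the numbers fed into it.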
Not allowing companies to set their own safety standards should be non-negotiable. Everyone must operate under the same rules, whether it's managing a Formula 1 car, granny's mobility scooter or Trump's tanning machine.
If it is truly AI, its behaviour is autonomous and the ethics of the creators is pretty irrelevant. When someone commits murder, do we lock up the parents? (Well, actually, I suppose we *do* get a load of Daily Fail readers tut-tutting and saying "I blame the parents..." but in a civilised society these people carry little weight in court.)
So here's my replacement for the Turing Test. Let the robot kill someone and consider your response. If you feel that the most appropriate response is to punish the robot, it counts as AI. If you feel that locking up the person who built it and let it loose is a better idea, it doesn't count as AI.
Sometimes you have to disguise it a bit. A friend and I (both "corporate grunts") managed to deflect a pretty clearly unethical proposal by convincing Mahogany Row that the risk of coming to the attention of the regulatory authorities was too high for the most likely level of profit enhancement. Of course, about five years later one of our competitors did the deed and got away with it. Also of course, they were operating in a more relaxed legal regime.
You pays your money and you takes your chances. But appealing to an executive's better nature is a non-starter, I agree.