A group of researchers working for National ICT Australia reckons computer science courses need to look at artificial intelligence from an ethical point of view – and the popularity of sci-fi among comp.sci students makes that a good place to start. As the research team, which included NICTA's Nicholas Mattei, the University …
About as effective as us defeating the Surveillance State by reading 1984 and Animal Farm.
But reading those books is a good start towards that goal! Add also "The First Circle" by Aleksandr Solzhenitsyn (which also includes a good demonstration of how not to manage an R&D project).
Thing is, most people haven't read 1984 and Animal Farm, which might have something to do with their acceptance of "the Surveillance State".
To be fair, Animal Farm has little to say about surveillance.
One problem with Nineteen Eighty-Four1 as an object lesson is that it depicts a totalitarian police state. That makes it apropos for, say, North Korea or the former East Germany; but the most successful contemporary surveillance states are the liberal-capitalist ones, which prefer the carrot2 to the stick.
For those societies something like Brave New World rings closer to home, though frankly it shows its age. I can't think offhand of a really ideal text for this purpose.
1Orwell hated the title being set in digits. He didn't like the title at all - it was forced on him by the publisher - but he particularly didn't like the semiliterate rendering of it as numerals.
2And more generally the subjective operations of power, such as interpellation, the management of desire, foreclosure, &c.
Michael Frayn, 'The Tin Men', 'A Very Private Life'
John Sladek; many but especially 'Tik Tok'
John Wyndham 'Compassion Circuit'
Manuel De Landa 'War in the age of intelligent machines'
Stanislaw Lem 'Summa Technologiae'
Lots of movies - 'Robot and Frank' is good.
Lots more to be found at http://www.technovelgy.com/
Do projects with http://fffff.at/about/
Fun, but agree with above, unconvinced of real value.
I think his first and "zeroth" laws are fundamentally the most important.
They'll also get completely ignored. Robots have far too many tempting features for authoritarian and quasi-authoritarian regimes to ignore.
In addition, who defines "harm"? Are you "harmed" if your robot hands over evidence to the authorities of you driving at 78mph at 2am on the M1? Is humanity "harmed" if the entire Middle East is turned into a glass car park and repopulated with prison inmates from the West, a la Australia's settlement?
Too many seemingly objective principles are subordinate to subjective judgments at their root.
A subjective interpretation of the Three Laws meant that there would often be a grey area where they met or overlapped. Many of his Robot stories explored this issue.
Perfect for Ethics courses I would have thought, but perhaps less perfect for stopping Arnie in his tracks.
Asimov's robot stories mostly deal with a fairly narrow ethical issue, vis-a-vis AI: how a simple deontological moral scheme can break down under messy, real-world conditions. They're excellent at raising problems in that arena and as intellectual exercise (as well as being good reads), but the various other texts mentioned so far generally avoid those sorts of simplifying assumptions in favor of poking at a big messy tangle of ethical problems.
Asimov's kind of the pragmatist or logical positivist of AI ethical philosophers.
That doesn't mean he shouldn't be included in the discussion, but it might explain why his name wasn't on the original list.
There might be something in this, although not from an AI perspective.
Society is manifesting its true colours via the likes of Facebook, Instagram, Twitter etc. Conversations are boiled down to 140 or so characters, tiny snippets of misleading information and images of cats...
If society is to be saved then it has to re-start educating itself on something other than the constant stream of drivel upon which it currently feeds.
That point is quite well fictionalized in Dave Eggers' The Circle. And yes, I do think all those people who read "1984" in the '80s were influenced by it, to a degree. Certainly, where I come from, it was considered almost mandatory reading, and was quite often cited in public discussion or in the press. Sadly, that has become less frequent.
Ray Bradbury had something to say about this too.
See "Fahrenheit 451" with its thought police and the suppression of all fiction apart from Hemingway-style realism and all entertainment apart from mindless 24 hour soaps.
For those who can't or won't read there's a passable film version....
"of all fiction apart from Hemingway-style realism"
Wasn't 451 the Bradbury story where all books were banned? (And burning them was the job of the firemen). He had another story where only non-realist fiction was burned... I think it was (google google) "Usher II" (part of the Martian Chronicles).
toasters that talk, an alarm clock that follows you everywhere, sympathising with 'white goods' as no-one else really recognises their worth (and thus getting the fridge to rat on other devices)
Smith turns it into everyday stuff, not 'shock horror' - I guess it's further down the line than a lot of stuff, as the chatting between meat and machine is commonplace.
Machines may have differing ethics than meat - but which is more 'human'?
"toasters that talk" I feel obliged at this point to mention "Talkie Toaster" from Red Dwarf :
"Howdy doodly do. How's it going? I'm Talkie, Talkie Toaster, your chirpy breakfast companion. Talkie's the name, toasting's the game. Anyone like any toast? "
Frankly I think I prefer the Terminators thanks, although I think Talkie is a much more likely scenario, along with the kind of products produced by the Sirius Cybernetics Corporation from The Hitchhiker's Guide to the Galaxy.
My recommendation for a good book that would help to address the generalized fear of AI:
'The Theory and Practice of Circuit Breakers'
Another good one is:
'Kicking out Power Cords, for fun and profit'
Not to mention:
'Paintball - targeting HAL's big red glowing eye from 30m'
Clearly this 'think' tank is drinking or smoking whatever grants it gets. Its members certainly don't come round to the Reg much, where the denizens have been armed with this type of knowledge and throwing it out there while figuratively ducking down behind the barricades.
But guess what? The people who will inevitably be responsible for any 'skynet' either don't read those types of books, or read them and then do it anyway, having not gotten the message.
Covered a lot of the issues of genuine AI - including AI with intelligence vastly beyond human - in his Culture novels. It is, perhaps, noteworthy that it is implicit in his treatment that the main means of enforcement of "morality" in his universe is peer pressure. In fact, one of his characters in "Player of Games", when tempted to do "wrong", notes that there are no effective disincentives EXCEPT that he would no longer be able to participate in society as he had before.
That guy 'Murphy' has nothing on 'the law of unintended consequences'...
Read these works of SCI-FI and see if you still think "A.I. in a box" is a good idea.
Harlan Ellison - "I Have No Mouth, and I Must Scream" - Built for war, AI goes insane and kills all but 5 humans, makes them virtually immortal & then tortures them for eternity.
Dean Koontz - "Demon Seed" - AI traps a scientist's wife and impregnates her with a humanoid life form of its own design and mind so it may live.
D.F. Jones - "Colossus" - Yankee & Soviet AIs get together and figure this whole cold war business is nonsense; they decide to optimize humanity and avoid war altogether. Step out of line and you get nuked.
You do know those are works of fiction, right?
Certainly I'm all for reading fiction; one of my degrees is in literature, and I nearly got a PhD half in the subject. And speculative fiction, done well, is excellent grist for the mill. But it doesn't constitute a compelling, logical argument about the outcome of some process; unlike history, it doesn't even constitute empirical evidence of past behavior. A good Bayesian reasoner would still take it into account, but to magnify it into a blanket rejection of an entire technological regime would rather appear to be a category error.
(And how does "AI-in-a-box" differ from "AI" without the qualifier?)
Charlie Stross does an outstanding job with near-future computer issues in "Halting State." It is full of enhanced reality used by cops on the beat, Chinese hackers, MMORPG banks, and self-driving cars. His sequel, Rule 34, has lots on 3D printing and AI but isn't quite as much of a fun read.
This proposal, like so many, founders on the shoals of the undergraduate schedule. With 8 semesters or the equivalent, the curriculum is already packed full of the classes deemed most important by practitioners in the field, or general-education and breadth requirements, or the prejudices of accreditation boards (such as a laboratory-science requirement). Sticking not only another1 ethics course in the list of requirements, but one so narrowly focused at that, is very difficult to justify.
A better option would be to encourage CS programs to work with Philosophy and/or PoliSci departments to develop Ethics minors and double-majors, for CS undergrads particularly interested in the subject. Minors or double-majors in mathematics, EE, business, design, and so on are already common for CS students; it doesn't hurt to advertise additional possibilities.
But as a requirement it makes little sense.
And I'll note that when I was a CS undergrad, the most popular humanities elective in my cohort was the "Science Fiction Literature" course offered by English, so many of the students were reading this sort of thing anyway.
1There was a required ethics course in the CS syllabus when I got my BS in the subject, and I think the ACM still recommends one.
Biting the hand that feeds IT © 1998–2021