re: claimed responsibility for a terrorist bombing
Wouldn't it be easier for them to tweet "blowing Robin Hood airport sky high"?
Boffins in America report that they have successfully developed a method for driving computers insane in much the same way as human brains afflicted by schizophrenia. A computer involved in their study became so unhinged that it apparently "claimed responsibility for a terrorist bombing". The research involved meddling with a …
It's emergent behaviour, from complexity.
Though it's hardly news - I remember hearing about nets going bonkers about 20 years ago while I was working on them at college. Nothing so hi-tech as telling stories, but the internal architecture (the weight values applied to the input/output of each neuron, in particular) could be made to oscillate wildly if you over-trained a net, applied too heavy a back-error-propagation factor, or buggered about with its training data sets - there's a toy sketch of the effect below. (Think about it: yesterday 2+2 was 4, today it's 5, tomorrow it's orange - enough to give anyone the heebie-jeebies.)
It used to send shivers down my spine to think that a bit of code could go postal. There was also a story about them dreaming, too - disconnect the inputs, let it run, and all hell breaks loose!
I, for one, welcome our slightly flaky Si overlords.
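For anyone who never watched it happen, here's a toy sketch of that oscillation: plain Python, a single made-up linear neuron trained with the delta rule, nothing like the nets in the paper. Push the back-propagation factor (the learning rate) too high and each correction overshoots the last, so the weight swings ever more wildly instead of settling.

# Toy illustration (not the study's network): one linear "neuron" y = w * x
# trained by gradient descent. A sane learning rate converges on the target;
# too heavy a back-propagation factor makes the weight oscillate and blow up.

def train(learning_rate, steps=12, target_w=4.0, x=1.0):
    """Fit w so that w * x matches target_w * x, recording w after each update."""
    w = 0.0
    history = []
    for _ in range(steps):
        error = w * x - target_w * x      # prediction minus target
        w -= learning_rate * error * x    # standard delta-rule update
        history.append(round(w, 3))
    return history

print("sensible rate:", train(learning_rate=0.5))  # creeps up and settles near 4.0
print("too heavy    :", train(learning_rate=2.5))  # overshoots, flips sign, diverges

With x = 1 the update works out to w <- (1 - rate) * w + rate * 4, so any rate where |1 - rate| >= 1 can never settle - which is pretty much the "yesterday 2+2 was 4, today it's 5" feeling from the inside.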
Since, in order to model the psychotropic effects of various chemicals, you'd need to start with a complete simulation of a working human brain, right down to the atomic level. Which raises interesting, if entirely academic, ethical questions of its own: either the simulated brain isn't doing anything, which means you're not going to be able to see drug effects on the mechanism of cognition and therefore aren't learning anything you couldn't learn by modeling a much simpler network -- or you *can* see the effect on cognition, which means your model of a human brain is simulating thinking, which means you need to either start worrying about what it's thinking and experiencing, or just change your surname to Mengele and have done.
Happily, though, thanks to the enormous theoretical problems and gargantuan practical difficulties that'd need to be overcome to get us from here to there, that isn't going to be a problem for a long, long time.
We already have them for the Java Virtual Machine.
They would be A) Spring and B) Hibernate (my opinion).
Unless the questions were A) how can I generate a great number of NullPointerExceptions in no time? and B) how can I make something as straightforward as SQL difficult and counter-intuitive?
Give them some time to transfer their deep hatred of Java programmers onto operating systems and see if you can get computers to commit suicide too.
GLaDOS is questionable due to the lack of sufficient backstory; as for the others, HAL was poorly implemented -- seriously, how many humans go mad and kill people as a result of being lied to? Chandra ought to be stood against a wall! -- and SHODAN's rampancy resulted from mistreatment at the hands of a sociopathic corporate executive with a profit motive in place of his soul.
Without access to the full paper, and therefore having to go just off the abstract, it sounds as though it's the university's marketing department that has added the BMX appeal.
Driving neural networks apparently insane is easy - poor learning algorithms or insufficient training can lead to one being convinced (with pattern matchers for example) that a picture of a warthog is of Aunty Flo.
That they drove it insane is not interesting. What _is_ interesting is lost in the sentence "But they tinkered with the automated mind in a fashion equivalent to the effects of an excessive release of dopamine in a human brain". As the abstract says, they actually tinkered with it in lots of ways, _precisely to see_ if they could find a tinker set analogous to how schizophrenics go mad. They seem to have done so, and well done them. I'll bet that some of the stories the non-human-like versions came up with were equally or even more hilarious. I look forward to their future use in developing better plots for Dan Brown.
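The paper's actual setup (a story-memorising network pushed into "hyperlearning") doesn't fit in a comment box, so here's a loose stand-in in Python: a toy Hopfield-style associative memory. Ask it to hold a sensible number of memories and recall comes back clean; cram in far more than it can cleanly hold - a crude cousin of "learning too much, too hard", not the study's mechanism - and recall collapses into spurious blends. The network size and pattern counts are invented purely for illustration.

import numpy as np

# Toy associative memory (a Hopfield network), standing in very loosely for
# the story-memorising network in the paper. Patterns are stored by Hebbian
# learning; recall starts from a stored pattern and repeatedly updates the
# neurons. Within capacity the memory is a fixed point; overloaded, recall
# typically drifts off into a spurious blend of stored patterns.

rng = np.random.default_rng(0)

def recall_accuracy(n_neurons, n_patterns, steps=20):
    """Store random +/-1 patterns, then check how much of the first one
    survives when recall starts from that very pattern."""
    patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))
    weights = (patterns.T @ patterns).astype(float) / n_neurons
    np.fill_diagonal(weights, 0.0)                   # no self-connections
    state = patterns[0].copy()
    for _ in range(steps):                           # synchronous sign updates
        state = np.where(weights @ state >= 0, 1, -1)
    return float(np.mean(state == patterns[0]))      # 1.0 = perfect recall

print("within capacity:", recall_accuracy(n_neurons=120, n_patterns=4))
print("overloaded     :", recall_accuracy(n_neurons=120, n_patterns=50))

Hebbian storage tops out at roughly 0.14 patterns per neuron; push past that and the attractors stop being the memories you stored, which is about as close as a dozen lines of toy code get to a net "claiming responsibility" for a story it was never told.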
""We have so much more control over neural networks than we could ever have over human subjects," says Grasemann."
And what, pray tell, Herr Grasemann, are human networks/societies other than just ignorant neural networks? Ignorant neural networks over which ....... well, let us say some really SMART computer programs and/or programmers have the exercise of command and control.
Are you still content to be stuck in that reporting of events rut, El Reg, rather than leading from the front with the making of events for reporting ........ with an HyperRadioProActive Programs and SMARTer ProgramMING connection?
Maybe I'm just getting jaded and cynical, but how much research funding went into this? It's just a variation of what all the kids used to do to the computers in Dixons in the '80s:
10 PRINT "It was me, I did the terrorist bombing"
20 GOTO 10
RUN
One of the earliest (1940s) explicitly cybernetic ideas arose in the interchanges between Norbert Wiener and Gregory Bateson, and it went something like this: "How would you design a machine which could act like a schizophrenic?"
This (what we would now call) 'reverse engineering' of insanity guided decades of research by Bateson into the nature of schizophrenic communication, leading to the 'double-bind' hypothesis, the application of Russell's theory of logical types to communication theory, and Bateson's conclusions - and demonstrations - that similar patterns drive creativity and evolution itself.
We should have had a clue from the schizophrenics themselves, who invariably have paranoid fantasies about 'machines' or technologies which control their minds (or the minds of everyone else) and make them crazy. The machines are indeed real, but they are made of flesh and blood, laws and rules rather than metal and microelectronics, although the internet has opened up the possibility of software which could generate schizophrenia in its users. (There's an app for that?) You can pick your own examples of software which 'drives you mad'.
What's missing here, then, is not the banal conclusion that the machine was 'acting all crazy', but that it was making wild creative leaps and quite literally 'thinking out of the box', which is something that has eluded AI research for decades. Good stuff.
There used to be a game called Creatures that featured cute little animals called Norns that you could train. You'd train them to play with toys, which made them happy; teach them to speak so they could say what they were feeling; punish them for going near dangerous things; teach them the right foods to eat; teach them to be social; and ultimately breed them. Their initial characteristics were based on genes, and their learning came from a sophisticated neural net.
So of course some people set out to systematically torture them. One guy, known as Anti-Norn, was notorious for uploading abused Norns and challenging people to rehabilitate them. They had violent mood swings, attempted self-harm, ate poison, cowered in corners, tried to drown themselves and so forth. Some of them were clinically mad. And all of this happened in a mainstream game that appeared 15 years ago.
So while it's interesting to see research showing that computers can go mad, it's hardly a new phenomenon - although in neither case is the computer itself actually "mad"; it's the software simulation running on top that is.
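For the curious, the reward/punishment training described above boils down to something like the sketch below - emphatically not the actual Creatures brain, which was a far richer neural and biochemical model, just an invented minimal value update (in Python) to show why blanket punishment hollows behaviour out.

# Not the Creatures engine - just a bare-bones reward/punishment update.
# Each action's value is nudged toward the feedback it receives.

ACTIONS = ["play with toy", "eat food", "approach stranger", "touch fire"]

def train(episodes, learning_rate=0.3):
    values = {action: 0.0 for action in ACTIONS}
    for action, feedback in episodes:
        values[action] += learning_rate * (feedback - values[action])
    return values

# A cared-for Norn: rewarded for good habits, punished near danger.
cared_for = train([("play with toy", +1), ("eat food", +1), ("touch fire", -1)] * 5)

# An Anti-Norn special: punished no matter what it does.
abused = train([(action, -1) for action in ACTIONS] * 5)

print(cared_for)  # toys and food end up positive, fire negative
print(abused)     # everything negative - no action looks worth taking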
Sort of like how a Windows machine goes crazy over time, randomly misbehaving until it eventually starts generating random events for no reason.
Normally caused by buggy software, bloated apps that leave stray DLLs behind when they're uninstalled, dodgy antivirus, and all the other "unforeseen" events such as Junior unplugging it in the middle of a Windows update cycle.
AC, because this is probably why his machine throws Explorer.exe errors when transferring files...