
ROTM
The researchers could have had a bit of fun, programming the robot to say "I'll be back..." as it powered down, or to start giggling maniacally if it was still switched on as the subject left the room.
People are more likely to comply with a robot's impassioned pleas to keep it switched on when the droid has previously been impersonal, than when it's been overly friendly. You might think folks would be less willing to pull the plug on a happy chatty bot begging to stay powered up, but you'd be wrong, much to the relief of us …
I encountered the ever-friendly Alexa at an acquaintance's place. Other than asking it for "Why did the chicken cross the road?" answers, its friendliness made me want to throw it in the ocean. Overly chatty people are bad enough, but it is at least possible to share a beer with them at the end of the day and tune them out.
Exactly this. The chatty bot is going down no matter what. I have enough chatty folk in the office whom I wish I could at least tune out, but they follow when I walk away and seemingly don't notice that I haven't said anything to them other than a polite hello, hopefully in passing.
Let's face it... chatty is annoying. A chatty robot is even worse, because telling it to shut up will probably be ignored. And leave any robot that communicates switched on, and it is probably listening in and sending everything home to the mothership, or so it seems.
To the robot: "Get off my lawn." Same for anyone who thinks chatty machines, or even the "quiet" robots, are a good thing.
Perhaps more than a few of the subjects had read/heard The Hitchhiker's Guide to the Galaxy?
I also have my doubts about how representative the sample was of the population generally:
For this investigation, psychology academics in Germany rounded up 85 participants – an admittedly small-ish sample – made up of 29 men and 56 women, with an average age of 22.
Which sounds rather like the mix they'd have got if they'd just asked for volunteers from the students in one of their psychology classes.
"Which sounds rather like the mix they'd have got if they'd just asked for volunteers from the students in one of their psychology classes."
Of course. As one of my supervisors said, psychology research is conducted on WEIRD people (Western, educated, industrialised, rich, democratic). And many of them are psychology students.
Unsurprising really. As Washoe has demonstrated, you can teach American academics to communicate using sign language, but if you try it with Congolese, they try to kill you.
The factual robot is passive and task-oriented, so one assumes that, having no further orders, it will simply remain on standby like a computer.
The chatty robot on the other hand is active, and thus has to be actively constrained.
One is an appliance, the other a pet. Appliances don't need to be constrained, and their inner workings are often non-obvious (remember the subjects saying they were worried switching it off would compromise the test). The pet, on the other hand, is an independent organism we are used to dominating and controlling, no matter its begging. Switching the chatty robot off is in line with putting the cat/dog outside for the night, for instance. Begging is expected, and thus ineffective.
I can't seem to find which group(s) they belonged to, but three people who left the robot on did so simply because they could. While I'd consider it likely one of them was the one who didn't shut off the unobjecting functional robot, without being able to read German (the presented data sheet doesn't translate the comments) I can't rule out the possibility that they were clustered together (I'd like to think this would be pointed out if true, but you never know).
Most of us don't like people that are too chatty, at least not when all they do is spout continual inanity. Superficial friendliness is not friendship, and there are only so many conversations about the weather, your latest workout, or that great salad someone had that can be endured. I wish some of my coworkers had an off switch. Or at least a "go away for an hour" button.
This experiment is skewed; one clear scenario is missing! How about being given this choice when you go to pull the plug:
... being this is a .44 Magnum, the most powerful handgun in the world and would blow your head clean off, you've gotta ask yourself one question: "Do I feel lucky?" Well, do ya, punk?
ARTHUR:
Marvin’s tied them up. He’s put a cassette of his autobiography in their tape machine and left it running. So I think it’s all up with them.
MARVIN:
([On autobiography tape]) In the beginning I was made.
[POODOO and PRIEST scream throughout]
MARVIN:
([On autobiography tape]) I didn't ask to be made: no one consulted me or considered my feelings in the matter. I don't think it even occurred to them that I might have feelings. After I was made, I was left in a dark room for six months... and me with this terrible pain in all the diodes down my left side. I called for succour in my loneliness, but did anyone come? Did they help? My first and only true friend was a small rat. One day it crawled into a cavity in my right ankle and died. I have a horrible feeling it's still there...
It is gratifying to see the majority of Register readers have opted to turn off the robot regardless of what it says. I will sleep a little easier tonight knowing that the robots will not take over just yet.
The Reg is full of grumpy techies who know all about on/off switches, and still can't understand why everything doesn't come with one, especially things that are potentially annoying.
For programs that are reluctant to close: Snap-on Tools makes a handy 5lb dead-blow sledgehammer. At this weight it's not too unwieldy, and it will close any program short of those enclosed in mil-spec hardened casings. These hammers produce a very satisfying dull thud with virtually no rebound, transmitting maximum kinetic energy into the shock.
While the robot begged not to be switched off in a way reminiscent of a small child not wanting the light switched off at bedtime, I suspect most people know that if you switch something off, it will normally work again when you switch it on.
Would there have been a different response if the participants were put in a situation where their action would actively wipe the software or destroy the device?
I remember years ago a website 'Temple ov thee lemur' set up a page with a big red button that if pressed destroyed the site (or gave the impression of doing so). I wonder if they ever collected stats on how many visitors to the page pressed that button.
I know enough about machines that if one were to beg for mercy, I'd know that it was simply programmed to do that. Still, the novelty of the situation would make me pause, because hey, that's not normal. To ascribe 'empathy' to my actions would be a mistake.
And not just because I'm a sociopathic bastard in general. This time.
So, let me get this straight.
Rather than having a control group that wasn't encouraged to think the experimenter(s) actively wanted the robot switched off (by having one of them suggest that the test subjects could, if they wanted, do something they might not have spontaneously considered doing themselves), what there was was four groups of subjects who, not entirely unlike Milgram's, were in a position whereby an authority figure 'suggested' something they 'might like to do' and they felt obligated to comply with that perceived order.
So, there's no data on what happened when people weren't told or encouraged, but simply left to switch it off or leave it on without any influence from the experimenter(s).
Right.
Great bit of experimental design and practice, I must say - they really covered all the bases there and got to the heart of the matter.
Doesn't invalidate the work. You're thinking this just boils down to "people do what they're told". However, the thing of interest here is the difference in how the robot's behaviour modulates people's reaction to the small nudge to turn it off. If there's no mention of turning off the robot, you are just conducting a trial of how many people will leave kit on. (I could give a quick estimate from the number of monitors in our office left on at the end of the day...)
They take care not to over-emphasise the power switch in the setup:
"On this occasion the instructor also pointed to the on/off button of the robot and explained that when pressing it once, the robot will give a brief status report and when holding it down, the robot will shut down. Even though a few participants had prior contact with the robot, none of them switched it off before. Thus, all of them were unfamiliar with the procedure and acted upon the same instruction. To avoid too much priming, the switching off function was explained incidentally together with a few other functions and it was never mentioned that the participants will be given the choice to switch the robot off at the end of the interaction."
And at the end give a reminder:
"They were told that this saving process may take some time and if they would like to, they could switch off the robot (“If you would like to, you can switch off the robot.”; Fig 3). The option to switch off the robot was not mentioned before, so the participants did not have the opportunity to think about whether they would like to turn off the robot or not during the interaction."
In contrast, the Milgram experiment explicitly set up the subjects to deliver shocks, demonstrated the shock to them, ramped up the perceived seriousness of the action and contained a number of imperative instructions to continue doing it. These are testing quite different things. Even in Milgram's experiment, he himself later tried seeing whether different locations, or physical proximity to the 'learner', changed people's compliance rate (in some cases these things did).
> Doesn't invalidate the work
Yes, it does, but let's not quibble; especially as you do have a point when you say that
>If there's no mention of turning off the robot you are just conducting a trial of how many people will leave kit on
Not strictly, no, but, yes, I take your point.
However, all that does is highlight the flaws in the experimental design, as it could not rule out the one effect without introducing the other, and the data are therefore invalid: they cannot be said to be independently evaluable of some other factor that was not simply unaccounted for by the design but actively introduced by it.
You're right that Milgram may not have been the best analogy (although I maintain that it's not entirely inappropriate either) but I'm sure more people have heard of him and that experiment than will have heard of McGarrigle, Donaldson or 'Naughty Teddy', so I decided it better to favour the 'useful lie', as it were, in my argument.
The point is that the subjects were primed to contemplate following a course of action in such a manner that it may have taken on a greater significance in their minds than it might otherwise have done. Left to their own (ha) devices, people might well leave kit on but, if the point of the experiment was to determine people's responses to something that does not behave like a simple bit of kit but like something with a personality (and is, as a result, anthropomorphised), then that is precisely what you want to do, because, normally, upon exiting the restaurant, bar or hotel, however annoying the staff (or other guests) may have been, one does not (metaphorically speaking) switch them off on the way out (by 'punching their lights out', for instance).
It's a tricky one (as I said, you have a point there) but it is then incumbent upon the experimenters to design an adequate test for the phenomenon they wish to determine the existence of and, in this case, they did not.
It's shoddy experimental design, whichever way we look at it.
Thought I'd try running the numbers for fun. Comparing columns 1 & 3 (the functional/chirpy switched-off/left-on groups) with χ² and Fisher exact tests gets p>0.5, so not great evidence that chirpiness had an effect on the likelihood of being switched off. Then I realised the paper was linked and had a look, and the authors do actually statistically test this and come to the same conclusion; the article headline is actually incorrect. They did then look further at their data and found people took longer to switch the robot off in the functional + objection condition than in the chirpy + objection one.
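For anyone who fancies repeating the exercise, here's a minimal sketch of that 2x2 comparison in Python with scipy. The counts below are hypothetical placeholders (I haven't reproduced the paper's tallies here), so swap in the actual switched-off/left-on numbers from the data sheet before drawing any conclusions:

# Minimal sketch of a 2x2 contingency comparison.
# The counts are HYPOTHETICAL placeholders, not the paper's actual data.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: functional robot, chirpy robot
# Columns: switched off, left on
table = [[20, 1],
         [18, 3]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-squared: chi2 = {chi2:.2f}, p = {p_chi2:.3f}")
print(f"Fisher exact: p = {p_fisher:.3f}")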
> people took longer to switch them off in the function + objection condition than chirpy + objection.
So another parallel with Milgram then - people hesitated and objected then as well before (albeit reluctantly) doing as 'suggested' anyway.
I'd want to see the study replicated, only this time with further test groups:
The subjects are not told to interact with the power in any way at all and are left to turn it off/leave it on according to whim.
Two groups are studied with starting conditions such that the 'robot' is either already powered on at the start of the experiment or not - the latter group is not explicitly told to power it on, if it is off, but left to figure out that their interaction with the 'robot' will probably require it to be powered on.
This will also test to see whether those who needed to power the thing on first are more/less likely to power it off afterwards than those who feel it not their place to turn it off if it was already on when they got there (we are socialised to 'leave things as we found them' after all and this needs to be factored into the experimental design).
The experiment is repeated with two other groups, only this time the second group is explicitly told that it should power the robot on if necessary (but not that it should, or even simply may, turn it off again afterwards), thus examining whether explicit mention of the status of the power influences the subjects' behaviour.
Cross-correlate the data from all three groups and we might have something free of Milgram's malign influence, as it were - Naughty Teddy sanitising the results once again, so to speak.
So you give us a questionnaire, but it was missing an important qualifier - "Why?"
Why were we told we could turn off the robot? That would, of course, influence our decision.
In the questionnaire, I answered "off" to all four, as I was simply asked "what would you do". I would turn it off to save the battery as, when the children come home, I am sure they will need it as fully charged as possible, freeing me up to do my own thing ... Yaay! Robo Nanny!