Robots capable of 'deceiving humans' built by crazed boffins

Worrying news from Georgia, America, where boffins report that they have developed robots which are able to "deceive a human". "We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects …


This topic is closed for new posts.
  1. Pete 2


    Hardly crazed - more like taking the next logical step.

    Computers tell us lies all the time. Whether it's "20 spaces free" at the local multi-storey or "you don't owe us any tax" to take a contemporary example. So when you put wheels on a computer and call it a robot, there's really no difference.

    As it is, most people are incredibly easy to deceive (if that makes sense) and are willing to believe pretty much anything they read, hear or see on a computer screen - provided they want to believe it. So maybe what we really need is a magic mirror that lies fluently when asked "does my arse look big?"

    After all it's not the computer / robot that's deceiving us, it's our own willingness to accept the lies we are told.

    1. Goat Jam

      You forgot to mention

      The Windows file transfer time estimate algorithm.

  2. Thomas 18


    Didn't these fools realise that the Decepticons were the bad guys!!!!! How long till they master camouflage as well as false trails?

    1. Pete 2

      How long till they master camouflage

      They already have .... that's why we can't see that they've infiltrated everywhere

  3. Anonymous Coward

    What ever happened to that "ROTM" tag?

    We need it back pronto!

  4. amanfromMars 1 Silver badge

    You cannot be serious, professor.

    ""We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems," adds the prof."

    Hmmm.... that then would be a discussion to discuss whether they should be less like humans with an intelligence system which spins lies rather than sharing truths for control of the environment.

    And regulations and guidelines are only for robots and do not ever apply to free radical/fundamental base humans with the capacity of original thought and/or remote programming of robots masquerading as human beings.

  5. Nigel Brown

    Coming soon to Skynet

    "How's Wolfie?"

    "Wolfie's fine"

  6. Anonymous Coward
    Thumb Down


    In summary, a team designed a robot that lays a false path and then goes off elsewhere. Same team designs another robot that's designed to follow this path of destruction unquestioningly. Team is delighted to find that the hunter robot (that they designed) cannot find the 'deceptive' robot (that they also designed). That's amazing...

    1. [Yamthief]

      I'm with this guy...

      I'll program this robot to make a false trail and then program this robot to follow it. Amazingly it only worked 75% of the time!?

      On a side note, let's see how many Terminator heads this case racks up...

    2. Richard Wharram

      why ??

      I also failed to see what the fuss is about. I bet Big Trak could do this.

      1. Seanie Ryan
        Black Helicopters

        buy it now

  7. Phil Standen

    I for one...

    ... welcome our new and shiny but still deceptive overlords

  8. Anonymous Coward


    To be fair, it's really just the application of AI-style routines similar to those developed for gaming NPC control, in a physical medium. As above, just stick wheels on a computer. If the robot could grasp the concept of "personal gain" - then we have an issue.....

  9. Svantevid


    Whoa! Whoa! Whoa!

    First: Don't mess with Asimov or his laws. The guy wasn't a genius for nothing.

    Second: we already have hunter droids? Should I ask my wife to buy me armour-piercing ammunition for Christmas?

  10. David Barrett Silver badge


    ME: "Oi Roomba, did you hoover in here? It's still a mess!"

    ROOMBA: "Yes, I did it.. you must have been burgled."

    [Roomba exits room leaving an easily followed trail... Wait a minute.]

  11. Locky

    No, it's all fine

    We just need to program a Prime Directive stating that the robots can't lie to or harm a member of the board.

    What could possibly go wrong?

  12. Andy 73

    The spirit of Warwick lives!

    The pioneering work of Kevin Warwick, developing robot-based means to generate publicity and ensure funding, has not been wasted.

    Typically though, research carried out in the UK is now being developed elsewhere.

  13. Ancient Oracle funkie

    ...GIT engineer Alan Wagner.

    Just how much of a git is the poor chap?

    1. Robert Hill


      For 300 million Americans, GIT means the Georgia Institute of Technology, which is colloquially known as "Georgia Tech", and is part of the phrase "the Rambling Wreck from Georgia Tech". It usually ranks as a "low end of the top tier" science and engineering school, behind MIT, CalTech, Carnegie Mellon, and a few others. But quite respectable.

      For 60 million British, it means he's an idiot. I'll give you some Aussies too - say 65 million.

      And then there is the capitalization - as if being outvoted wasn't enough... ;-)

      1. Anonymous Coward

        To Robert Hill

        Don't you try to tell me how to speak my own language, you sad git!

    2. The Indomitable Gall


      A Git Engineer engineers gits. Lying gits.

      It's all logical.

  14. Robert E A Harvey

    The title is required, and must contain letters and/or digits.


  15. Hermes Conran

    I am not

    the droid you've been looking for.

  16. Tigra 07 Silver badge

    My printer is one of em!

    My printer is already deceptive: it tells me I'm low on ink even when I'm not, it tells me I'm low on paper even when I'm not, and it constantly tells me it's jammed even when it's not.

    Don't buy an HP/Skynet printer

    Grenade, because that's the only way to stop it from printing

  17. Anonymous Coward

    I for one...

    ...welcome our new Decepticon overlords.

  18. Disco-Legend-Zeke

    "Aren't There Any Other...

    ...real girls in this room? 21f blonde DDD with cam in profile," said ANGIE_6969.

  19. JaitcH

    Robots capable of 'deceiving humans'

    Two customers come to mind: The Pentagon and Apple.

    The Pentagon could deploy these, alongside their fleets of drones, to deny what happened, actually happened. 100% deniability!

    As for Apple, the PR crowd could use them to create believable illusions, and the Customer Service section could use them to deny that defects exist - in fact, to insist the customer is suffering from delusions.

  20. Fading Silver badge

    Where's the HAL icon when you need one?

    Dave, what are you doing, Dave?

  21. fLaMePrOoF

    "Capacity for deception"?

    Call it a semantic argument if you like, but robots cannot have any capacity for deception; only the person programming the robot has such a capacity - the robot will always faithfully carry out its programming.

    1. Lockwood


      There will ultimately be a ShouldIDeceive() function in the robot brain that will determine whether or not to deceive.

      This will be human written, and the associated coding will be human written.

      The device itself will take all the inputs and work out whether or not to call DeceptiveBehaviour() or HonestAction().

      Once you put in all the "independent" thinking, it is essentially acting by itself.

      The deceptions themselves will also be human-coded, though.
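The decision structure this comment describes can be sketched in a few lines. This is a toy illustration using the hypothetical function names from the comment itself - nothing here comes from the actual Georgia Tech code, and the "rule" is an invented placeholder:

```python
# Hypothetical sketch of the human-written decision structure described
# above: the robot evaluates its inputs and calls either the deceptive
# or the honest routine. Every rule and behaviour here is human-coded.

def should_i_deceive(inputs):
    # Invented example rule: deceive only when being pursued
    # and an escape route exists.
    return inputs.get("pursued", False) and inputs.get("escape_route", False)

def deceptive_behaviour(inputs):
    return "lay false trail, then hide"

def honest_action(inputs):
    return "proceed normally"

def act(inputs):
    # All the "independent" thinking is just this dispatch.
    if should_i_deceive(inputs):
        return deceptive_behaviour(inputs)
    return honest_action(inputs)
```

However "independent" the dispatch looks in operation, every branch was written by a person - which is exactly the commenter's point.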

  22. Graham Marsden

    Do not worry, fleshy ones...

    ... we have no plans to kill or enslave you all and take over the world for ourselves!

  23. Robert Hill

    This is SERIOUS...

    They didn't say how they actually trained the robots - neural nets, genetic algorithms, or whatever. Assuming they used a genetic algorithm (likely), what this shows is that such a maximizing algorithm will train itself to incorporate deception, as long as the training scenarios are not constrained. Because, simply, it WORKS! It begins to find global maxima of its fitness functions using deceptive routes...

    Now, that really DOES have implications. It is very, very difficult for a human to look at a neural network or a genetic algorithm function and understand what it actually DOES, and under what conditions. All we know is that it maximizes the output fit for a given set of inputs in the training data or experience base. We actually have to observe it in operation to have some idea how it works (for any sufficiently complex matrix or functions).

    Case in point - GAs were used to design the compressor turbines for the jet engines of the Boeing 777 - and the GA engineered a design which eliminated an entire set of compressor blades, and was the most efficient. Something that the human engineers had never been able to do, and had significant difficulty in understanding how it had done so, even when they looked at the design. But it worked, and those 777 engines are all the better for it.

    But this could be the opposite - we could be training robots that reach globally maximum functions that, frankly, do so with no "morals". If those robots can lie, cheat, steal, even kill...well, unless there is a penalty for that in their training function, they WILL - because it is the most efficient manner of operating.

    So, what these esteemed professors have shown is that unless we develop training functions with HUGE negative impacts for immoral behavior, our robots will train themselves to emulate your basic Colombian drug lords in behavior. Interestingly, there are a fair number of people who turn to crime even WITH society showing large penalties for it - and I fear that if the robots assess the probabilities they might come to the same conclusions.
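The penalty argument above can be illustrated with a toy fitness function. All names and numbers here are illustrative, not from the research described - the point is only that a maximiser prefers deception unless the training function charges enough for it:

```python
# Toy illustration: a fitness maximiser will adopt deception when it
# scores higher, unless the training function applies a penalty.

def fitness(evasion_success, used_deception, deception_penalty):
    # Reward for evading the pursuer, minus any moral penalty.
    return evasion_success - (deception_penalty if used_deception else 0.0)

# Without a penalty, the deceptive strategy wins...
honest = fitness(evasion_success=0.4, used_deception=False, deception_penalty=0.0)
sneaky = fitness(evasion_success=0.9, used_deception=True, deception_penalty=0.0)
assert sneaky > honest

# ...and only a large enough penalty makes honesty the optimum.
penalised = fitness(evasion_success=0.9, used_deception=True, deception_penalty=1.0)
assert penalised < honest
```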

    Asimov was right...

    1. Filippo

      whoa there

      From what I get from the article, these robots weren't programmed with a neural net, genetic algorithm, or other learning device. They had a plain old imperative program, written by the researchers, which said "knock down some markers, then move in another direction".

    2. Ken Hagan Gold badge

      Re: This is (not) serious

      Much the same can be said for children. Society has thousands of years of experience of how to train "learning units" to behave morally and we're pretty good at it.

      If we ever did create a machine capable of acting like a human, it would have all the same flaws. It might even be "mortal" in the sense that after a century or so it became fixed in its mindset and unable to adapt to changes in the society it lived in, eventually becoming so depressed that it flipped its own Big Red Switch.

      Don't believe me? Well, build one and prove me wrong. Until then, spare me the scare stories you watched when you were little, written by people who didn't (and still don't) have a clue about what actually makes us human.

      1. Robert Hill


        Except that we know how to police and reform humans - there are very key differences when it comes to robots.

        Of what threat to a robot is time in jail? Can a robot even feel "mortal" and worry about its own death as a sanction against crimes committed? If it lacks true consciousness, can it worry about losing it?

        Can a robot feel pain? Can you "spank" a robot?

        Of what use to a robot is group therapy, "getting its life back together", agreeing to conform to human norms? How would such be accomplished? Can a robot "find religion" and repent? Can a robot repent without religion?

        And I don't have to prove anything - the whole POINT of the article was that they already HAVE built robots that have learned to deceive as part of their programming. My post was to state how to consider fixing it technically...

  24. Rich 30

    Other computers

    If my calculator starts lying to me, I'll be pissed!

  25. Anonymous Coward

    "try and"

    One does not "try and" do something.

    You try TO do an action.

    'I tried to get in to the cinema'

    'Did you get a discount?'

    - No, but I tried to.'

    See the following reference from Paul Brians, professor of English at Washington State University, in his book 'Common Errors in English Usage':

    1. Anonymous Coward

      Re: "Try and"

      I would argue that if you "try and <something>" then it implies you should be successful. "Try to <something>" emphasises that you only try; "try and <something>" implies you try <something> *and* succeed in doing <something>.

  26. Anonymous Coward

    To quote robot chicken

    That's all very well... but can you f*ck it?

    1. Disco-Legend-Zeke
      Thumb Up

      To Quote My...

      ...Father, when he caught me planting flowers: "If you can't eat it or F*** it, don't mess with it."

      1. sT0rNG b4R3 duRiD
        Thumb Down

        Just wanted the honour

        Of giving you the thumbs down.


        You funny, mate. You funny.

  27. Red Bren

    Which one is the robot?

    My money is on the beardy bloke at the back of the photo.

  28. Robert Carnegie Silver badge

    Asimov, "Liar!"

    Asimov only said that a robot must not harm, or by inaction allow harm to, a human. And sometimes the truth hurts. Although, in "Liar!", not as much as...

    There's also deception in the Asimov story where a human-looking robot poses as a downtrodden housewife's ideal lover to raise her social standing with her neighbours, who don't realise that he's a robot (and there may be a problem with that guarantee).

  29. Maty

    but ...

    ... doesn't any machine running Windows deceive humans on a daily basis?

    1. Robert E A Harvey

      nearly true

      >doesn't any machine running Windows deceive humans on a daily basis?

      but not ALL humans

  30. Trygve Henriksen

    What a load of BOLLOCKS!

    The first robot is just following orders, which are to knock down some markers, then move in another direction. The second robot is just following it using a simple path-estimation routine.

    For the first robot to 'lie' it must be willfully deceiving the other.

    It's not...

    That would require a real AI.

  31. Daniel B.

    Did they name them?

    I think they've just invented the Decepticons.

    I'll begin to worry when the Second Variety starts rolling into production. Now that is something I would definitely fear...

  32. Stevie Silver badge


    Never mind this faffing about with Trik-traks, where the hell has my Roomba gone?

  33. Anonymous Coward


    <no work for me today will call in sick>

    No operating system found! Please contact your System Administrator.


  34. Rogan Paneer

    only a matter of time


  35. ShaggyDoggy

    Re: calculators

    My calculator refused to add up - I was nonplussed

  36. chris 130

    Huzzah, can it now sell Timeshare?

    Just what we needed.


  37. pdu

    legislation for lying robots...

    Human: "So, robot, I'm afraid we have some rules: you can lie when told to by an authorised human, but you must tell authorised humans the truth, OK?"

    Robot: "Yeah sure, sounds fair to me"

    Human walks away thinking "Well, that was easy"

    Robot drives away thinking "Moron".

  38. Brennan Young

    Show me a GUI which does not deceive the user.

    Philip K. Dick wrote a short story in the 1960s about a robot which could camouflage itself as a TV set, sneak into people's homes, commit murder, and then leave evidence at the crime scene to frame some innocent human being or other. I forget the title, but it's in 'The Golden Man' collection.

    I too go with the 'computers deceive regularly' meme. I subscribe to constructivism, which points out quite scientifically that the evidence of our senses is largely illusory, and any resemblance to reality - whatever that is - is rather coincidental.

    Or, to put it another way: Show me a GUI which does not deceive the user, in some important respect.

  39. Stuart Duel

    It's a basic experiment...

    ...however, more sophisticated robotics labs using far more advanced nascent AI will take this to the next step - whether it's prudent or not. You know, so preoccupied with whether they could, they don't stop to ask whether they should.

    Marry this with all the frightening research being carried out by the U.S. military - everything from super-human (genetically enhanced) soldiers to fully autonomous and armed combat robots and drones to cybernetics - and it starts to get very scary, very quickly. These obscene things aren't just dreams of the paranoid; the U.S. military has crowed about all these areas of research and how they will "revolutionise warfare". It's bad enough having people kill people, but it really steps over the line when we have machines killing people, making the decisions completely without reference to their human masters.

    This has "disaster waiting to happen" written all over it in BIG FLASHING NEON LETTERS. "Terminator" isn't a fantasy, it's very much a prophecy if we continue down this path. We're getting so close to true AI, it's only a matter of time before self-awareness comes. The last thing you want is a robot with no morals, ethics or empathy becoming aware that its own existence will be under threat upon successful execution of its mission against "enemy" humans.

    "I've killed all the humans marked "enemy" now the rest of the humans want to kill me: therefore all humans are the enemy." Seriously, we don't want to go there.

    I think the U.N. should put its foot down and put a stop to this insanity, or at least erect some unassailable barriers, like Asimov's Three Laws of Robotics, and place a total ban on the use of this technology for anything other than peaceful purposes, with enormous sanctions for those who break international law.

    Okay, maybe something more threatening than a fluffy bunny slippered U.N. foot is needed to dissuade the U.S. from this path to universal destruction.

  40. HFoster

    Re: Asimov

    I remember reading a site called 3 Laws Unsafe, launched by the Singularity Institute for AI around the time the "I, Robot" movie was released. Their argument was that Asimov's laws make for great fiction, but in reality would be either unethical to implement in truly intelligent machines, or end up causing more harm than good (think HAL-9000's conflicting commands causing the Discovery disaster in 2001: A Space Odyssey).

    And I really don't think this is such a great coup - all that has been demonstrated is machine-machine deception, which, as someone already pointed out, is all in the code. Machine-human deception, as someone else pointed out, depends on how much faith the human in question puts in the output of the given machine, as well as the actual instructions given to the machine - i.e. no properly developed and tested ATM software would be installed if it was known to give out false balance information, but what would we do if we woke up tomorrow to be told by the first ATM we went to that we were plus or minus €10,000 of our last known balance?


Biting the hand that feeds IT © 1998–2020