Robots capable of 'deceiving humans' built by crazed boffins

Worrying news from Georgia, America, where boffins report that they have developed robots which are able to "deceive a human". "We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects …


This topic is closed for new posts.


  1. Pete 2 Silver badge


    Hardly crazed - more like taking the next logical step.

    Computers tell us lies all the time. Whether it's "20 spaces free" at the local multi-storey or "you don't owe us any tax" to take a contemporary example. So when you put wheels on a computer and call it a robot, there's really no difference.

    As it is, most people are incredibly easy to deceive (if that makes sense) and are willing to believe pretty much anything they read, hear or see on a computer screen - provided they want to believe it. So maybe what we really need is a magic mirror that lies fluently when asked "does my arse look big?"

    After all it's not the computer / robot that's deceiving us, it's our own willingness to accept the lies we are told.

    1. Goat Jam

      You forgot to mention

      The Windows file transfer time estimate algorithm.

  2. Thomas 18


    Didn't these fools realise that the Decepticons were the bad guys!!!!! How long till they master camouflage as well as false trails?

    1. Pete 2 Silver badge

      How long till they master camouflage

      They already have .... that's why we can't see that they've infiltrated everywhere

  3. Anonymous Coward

    What ever happened to that "ROTM" tag?

    We need it back pronto!

  4. amanfromMars 1 Silver badge

    You cannot be serious, professor.

    ""We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems," adds the prof."

    Hmmm.... that then would be a discussion to discuss whether they should be less like humans with an intelligence system which spins lies rather than sharing truths for control of the environment.

    And regulations and guidelines are only for robots and do not ever apply to free radical/fundamental base humans with the capacity of original thought and/or remote programming of robots masquerading as human beings.

  5. Nigel Brown

    Coming soon to Skynet

    "How's Wolfie?"

    "Wolfie's fine"

  6. Anonymous Coward
    Thumb Down


    In summary, a team designed a robot that lays a false path and then goes off elsewhere. Same team designs another robot that's designed to follow this path of destruction unquestioningly. Team is delighted to find that the hunter robot (that they designed) cannot find the 'deceptive' robot (that they also designed). That's amazing...

    1. [Yamthief]

      I'm with this guy...

      I'll program this robot to make a false trail and then program this robot to follow it. Amazingly it only worked 75% of the time!?

      On a side note, let's see how many Terminator heads this case racks up...

    2. Richard Wharram

      why ??

      I also failed to see what the fuss is about. I bet Big Trak could do this.

      1. Seanie Ryan
        Black Helicopters

        buy it now

  7. Phil Standen

    I for one...

    ... welcome our new and shiny but still deceptive overlords

  8. Anonymous Coward


    to be fair, it's really just the application of AI-style routines similar to those developed for gaming NPC control, in a physical medium. As above, just stick wheels on a computer. If the robot could grasp the concept of "personal gain" - then we have an issue.....

  9. Svantevid


    Whoa! Whoa! Whoa!

    First: Don't mess with Asimov or his laws. The guy wasn't a genius for nothing.

    Second: we already have hunter droids? Should I ask my wife to buy me armour-piercing ammunition for Christmas?

  10. David Barrett


    ME: "Oi Roomba, did you hoover in here? It's still a mess!"

    ROOMBA: "Yes, I did it.. you must have been burgled."

    [Roomba exits room leaving an easily followed trail... Wait a minute.]

  11. Locky

    No, it's all fine

    We just need to program a Pride Directive stating that the robots can't lie or harm a member of the board.

    What could possibly go wrong?

  12. Andy 73 Silver badge

    The spirit of Warwick lives!

    The pioneering work of Kevin Warwick developing robot based means to generate publicity and ensure funding has not been wasted.

    Typically though, research carried out in the UK is now being developed elsewhere.

  13. Ancient Oracle funkie

    ...GIT engineer Alan Wagner.

    Just how much of a git is the poor chap?

    1. Robert Hill


      For 300 million Americans, GIT means the Georgia Institute of Technology, which is colloquially known as "Georgia Tech", and is part of the phrase "the Rambling Wreck from Georgia Tech". It usually ranks as a "low end of the top tier" science and engineering school, behind MIT, CalTech, Carnegie Mellon, and a few others. But quite respectable.

      For 60 million British, it means he's an idiot. I'll give you some Aussies too - say 65 million.

      And then there is the capitalization, as if being outvoted wasn't enough... ;-)

      1. Anonymous Coward

        To Robert Hill

        Don't you try to tell me how to speak my own language, you sad git!

    2. The Indomitable Gall


      A Git Engineer engineers gits. Lying gits.

      It's all logical.

  14. Robert E A Harvey

    The title is required, and must contain letters and/or digits.


  15. Hermes Conran

    I am not

    the droid you've been looking for.

  16. Tigra 07

    My printer is one of em!

    My printer is already deceptive: it tells me I'm low on ink even when I'm not, it tells me I'm low on paper even when I'm not, and it constantly tells me it's jammed even when it's not.

    Don't buy an HP/Skynet printer

    Grenade, because that's the only way to stop it from printing

  17. Anonymous Coward

    I for one...

    ...welcome our new Decepticon overlords.

  18. Disco-Legend-Zeke

    "Aren't There Any Other...

    ...real girls in this room? 21f blonde DDD with cam in profile," said ANGIE_6969.

  19. JaitcH

    Robots capable of 'deceiving humans'

    Two customers come to mind: The Pentagon and Apple.

    The Pentagon could deploy these, alongside their fleets of drones, to deny that what happened actually happened. 100% deniability!

    As for Apple, the PR crowd could use them to create believable illusions, and Customer Service could use them to insist that defects which actually exist are just the customer suffering from delusions.

  20. Fading

    Where's the HAL icon when you need one?

    Dave, what are you doing, Dave?

  21. fLaMePrOoF

    "Capacity for deception"?

    Call it a semantic argument if you like, but robots cannot have any capacity for deception; only the person programming the robot has such a capacity - the robot will always faithfully carry out its programming.

    1. Lockwood


      There will ultimately be a ShouldIDeceive() function in the robot brain that will determine whether or not to deceive.

      This will be human written, and the associated coding will be human written.

      The device itself will take all the inputs and work out whether or not to call DeceptiveBehaviour() or HonestAction().

      Once you put in all the "independent" thinking, it is essentially acting by itself.

      The deceptions themselves will also be human-coded, though.
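The decision flow described in this comment could be sketched roughly as follows. To be clear, `should_i_deceive`, `deceptive_behaviour` and `honest_action` are the commenter's hypothetical function names, not any real robot API, and the decision rule here is invented purely for illustration:

```python
# Hypothetical sketch of the human-written decision logic described above.
# None of these functions correspond to a real robot framework.

def deceptive_behaviour(state):
    """The deceptive branch: lay a false trail, then go elsewhere."""
    return {"action": "lay_false_trail", "then": "hide_elsewhere"}

def honest_action(state):
    """The honest branch: behave transparently."""
    return {"action": "move_directly", "then": "wait"}

def should_i_deceive(state):
    """Human-coded rule (invented example): deceive only when a pursuer
    is detected and deception is expected to pay off."""
    return state.get("pursuer_nearby", False) and state.get("deception_payoff", 0) > 0

def decide(state):
    """Take all the inputs and choose which branch to call."""
    if should_i_deceive(state):
        return deceptive_behaviour(state)
    return honest_action(state)
```

The point the comment makes survives the sketch: every branch, including the "independent" choice between them, is human-written, yet once the inputs drive the choice the device is effectively acting by itself.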

  22. Graham Marsden

    Do not worry, fleshy ones...

    ... we have no plans to kill or enslave you all and take over the world for ourselves!

  23. Robert Hill

    This is SERIOUS...

    They didn't say how they actually trained the robots - neural nets, genetic algorithms, or whatever. Assuming they used a genetic algorithm (likely), what this shows is that such a maximizing algorithm will train itself to incorporate deception, as long as the training scenarios are not constrained. Because, simply, it WORKS! It begins to find global maxima of its fitness functions using deceptive routes...

    Now, that really DOES have implications. It is very, very difficult for a human to look at a neural network or a genetic algorithm function and understand what it actually DOES, and under what conditions. All we know is that it maximizes the output fit for a given set of inputs in the training data or experience base. We actually have to observe it in operation to have some idea how it works (for any sufficiently complex matrix or functions).

    Case in point - GAs were used to design the compressor turbines for the jet engines of the Boeing 777 - and the GA engineered a design which eliminated an entire set of compressor blades, and was the most efficient. Something that the human engineers had never been able to do, and had significant difficulty in understanding how it had done so, even when they looked at the design. But it worked, and those 777 engines are all the better for it.

    But this could be the opposite - we could be training robots that reach globally maximum functions that, frankly, do so with no "morals". If those robots can lie, cheat, steal, even kill...well, unless there is a penalty for that in their training function, they WILL - because it is the most efficient manner of operating.

    So, what these esteemed professors have shown is that unless we develop training functions with HUGE negative impacts for immoral behavior, our robots will train themselves to emulate your basic Colombian drug lords in behavior. Interestingly, there are a fair number of people who turn to crime even WITH society showing large penalties for it - and I fear that if the robots assess the probabilities they might come to the same conclusions.

    Asimov was right...
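The "training functions with HUGE negative impacts for immoral behavior" idea from this comment can be shown with a toy fitness function. This is not the researchers' actual setup; the behaviours, scores and penalty weight are all invented for illustration:

```python
# Toy fitness function: reward task success, heavily penalise deception.
# All values here are invented to illustrate the comment's argument.

DECEPTION_PENALTY = 1000.0  # the "HUGE negative impact" for immoral behaviour

def fitness(outcome):
    """Score one candidate behaviour from a hypothetical GA run."""
    score = outcome["task_reward"]
    if outcome.get("deceived", False):
        score -= DECEPTION_PENALTY
    return score

# A deceptive strategy that slightly outperforms on the raw task...
honest = {"task_reward": 10.0, "deceived": False}
cheat  = {"task_reward": 12.0, "deceived": True}

# ...loses once the penalty is applied, so selection favours honesty.
best = max([honest, cheat], key=fitness)
```

Without the penalty term, `cheat` wins on raw reward (12 vs 10), which is exactly the comment's worry: an unconstrained maximizer adopts deception because it works.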

    1. Filippo Silver badge

      whoa there

      From what I get from the article, these robots weren't programmed with a neural net, genetic algorithm, or other learning device. They had a plain old imperative program, written by the researchers, which said "knock down some markers, then move in another direction".

    2. Ken Hagan Gold badge

      Re: This is (not) serious

      Much the same can be said for children. Society has thousands of years of experience of how to train "learning units" to behave morally and we're pretty good at it.

      If we ever did create a machine capable of acting like a human, it would have all the same flaws. It might even be "mortal" in the sense that after a century or so it became fixed in its mindset and unable to adapt to changes in the society it lived in, eventually becoming so depressed that it flipped its own Big Red Switch.

      Don't believe me? Well, build one and prove me wrong. Until then, spare me the scare stories you watched when you were little, written by people who hadn't (and still don't) have a clue about what actually makes us human.

      1. Robert Hill


        Except that we know how to police and reform humans - there are very key differences when it comes to robots.

        Of what threat to a robot is time in jail? Can a robot even feel "mortal" and worry about its own death as a sanction against crimes committed? If it lacks true consciousness, can it worry about losing it?

        Can a robot feel pain? Can you "spank" a robot?

        Of what use to a robot is group therapy, "getting its life back together", agreeing to conform to human norms? How would such be accomplished? Can a robot "find religion" and repent? Can a robot repent without religion?

        And I don't have to prove anything - the whole POINT of the article was that they already HAVE built robots that have learned to deceive as part of their programming. My post was to state how to consider fixing it technically...

  24. Rich 30

    Other computers

    If my calculator starts lying to me, I'll be pissed!

  25. Anonymous Coward

    "try and"

    One does not "try and" do something.

    You try TO do an action.

    'I tried to get in to the cinema'

    'Did you get a discount?

    - No, but I tried to.'

    See the following reference from Paul Brians, professor of English at Washington State University, in his book 'Common Errors in English Usage':

    1. Anonymous Coward

      Re: "Try and"

      I would argue that if you "try and <something>" then it implies you should be successful. "Try to <something>" emphasises that you should only try; "try and <something>" implies you should try <something> *and* be successful in doing <something>.

  26. Anonymous Coward

    To quote robot chicken

    That's all very well... but can you f*ck it?

    1. Disco-Legend-Zeke
      Thumb Up

      To Quote My...

      ...Father, when he caught me planting flowers, "If you can't eat it or F*** it, don't mess with it."

      1. sT0rNG b4R3 duRiD
        Thumb Down

        Just wanted the honour

        Of giving you the thumbs down.


        You funny, mate. You funny.

  27. Red Bren

    Which one is the robot?

    My money is on the beardy bloke at the back of the photo.

  28. Robert Carnegie Silver badge

    Asimov, "Liar!"

    Asimov only said that a robot must not harm, or by inaction allow harm to, a human. And sometimes the truth hurts. Although, in "Liar!", not as much as...

    There's also deception in the Asimov story described here,

    where a human-looking robot poses as a downtrodden housewife's ideal lover to raise her social standing with her neighbours, who don't realise that he's a robot (and there may be a problem with that guarantee).

  29. Maty

    but ...

    ... doesn't any machine running Windows deceive humans on a daily basis?

    1. Robert E A Harvey

      nearly true

      >doesn't any machine running Windows deceive humans on a daily basis?

      but not ALL humans

  30. Trygve Henriksen

    What a load of BOLLOCKS!

    The first robot is just following orders, which are to knock down some markers, then move in another direction. The second robot is just following it using a simple path-estimation routine.

    For the first robot to 'lie' it must be willfully deceiving the other.

    It's not...

    That would require a real AI.
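The two scripted behaviours this comment describes amount to something like the following. This is pure illustration, not the Georgia Tech code; every name and value is invented:

```python
# Illustration of the two scripted behaviours described above:
# robot A knocks down markers along a decoy route, then turns away;
# robot B naively follows the trail of knocked-down markers.

def lay_false_trail(false_path, real_destination):
    """Robot A: topple markers along the decoy route, then go elsewhere."""
    knocked = list(false_path)          # markers knocked down on the decoy route
    return knocked, real_destination    # trail left behind, actual hiding spot

def follow_trail(knocked_markers):
    """Robot B: simple path estimation - head for the last toppled marker."""
    return knocked_markers[-1] if knocked_markers else None

trail, hideout = lay_false_trail(["m1", "m2", "m3"], "behind_curtain")
guess = follow_trail(trail)
# The hunter ends up at the last marker, not the hiding spot:
# deceived by construction, as the comment says - no wilful intent required.
```

Which is the comment's point: the "deception" falls out of two fixed scripts interacting, with no intent anywhere in the loop.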

  31. Daniel B.

    Did they name them?

    I think they've just invented the Decepticons.

    I'll begin to worry when the Second Variety starts rolling into production. Now that is something I would definitely fear...

  32. Stevie


    Never mind this faffing about with Trik-traks, where the hell has my Roomba gone?

  33. Anonymous Coward


    <no work for me today will call in sick>

    No operating system found! Please contact your System Administrator.


  34. Rogan Paneer

    only a matter of time


