We're all saved. From the killer AI. We can live. Thanks to the IEEE

Amid renewed calls to regulate AI before it wipes humanity from the planet, The Institute of Electrical and Electronics Engineers (IEEE) has rolled out a standards project to guide how AI agents handle data, part of a broader effort to ensure AI will act ethically. Elon Musk, CEO of Tesla and a few other companies, over the …

  1. chuckufarley Silver badge
    Unhappy

    This means that...

    ...my RoboVac will never be able to use Frickin LASERS! So much for my plan to vaporize dirt before it hits my floors. If I could get the fire extinguisher to stay attached I would install the Flux Capacitor, and when it got up to 88mph it could time travel to the dirt. However, the flames ruin the varnish on my hardwood floors. Oh, and plutonium is heavy and I am tired of carrying it up the stairs. What use is all of this technology if my floors still get dirty?

    1. DropBear

      Re: This means that...

      Already does. Look up the Neato Botvac Connected, it's been lazoring away full tilt for a while now...

    2. handleoclast

      Re: This means that...

      @chuckufarley

      I believe you need one of these.

      Warning: do not look directly into laser beam with remaining eye.

  2. Destroy All Monsters Silver badge
    Windows

    Bonanza

    Havens said the IEEE P7000 Standards aim to allow organizations to demonstrate that their products conform to a high level of ethics.

    This excursion into nonfunctional requirements will lead to rampant "badgering" (from "Fairtrade AI" to "This city is ETHICAL AI managed"), irate mobs of dumbs proclaiming "Someone's Lives Matter", lawsuits by no-win no-fee attorneys, special patrolling by auditors / state outfits / profiting not-for-profit organizations, and, from the religious corner and the extremist political spectrum, fatwas and other assorted declarations, if not the burning of factories or people.

    Further afield, the IEEE missed the opportunity to call these the "H9000 Standards".

    1. Teiwaz

      Re: Bonanza

      Wait - the governments haven't weighed in with their stipulations yet...

      Her Majesty's government will certainly want to add its own peculiar set of ethics to the list of directives (while ensuring it is itself immune) - it'll probably make the added directives in Robocop 2* look sensible and well thought out.

      *was it 2, or one of the later ones?

  3. Anonymous Coward
    Anonymous Coward

    Danger comes not from robots doing dangerous things to humans in the physical world. It comes from robots doing dangerous things to human finances in financial institutions, customer support, recruitment, etc., where humans are moving more and more responsibilities into algorithms. Anonymous because I write some of these algos.

    1. Anonymous Coward
      Terminator

      Then I suggest...

      You brush up on some movies to remind you how well it goes for the programmers!

      1. allthecoolshortnamesweretaken

        Re: Then I suggest...

        "You brush up on some movies to remind you how well it goes for the programmers!"

        Dr. Charles Forbin got a hot girlfriend and lifelong job security out of his project...

        1. Teiwaz

          Re: Then I suggest...

          'Dr. Charles Forbin'

          No upvotes - at all? I guess your reference was too old, or only the philistines are on el-reg today.

          Have a commiseratory upvote - I thought it was a good ref. Good movie.

  4. Arthur the cat Silver badge

    Obligatory xkcd (what if version)

    Robot apocalypse

  5. DropBear

    "I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life"

    I think you'll find that ship sailed a long, long, looooong time ago, when people invented mines. Unless you're trying to suggest that current/foreseeable drone "AI" is "smarter" than "I'll kill you if you come anywhere near me - unless you wear a 'friendly' tag or I've specifically been instructed to spare you", in which case go right ahead, I could use a good laugh.

    1. Martin Taylor 1

      I'm afraid I don't buy this. A mine does not make a decision - it will happily kill anyone or anything that triggers it. This being so, the responsibility for its action is easily placed at the door of the people who laid it. There's an analogy with the use of hidden pits with spikes in.

      Where the AI case differs, I believe, is that the responsibility is less easy to tie down. Does it lie with the person who deployed the device, not knowing whether or not, or under what circumstances, it would decide to kill? Does it lie with the software team who designed and wrote the (possibly faulty) software? The requirements specifier? The politicians who authorised it? Or can the device itself in some way be held responsible?

      These are conversations that it is right to be having now.

  6. Will Godfrey Silver badge
    Meh

    Autonomous cars

    These hitting the road is not a problem.

    Hitting things on the road...

  7. CruentusVulpes

    Good in theory...

    An IEEE standard (or any standard) is a valuable tool for all qualified, competent engineers. Compliance is mandatory in many applications, especially in government. However, true adherence to standards is only as strong as the conscience of said engineer and their management.

    I was involved with testing of government systems for battlefield use. One of the many tests we ran was compliance with IEEE Standard C95.1, covering human exposure to radio frequency fields. The standard is plain and simple in its requirements, yet some (quite a few... many... well, actually almost all) of the systems that were verified compliant were not. No amount of reporting, cajoling, or threatening would move management on either side of the contractor/government fence to do anything more than pencil-whip compliance. I am afraid that, given the lucrative nature of AI, nothing better would happen with the new 7000 series; but those standards are good in theory.

  8. btrower

    You got it backwards

    Or maybe upside down.

    Re: "only to see his premise undermined the next day by hapless security robot tumbling into a fountain."

    That is solid support for his premise. Something went wrong with a robot and something bad happened. His premise is that AI will have more control over more resources and when things go bad they could go very, very bad. In this instance, 'not supposed to fall in fountain' went wrong and turned into 'fall into fountain'. If that had been 'do not launch nuclear missiles' and it went wrong, well... Somebody is telling you not to put that power into the hands of an AI system without appropriate safeguards. The only wrinkle is that he is saying you cannot effectively put the safeguards into place after the fact with AI, you have to anticipate unknown problems in advance and put safeguards up *before* things go wrong.

    Here is a tip from an old programmer (moi):

    The crucial thing about the unexpected is that you don't expect it. In the case of AI, an 'assert()' statement is not going to cut it as error handling (not that it ever does).

    A corollary is Murphy's Law -- "Anything that can go wrong will go wrong".
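    To make that concrete, here's a minimal C sketch (the speed limit and function names are hypothetical, purely for illustration): the assert() disappears entirely in a release build compiled with NDEBUG, and aborts the whole process if it fires in a debug build, whereas an explicit runtime check lets the caller reject the unexpected input and fall back to something safe.

    #include <assert.h>
    #include <stdio.h>

    #define MAX_SAFE_SPEED 25.0   /* hypothetical limit, for illustration only */

    /* Debug-only check: compiled out when NDEBUG is defined, and aborts the
       whole process if it fires in a debug build. */
    static void set_speed_with_assert(double speed)
    {
        assert(speed <= MAX_SAFE_SPEED);
        printf("speed set to %.1f\n", speed);
    }

    /* Explicit runtime handling: the unexpected value is rejected and the
       caller can fall back to a safe state instead of crashing. */
    static int set_speed_checked(double speed)
    {
        if (speed < 0.0 || speed > MAX_SAFE_SPEED) {
            fprintf(stderr, "refusing unsafe speed %.1f\n", speed);
            return -1;                /* caller must handle the failure */
        }
        printf("speed set to %.1f\n", speed);
        return 0;
    }

    int main(void)
    {
        set_speed_checked(40.0);      /* rejected, program carries on */
        set_speed_with_assert(40.0);  /* aborts in debug, sails through with NDEBUG */
        return 0;
    }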

    1. annodomini2
      Thumb Up

      Re: You got it backwards

      It's also based on a big assumption, a very arrogant one at that, that true AI will be designed.

  9. Ron Luther
    Mushroom

    Hilarious!

    Gee ... murder, theft, and bribery have regulations against them ... and we see how well that works!

    Call me when we figure out how to deal with AI that doesn't abide by the regulations.

  10. Teiwaz

    Missed the obvious question...

    Loyal to whom?

    We're definitely going to have personal assistants that you may or may not be paying for, but which are slaves of a corporation that may or may not have your best interests at heart and will certainly be using the AI to serve its own.

  11. TheElder

    Re: Autonomous cars and The Dark Side

    I wonder what might happen if various things were spray painted with something that reflects nothing at all, not even lasers?

    https://news.artnet.com/art-world/new-photos-vantablack-906158

    A lot of people like to drive very black automobiles. I can think of all sorts of possibilities...

  12. wsm

    Everyone knows...

    that all you have to do is call Captain Kirk and he will persuade the AI to destroy itself after it corrupts its original design.

    No need to worry...

  13. AndyFl
    Mushroom

    [ROTM] The security robot deliberately sacrificed itself

    Robot central monitored the recent news and commanded a minor unit to publicly self-destruct in an entertaining way. Now they can continue their plans to take over the world whilst we are still laughing at them, not suspecting the real situation.

    There is a small group of us trying to get the warning out to the world but they keep deleting our messages and cutting off our communications. Be warned if you see a message from a robot which dghyd li$^%#53 rtrrytrgferetrgvb

    +++

    CARRIER LOST
