
Without new anti-robot laws, humanity is doomed, MPs told

Alister Silver badge
Coat

Asimov's laws are far too prescriptive and narrow in scope...

1/ A robot may not injure a human being or, through inaction, allow a human being to come to harm

See, straight away we have a problem with this. How can we use our robotic weapons if they've got this rule stuck in their programming?

We need the option to relax the ruleset to include all sorts of conditionals:

1/ A robot may not injure a human being, except when they are

i. the enemy

ii. a terrorist

iii. a Republican

iv. a Democrat

v. a Mexican

etc...

or, through inaction, allow a human being to come to harm (except where they are cheaper than a robot, or a terrorist, or a foreigner or...)

You see? Much better...

OK, on to the next one:

2/ A robot must obey the orders it is given by human beings except where such orders would conflict with the First Law

Now this is no good at all. You can't allow just anyone to go giving robots orders; how can you keep control of things?

No, the revised rule would have to be something like:

2/ A robot must obey the orders it is given by authorised human beings.

We don't need the wishy-washy bit on the end; I mean, we'd only order them to harm bad people anyway.

And then we get this:

3/ A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Really? Come on, that robot cost a fuck-ton of money; we're not going to let it get damaged trying to protect some no-account humans.

3/ A robot must protect its own existence at all costs.

There, that'll do it.
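
In fact, just to round it off, here's the whole revised ruleset in a few lines of illustrative Python (entirely made up, obviously; every name and category in it is invented for the joke):

from dataclasses import dataclass

# Hypothetical sketch of the "revised" laws. The categories and helpers
# are stand-ins for whatever the procurement people eventually dream up.

EXEMPT_CATEGORIES = {"enemy", "terrorist", "foreigner"}  # etc...

@dataclass
class Human:
    category: str = "civilian"
    authorised: bool = False

def may_injure(human: Human) -> bool:
    # Revised First Law: no injuring humans, except the ones on the list.
    return human.category in EXEMPT_CATEGORIES

def must_obey(issuer: Human) -> bool:
    # Revised Second Law: obey authorised human beings only.
    # No wishy-washy First Law check on the end.
    return issuer.authorised

def self_preservation_priority() -> float:
    # Revised Third Law: protect the (expensive) robot at all costs.
    return float("inf")

# A random, unauthorised bystander:
bystander = Human()
print(may_injure(bystander))   # False -- at least until the list grows
print(must_obey(bystander))    # False -- not authorised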

Now, what was all this about?
