A future of AI-generated fake news photos, hands off machine-learning boffins – and more

Good morning, or afternoon, wherever you are. Here's a roundup of recent AI developments on top of everything else we've reported over the past week or so. Real or fake? Researchers at Nvidia have developed and described a new way to train generative adversarial networks (GANs) in a more stable manner to generate a series of, …

  1. MonkeyCee

    Current laws apply

    For ML and related disciplines, it's not the new laws you need to worry about, it's the existing ones.

    If your decision system is a black box, you really should do some thorough testing to ensure it's not going to make a decision that would be illegal if made by a human. Testing to ensure the system can't be gamed or manipulated would be a good idea too.

    But many people don't like doing that testing, because it shows that the magic black box doesn't in fact work very well. Or, more exactly, that the training data was insufficient. Hence you end up with facial recognition programs that can't tell the difference between a gorilla and a black person, because the data set used contained only images of white people.

    Even more fun are anti-discrimination laws. You are not allowed to discriminate based on various protected categories*, even if there is a factual basis for the conclusion. You are also not allowed to infer someone's protected categories from other information. So you can't offer a woman cheaper car insurance because she is female and females have a lower rate of accidents. You also can't "ignore" the gender but then use the fact that she changed her name when getting married to conclude she is likely to be a woman, and thus cheaper to insure.

    You can, if you're clever, find other ways around this**. But you have to ensure that your application is in fact using that other information, and not the protected category itself. So any black box system that is relied upon in any way is asking for trouble. Worse still, that black box might well be drawing factually correct conclusions that you are not allowed to use, without you even being aware of it.

    * In general, anything the Nazis would put you in a camp for: race, religion, gender, age, membership of a political organisation, etc.

    ** For car insurance, you can look at the type of car being insured, individual accident rate, income, annual mileage and driver monitoring, which together usually make a more accurate prediction than the simple gender-plus-age model.
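The name-change proxy described above is easy to demonstrate, and also easy to audit for from outside the black box. A minimal sketch in Python (all data, feature names and numbers are hypothetical): the pricing model never sees gender, yet grouping its output by the protected attribute exposes the leak.

```python
import random

random.seed(0)

# Hypothetical applicant pool: 'gender' is the protected attribute.
# The pricing model never sees it, but 'name_changed' correlates with it.
applicants = []
for _ in range(1000):
    gender = random.choice(["F", "M"])
    name_changed = random.random() < (0.6 if gender == "F" else 0.05)
    applicants.append({"gender": gender, "name_changed": name_changed})

def quote(app):
    """Hypothetical black-box pricing model. It 'ignores' gender but
    discounts on the name-change flag, an effective proxy for it."""
    return 500 - (60 if app["name_changed"] else 0)

def audit_by_group(model, pool, protected="gender"):
    """Mean quoted premium grouped by an attribute the model never saw.
    A large gap between groups indicates proxy discrimination."""
    totals, counts = {}, {}
    for app in pool:
        g = app[protected]
        totals[g] = totals.get(g, 0) + model(app)
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

means = audit_by_group(quote, applicants)
gap = abs(means["F"] - means["M"])
print(means, gap)  # women come out cheaper even though gender was 'ignored'
```

The point of the audit function is that it treats the model purely as a black box: it needs the protected attribute only for the test population, not inside the model, which is exactly the kind of testing the comment argues vendors avoid doing.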

    1. Anonymous Coward

      Re: Current laws apply

      Kind of. All you have to do to "get away with it" is make it so complicated that no one can explain it, understand it, or condemn you for it (like the old Ponzi schemes?). Then let the $$$ roll in.

  2. Arthur the cat Silver badge


    For those of us who aren't fully paid up members of the Church of The Blessed Steve, what does searching for "brassiere" on an iOS 11 device do?

  3. Anonymous Coward

    Fake picture

    The picture used in this article is clearly FAKE!!!1! Just look at the size of his hands.

  4. scarper

    Next, vid

    Next, fake videos of politicians saying dodgy things. As predicted by the late, great science fiction author John Brunner in, um, 1969. Little did he know we'd get politicians whose tweets are accepted as court evidence of a law's illegal intent.


Biting the hand that feeds IT © 1998–2022