YouTubers kindly asked to mark their deepfake vids as Fake Fakey McFake Fakes

YouTube is slapping a bunch of rules on AI-generated videos in the hope of curbing: the spread of faked footage masquerading as legit; deepfakes that make people appear to say or do things they never did; and tracks that rip off artists' copyrighted work. This red tape will be rolled out over the coming months and apply to …

  1. Anonymous Coward

    The opposite of what anyone should do

    People need to experience the deepfake equivalent of Newgrounds completely unmarked and begin to learn the telltale signs. If not, folks will end up as dumb as the people who thought QAnon was anything but a joke among good friends, and be unable to tell when really important content is actually being censored.

    In the heyday of online video, we didn't need to mark satire as satire and parody as parody. We just assumed critical thinking skills were common sense.

    1. Lord Elpuss Silver badge

      Re: The opposite of what anyone should do

      "In the heyday of online video..."

      Well unfortunately we're beyond that now. And while most* people might be able to/want to determine deepfakes based purely on watching them now, give it a year or two and we won't be able to as easily. So a way of reliably marking realistic fakes is essential, at least until society accepts that the phrase 'the camera never lies' is well and truly dead and buried.

      *some

      1. LybsterRoy Silver badge

        Re: The opposite of what anyone should do

        I sort of agree with you, but what happens when people come to rely on the marks and someone works out how to fake the marks saying "this is for real"?

      2. Filippo Silver badge

        Re: The opposite of what anyone should do

        >So a way of reliably marking realistic fakes is essential, at least until society accepts that the phrase 'the camera never lies' is well and truly dead and buried.

        I agree with that in principle. However, let's try to imagine exactly how the marking would work.

        1) The person who produces the fake marks it. This is easy, but if the point is defending against hostile fakes, then the person making them obviously won't mark them. You can ban them when you catch them, but catching them won't be trivial, so you won't be able to do it at scale, and they'll just make a new account.

        2) Have AI service providers mark everything they do. This is easy, but the hostile actor can circumvent it by running the model himself. Running your own model, even for video, is already doable and will be outright trivial by the time any sort of agreement between AI service providers is finalized.

        3) Bake the marking into the model during training. This is difficult; I'm not even sure it's possible, but let's assume it is. The hostile actor can circumvent this by using an unmarked model.

        3a) So let's assume we've banned the development of unmarked models. This is also difficult and probably legally unfeasible, but let's assume we somehow managed. If all of that holds, then we have a marking system that works. For a few years. After that, computing will be cheap enough that hostile actors can train their own model, circumventing the solution.

        4) Develop software that spots and marks fakes (and/or hire an army of people specifically trained to look at videos and decide whether they are fake). This leaves you in a race with hostile actors, where nobody can ever achieve a complete victory. That is, the system will always have a non-zero rate of false positives and false negatives. Worse, the hostile actors are at an advantage, because the delta between real videos and fakes is only going to grow narrower over time, not just shift around. Eventually, the fakes will be indistinguishable from real ones, and that will be it.
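        A toy sketch of why option 2 is fragile: suppose the provider hides a known signature in the least-significant bits of each frame. Everything here is made up for illustration (a "frame" is just a list of 8-bit pixel values, and the MARK pattern is arbitrary), but it shows how lossy re-encoding, or a one-line script, washes such a mark out:

```python
# Hypothetical provider-side marking: hide a fixed bit pattern in the
# least-significant bits of each pixel. Trivial to add -- and just as
# trivial to strip.

MARK = [1, 0, 1, 1, 0, 0, 1, 0]  # made-up "AI-generated" signature

def embed_mark(frame):
    """Overwrite each pixel's LSB with the next bit of MARK (repeating)."""
    return [(p & ~1) | MARK[i % len(MARK)] for i, p in enumerate(frame)]

def mark_present(frame):
    """Check whether the pixel LSBs match the expected signature."""
    return all(p & 1 == MARK[i % len(MARK)] for i, p in enumerate(frame))

frame = [200, 13, 90, 255, 0, 17, 128, 64] * 4
marked = embed_mark(frame)
assert mark_present(marked)

# A hostile actor (or plain lossy re-encoding) perturbs pixels by +/-1
# and the mark is gone:
washed = [min(255, p + 1) for p in marked]
assert not mark_present(washed)
```

        Real watermarking schemes are far more robust than this, but the underlying asymmetry is the same: the marker has to survive every transformation, while the attacker only needs to find one that destroys it.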

        I'm out of ideas. Any others?

        So, even in the best-case scenario, we simply can't mark them "until society adapts". The best we can do is buy society a few years' time, after which the fakes will be everywhere regardless of whether society is ready or not. Even that is assuming that a lot of things go well. It's more likely that there is nothing we can do to mark fakes in a reliable fashion.

        I would humbly suggest that it's better to focus our energies on figuring out how society can adapt to a world where anything you don't see with your own eyeballs might be fake. We'll have to do it soon anyway. Figuring out how to restore the concept of "trusted sources" could be a good first step.
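        One hedged sketch of what "trusted sources" could look like in practice: instead of trying to mark the fakes, the publisher signs authentic footage, and anything unsigned is treated as unverified. The HMAC shared secret below stands in for the public-key signatures (e.g. C2PA-style provenance) a real deployment would use; the key and byte strings are made up:

```python
import hashlib
import hmac

# Assumption for this sketch: the key is held only by the trusted source
# (a newsroom, say). In a real system this would be an asymmetric keypair
# so anyone can verify without being able to forge.
NEWSROOM_KEY = b"not-a-real-key"

def sign_video(video_bytes: bytes) -> str:
    """Sign the SHA-256 digest of the footage."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(NEWSROOM_KEY, digest, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    """Constant-time check that the footage matches its signature."""
    return hmac.compare_digest(sign_video(video_bytes), signature)

original = b"raw footage bytes..."
sig = sign_video(original)
assert verify_video(original, sig)             # untampered: checks out
assert not verify_video(original + b"x", sig)  # any edit breaks it
```

        The point of the inversion is that it sidesteps the arms race above: proving authenticity is a solved cryptographic problem, while proving fakeness is not.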

  2. Winkypop Silver badge
    FAIL

    Yeah right

    The adverts on YouTube* are very often for ridiculously fake items that are definitely too good to be true.

    I think YT might want to clean their own yard first.

    * Should one be unlucky enough to see them.

    1. Lord Elpuss Silver badge

      Re: Yeah right

      Funny. Most of the YT ads I see are for Just Eat, supermarkets and government PSAs.

      1. Jamie Jones Silver badge

        Re: Yeah right

        I get 99.999% scam ads too. Maybe that says something about the videos I watch? :-)

        I get the occasional Welsh Government PSA, but never any supermarkets or food services.

      2. werdsmith Silver badge

        Re: Yeah right

        I get either Bentley Mulsanne ads or ads for a phone game where the king drowns.

        Neither is of interest to me.

      3. MyffyW Silver badge

        Re: Yeah right

        The one I find most amusing is the advert for the car I already drive. Not accessories, or servicing, but encouraging me to buy another. Because you totally do that on a whim...

  3. IceC0ld

    Yeah, I don't see this one aging well :o)

  4. Peter Prof Fox

    Fraud

    The word they're looking for but daren't use is fraud. That's why the rest of us should be using it. Fraud doesn't have to mean '... for monetary gain'. Deception with malice aforethought will do.

    1. katrinab Silver badge
      Meh

      Re: Fraud

      Are cartoons “deepfakes”?

      Special effects in movies?

      The difference of course is that they are supposed to be fictional.

      1. Lord Elpuss Silver badge

        Re: Fraud

        Cartoons, no.

        Special effects; quite possibly. It's already been used to de-age Harrison Ford and Arnie.

        1. b0llchit Silver badge
          Childcatcher

          Re: Fraud

          Cartoons, yes! They can be mistaken for irony, may actually be sarcastic or could be mistaken for real. There is no better reality than a created reality to tell a real created story.

          All for the children, of course.

    2. Anonymous Coward

      Re: Fraud

      The UK got a whiff of this last week when faked audio of Sadiq Khan circulated on social media. The police reviewed it, and decided that no offence had been committed. Deceptive? Yes. Malicious? Arguably. But not enough to meet the threshold for a criminal offence.

      https://www.bbc.co.uk/news/uk-england-london-67389609

      1. katrinab Silver badge
        Meh

        Re: Fraud

        Sure, but it has always been possible to do that with a voice actor. Rory Bremner for example, though in his case, it is always clear that he is doing parody.

  5. ChoHag Silver badge

    Don't worry.

    We'll ask the liars to tell the computer to tell you when it's lying.

    1. Anonymous Coward

      Like RFC 3514, only this time they're serious
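      For the uninitiated: RFC 3514 is the April Fools' "evil bit", which asks malicious senders to flag their own packets by setting the reserved high-order bit of the IPv4 flags field. The parallel with asking deepfakers to label their own output writes itself; a toy check:

```python
# RFC 3514's "evil bit": the reserved high-order bit of the 16-bit
# IPv4 flags/fragment-offset word. Malicious senders are, of course,
# expected to set it honestly.

EVIL_BIT = 0x8000

def is_evil(flags_frag: int) -> bool:
    """Return True if the sender has declared malicious intent."""
    return bool(flags_frag & EVIL_BIT)

assert not is_evil(0x4000)  # DF set, evil bit clear: benign
assert is_evil(0x8000)      # sender has dutifully declared malice
```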
