
Brit lawmaker targeted by AI deepfake fails to get answers from US Big Tech

A member of the UK Parliament's lower house who was the victim of a deepfake AI campaign this week had a rare chance to confront the Big Tech executives who helped spread it. Their answers disappointed. Representatives from Meta, Google, and X stumbled, offered platitudes, and explained their respective policies, but did …

  1. elsergiovolador Silver badge

    Priorities

    Funny they put so much effort in a case affecting an individual, but will do nothing about rampant scams being pushed to unsuspecting users 24/7 that are in the grand scheme of things much more damaging.

    1. lglethal Silver badge
      Go

      Re: Priorities

      That's true, but here was a specific chance to hold the social media companies to account in front of a victim who happens to be a parliamentarian, in front of the entirety of parliament, in a setting where their usual course of just flat out lying would land them in front of court.

      If nothing else, it might just wake up the rest of the Nonces in Parliament, that they could be next and that the Social Media firms will not do a thing to help them. So maybe they will act. And in protecting themselves, they might just end up protecting the rest of the users.

      Getting such a message to finally penetrate the thick skulls of parliamentarians is no easy job after all...

    2. The Man Who Fell To Earth Silver badge

      Re: Priorities

      Time to hold these companies financially accountable just like "regular" companies. LA was a good start.

    3. rg287 Silver badge

      Re: Priorities

      Funny they put so much effort in a case affecting an individual, but will do nothing about rampant scams being pushed to unsuspecting users 24/7 that are in the grand scheme of things much more damaging.

      True... but.

      In terms of demonstrating harms in any meaningful way (e.g. in court), it's not enough to wave vaguely at a lot of stuff and say "bad". Not if it comes to judicial rulings or anything with teeth. You often need to get into the specifics of a single case to test the substance of the allegations - which may then set a useful precedent for other cases, or allow other cases to be expedited and bundled in on a "yep, same category" basis once the prima facie details have been checked.

      It's why class action or large cases sometimes focus on a John Doe as the test case.

      In this case, asking about misinformation in vague terms will garner a similarly vague response of "we have systems to minimise it", whereas being able to say "Why didn't you kill these defamatory deepfakes of this person who is sat in this room staring at you?" really lets the committee needle the respondents, demand specific answers and get into the nitty-gritty on a meaningful basis.

      It's also why yesterday's ruling against Meta/YouTube in LA is so foundational - it's a major precedent that could affect hundreds of other ongoing cases.

    4. teebie

      Re: Priorities

      "Funny they put so much effort in a case affecting an individual,[...]"

      In fairness to the tech companies, they put fuck all effort into fixing this case.

      So it's not a case of two-tier enforcement, simply no enforcement.

  2. My other car WAS an IAV Stryker

    End of satire?

    "My instinct is to pass a very simple law that somebody's identity belongs to them and cannot be stolen, used, misappropriated, whatever the purpose… You should go to bed a night not fearing that in the morning, you find a deeply damaging, disruptive and dangerous misrepresentation of you."

    Since this is the UK, the US First Amendment doesn't apply. Does this mean saying goodbye to any impersonation, even Auto-Vox, for comedy purposes? Maybe there needs to be a carve-out to protect a certain level of speech.

    Deepfakes that are crafted to pass off as serious would (and should) definitely be illegal. I wish all public lies were illegal, not just when under oath.

    1. Dan 55 Silver badge
      Flame

      Re: End of satire?

      We're here because Big Tech made a technology which spewed out images and videos but no way of identifying them as such - any watermarking they added as an afterthought was easily removed.

      If it sounds familiar it's just like how they set up social media empires completely unsuitable for children but with no effective parental controls, and now we're all in this age verification hell where governments have to mop up after the fact but because they aren't in the technology business and don't know how to legislate for it, we've ended up with people's ID being uploaded everywhere and then leaked or used as more data for the advertising industrial complex.

      Their modus operandi is to keep pushing until they break yet another part of society. It's called disruption.

    2. rg287 Silver badge

      Re: End of satire?

      Since this is UK, the US First Amendment doesn't apply. Does this mean saying goodbye to any impersonation, even Auto-Vox, for comedy purposes? Maybe there needs to be a carve-out to protect a certain level of speech.

      People already have a right to their likeness, which is why commercial photographers get model releases, specifying what the images will be used for and any limitations. These rights are generally focused around commercial exploitation of the image or likeness, so do not preclude a "fair use" defence for purposes of journalism or things like satire (Thatcher is hardly likely to have signed a model release allowing Spitting Image to use her likeness).

      We also have libel and slander law where stating things that are untrue about a person (or republishing such) may be an offence. Although in this case, the social firms may get off under the "Innocent Dissemination" defence offered to operators of websites in the Defamation Act 2013.

      It's not a big step to look at an "impersonation" offence, which doesn't include satire or similar "fair comment", but basically bulks out libel law to explicitly include an attempt to deceive, perhaps tightening regulation on websites to say it's not enough to add community notes - they really have to take stuff down.

      1. Strahd Ivarius Silver badge
        Facepalm

        Re: End of satire?

        The "social" firms are actively promoting deep fakes because it earns them more engagement from the users they addicted to their filth.

        Nothing like "innocent dissemination"...

      2. Dagg

        Re: End of satire?

        (Thatcher is hardly likely to have signed a model release allowing Spitting Image to use her likeness)

        From what I understand, Thatcher actually enjoyed her portrayal on Spitting Image. Especially the interactions with her cabinet.

        1. rg287 Silver badge

          Re: End of satire?

          Yes, I believe she did - but they wouldn't have been able to make Spitting Image if she and everybody else depicted had been required to sign releases for their likeness.

          It only exists because that fair use principle for fair comment & satire exists.

      3. nonpc

        Re: End of satire?

        "a "fair use" defence for purposes of journalism or things like satire"

        Given that 'responsible journalism' is as big an oxymoron as 'truthful politician', even without the unrestrained social media influencers with their own truths, how would use for journalistic purposes be deemed a fair use?

    3. Tron Silver badge

      Re: End of satire?

      quote: I wish all public lies were illegal.

      That would be the end of politics, not satire.

      1. My other car WAS an IAV Stryker
        Go

        Re: End of satire?

        Tron: "That would be the end of politics, not satire."

        GOOD!!! (Right?)

        1. cyberdemon Silver badge
          Coffee/keyboard

          Re: End of satire?

          If only it could be classified into Good or Bad

          The real problem, I fear, is that democracy is being entirely dissolved, because people are no longer thinking independently for themselves. The vast majority spend their time scrolling through what they may believe to be an unordered or chronologically ordered list of posts, perhaps from people they are friends with / have subscribed to.

          Unfortunately, Social Media (and now so-called "AI") mean that nobody sees the same thing - everyone sees a manipulated list, adjusted in real-time, of content that the tech companies believe will create a) the most engagement for that particular person and b) whatever their "real customers", i.e. advertisers and, increasingly, political parties, want individual people to see.

          Nobody sees the same list of videos or ads on TikTok, YouTube, X, Facebork et al - they see whatever is optimal to boost their engagement (ie ragebait etc) or to push them in a given (N-dimensional, not just left/right) direction, dictated by paying (i.e. "Davos class") customers.

          And that's not even to get started on LLM chatbots, which people are using almost as an offload of thinking/reasoning/research, with no idea how they could be manipulated, now or in future.

          I imagine if George Orwell could have seen the world today, he would have vomited all over his typewriter.

  3. EricM Silver badge

    "Social" networks thrive on misinformation...

    They sell their users' eyeballs to advertisers, so what they need is their users' engagement. Keep them scrolling, keep them online, consuming one post after another.

    Emotions create engagement.

    Negative emotions create more engagement than positive emotions.

    Negative emotions are also easier to create at scale.

    Furious users are the most engaged.

    How do you create furious users? Promote the most controversial messages or the barely legal verbal attacks or images. Tell them lie after lie after lie about each other.

    In short:

    I guess it's nothing personal or political in the end, but causing mayhem in politics and society is just a very effective means to maximize profits for the operators of social networks.

    1. The Man Who Fell To Earth Silver badge

      Re: "Social" networks thrive on misinformation...

      Is there anything real on Reels? I doubt it.

      1. EricM Silver badge

        Re: "Social" networks thrive on misinformation...

        True, but a strong emotional reaction will in nearly all cases override the more rational "hey, is this real?" impulse.

        That base mechanism has a pretty dangerous mob-forming potential.

  4. LessWileyCoyote

    "My instinct is to pass a very simple law"

    Very simple laws + lack of tech knowledge may mean Unintended Consequences.

  5. munnoch Silver badge

    "it was down-ranked"

    Well that's alright then. Specifically designed to mislead and confuse, no societal benefit possible, but it still goes in front of 10-20% of the number of users it would if it were accurate.

    How well would a newspaper get on if it flat out made shit up and put it into 1 in 10 print impressions? Yeah ok, I'll get my coat...

    I hate this version of the future.

    1. Korev Silver badge
      Thumb Up

      Re: "it was down-ranked"

      I was about to make this exact point

    2. I ain't Spartacus Gold badge

      Re: "it was down-ranked"

      The obvious answer is to make them publishers. The internet is no longer tiny and desperately in need of legal protection. So we just make all the big players publishers in any case where they have promoted content, pushed it into users' feeds or put it on the front page.

      We can't make them responsible for all posted content. But we could force them to give proper reporting tools, and then make them legally liable for anything they then choose to leave up. Obviously this then gives companies the chance to destroy all negative reviews, by spurious legal claims. But then if Youtube want to destroy their own business model by banning everything on first complaint - that's up to them. Someone else can replace them. You could also have a rule for persistent reporters, that a company can legally punish them for crying wolf - by ignoring/downgrading future complaints.

      So if I post something libellous - then that's on me. If someone complains, then the company will have to decide whether to nuke the post, or leave it up. If they leave it up, they're jointly liable, with me.

      However, if I post something libellous, and they put it into their general news feed (for clicks), then they are acting as my publisher, so are jointly liable with me from the start.

      They should have an easier legal test to pass. It shouldn't be their job to do deep research on what I say; they should only have to take reasonable steps to make sure it's not obviously illegal or libellous. The more effort they spend promoting it, the more checking they should have to do.

      I don't know if the publishing thing should count for followers though. If people have actively chosen to "follow" my posts, I'm not sure if that should count as publishing. I think the responsibility should remain with the poster. Though, if the poster regularly posts illegal/libellous content, then maybe you could argue the company become jointly liable again.

      This would harm the platforms and change useful public service content - but I'd argue that Facebook and Google make massive profits - and can afford to clean up some of their messes.

      I also think this should be extended to adverts. Most internet companies (and here I include The Register itself) seem to take the attitude of: we just get ads from our partners - we know nothing. In general the El Reg ones we've complained about have been video ones, or ones that expand to cover the whole screen, or bring the browser to its knees. But maybe 20% of the ads you see on YouTube are obvious fraud. Similarly, when I had a Facebook account, maybe half the ads in the sidebar were for "you've won an iPad/car/house as the millionth user to be shown this ad". I guess I didn't get the well-paying advertisers, as I'd given them too little personal data.

      Some scams are unavoidable. But many are fucking obvious frauds, if these companies gave a fuck about their users and didn't just take the ads automatically. So we could make them manually check them, or suffer the consequences, as with TV. Or we could force advert reporting tools on them - every ad must have a report-obvious-scam/fraud/illegality button. Again with penalties for users who abuse the system and just report every ad.

      None of this is new. It's just making the big internet companies follow the same laws everyone else already has to.

    3. Androgynous Cupboard Silver badge

      Re: "it was down-ranked"

      Try running a video claiming "Zuckerberg is an X" for some suitably damning X, and see how long it stays up there.

  6. Will Godfrey Silver badge
    Mushroom

    Needs serious financial penalties

    ... and I mean SERIOUS

    Nothing less will make any difference.

    1. BebopWeBop Silver badge

      Re: Needs serious financial penalties

      Prison terms might be more effective.

  7. Pascal Monett Silver badge

    "struggle to explain"

    Not at all.

    They very successfully pretended to have trouble explaining, which means that, when they left the hearing, they were mentally high-fiving each other on another sand bank skirted without issue.

    After all, if the UK starts getting uppity, they can very well simply declare that they can no longer offer their services in the UK.

    It's not like they'll miss a few thousand users per hour.

    1. Excused Boots Silver badge

      Re: "struggle to explain"

      "After all, if the UK starts getting uppity, they can very well simply declare that they can no longer offer their services in the UK."

      We can but hope....!

  8. Andy The Hat

    A different approach

    I may have a different view on this.

    If a newspaper published this material they would be liable as they are the source of the material.

    If I published false information I would be liable as I was the source of the material.

    The problem for Meta et al and the Government is that if I publish material on Google's site, who is liable?

    There have been arguments about this for years.

    If the Government had the balls to issue a decree and *either* go after the media companies or go after the individuals producing the material there would be recourse.

    At the moment we're in no man's land where the government doesn't want to scare the big social media companies (think of the post-politics directorships ...) but think it's too hard to chase those actually producing the material so are just staring at oncoming headlights and doing nothing significant.

    We need a middle road - the media companies should be legally mandated to take down material, not at the simple request of one individual but through a system designed to prevent vexatious removal claims, and then those media companies need to provide user data to the authorities *by warrant* to facilitate court proceedings against those individuals posting false materials. If the companies claim not to be the source publishers then there are no journalistic restrictions on the source data ...

    1. Herring`

      Re: A different approach

      My understanding is that social media gets a pass because they are not a publisher. They are just showing what people post. However, since they are managing what people are shown, my feeling is that this could count as editorial control and they ought to be treated the same as a publisher. I can't be arsed to add underlines and italics.

      I haven't been onto Facebook for years, but in the early days, it would show you what your friends posted in chronological order - which seems fine. When I gave it up, it was just pushing crap at me - which is not fine.

      1. Paul Crawford Silver badge

        Re: A different approach

        Simple solution is to make the media liable, but let them pass the fine on to the originator after paying it, and get their money back that way.

        Oh dear, is that not in their business interest? How sad.

    2. doublelayer Silver badge

      Re: A different approach

      We may need it, but there's a lot more difficulty trying to specify that, let alone implement it. There is far too much promise in being able to control what people say online, who says things, and where they say it. Mandatory removal is a powerful tool for silencing people, so you need strong restrictions on it, because there will be people whose entire job is finding ways around that.* Big platforms tend to quite like risks that apply to everyone, because it makes it harder for anything small to compete with them: the smaller players don't yet have the cash for a big team of lawyers whose job is not to comply with the law but to delay or redirect any accusation that they haven't. Mandatory reporting of source user data already exists, but it either still means a full investigation, as when police get the data that exists, or mandatory collection of much more useful data so the authorities have an easier time, with resulting losses in privacy.

      * There are already companies whose entire job is dealing with stuff online you don't want to appear. Some of their biggest tools involve using SEO to try to prevent people from finding it, but they have innovators who try things like arguing that quoting someone by showing a clip of a video is illegal because of how the video was copied off YouTube, and suing over that. Any mechanism you set up that gives them a "delete this because I don't like it" button is going to be heavily abused, and no website has much motivation to resist, the same way that they don't have a motivation to investigate right now.

  9. StewartWhite Silver badge
    Stop

    If it's you it's fair comment. If it's me as an MP then it's offensive and MUST be removed!

    As ever with politicians, they couldn't care less if it affects us sans-culottes but if it affects the nobility or an "important" member of the bourgeoisie such as MPs then "It's a gross invasion of privacy and must be stopped forthwith".

    Then these self-same individuals with all the self awareness of a pet rock have the gall to complain that the public are voting for the "wrong" parties in the shape of the populists. Whilst IMO only a fool or a knave would vote for Nigel "I don't even know where Clacton is" Farage or Zach "I can hypnotise your breasts bigger" Polanski I get where the public are coming from regarding the rejection of the status quo.

    Vote Count Binface, you know it makes sense!

    1. PinchOfSalt

      Re: If it's you it's fair comment. If it's me as an MP then it's offensive and MUST be removed!

      Is that entirely fair?

      You're ascribing a motive for the MP's behaviour, when I doubt you've ever met her.

      As is often the case, what affects others is of lower priority to that which affects you. This is true for MPs as they largely have to respond to the loudest problems, not the most important. This is the disadvantage of a 5 year term. When something directly affects them, then I suspect they then get more visibility of the breadth of that problem which has otherwise been hidden away in the noise.

      You're right that people are frustrated with the status quo. However, the level of misinformation on social media about what is and is not working is so high that the only thing that can happen is people becoming more frustrated and/or angry. Either angry about how much misinformation there is, or about what is in the misinformation itself.

      It's important to note that these companies only exist because they have algorithms that make people addicted to them. That addiction is rooted in frustration and anger. They don't really care which political party wins; so long as they can keep people frustrated and angry, they win.

      As a comedian pointed out, the trouble with the Internet is that it allows stupid people to gather together in crowds, whereas in the past they would have been moderated by the more knowledgeable around them.

  10. Rich 2 Silver badge

    The UK is not the USA

    “ A member of the UK Parliament's lower house…”

    …is called an MP. Not a “lawmaker”

    1. doublelayer Silver badge

      Re: The UK is not the USA

      A member of the US's lower house isn't called a lawmaker either. They would be a "representative". Lawmaker, however, is a generic term for a person who can make laws, which does include MPs, including the one in this article whose proposed solution is for them to make a law.

    2. Fruit and Nutcase Silver badge

      Re: The UK is not the USA

      "Laws are like sausages, it is better not to see them being made"

      Lawmakers - best not see them at work

  11. Acrimonius

    First to use social media for propaganda

    So they cry foul, but when they use social media to spread their lies to the masses it's OK.

  12. Tron Silver badge

    Relax everyone.

    They will phase out social media, block foreign websites that are not licensed in the UK, and then require licensing for UK websites. Within a decade the internet will work like Ceefax. One step at a time so the nudge units can get the proles onside. Now get your papers in order so you can update your disturbingly dissident Linux OS.

  13. Doctor Syntax Silver badge

    I suppose they might well think "If Starmer can get away without answering at PMQs so can we".

  14. Winkypop Silver badge

    It’s easy folks

    Just don’t touch any of their confected shite.

    Don’t engage with garbage.
