Microsoft teases deepfake AI that's too powerful to release

Microsoft this week demoed VASA–1, a framework for creating videos of people talking from a still image, audio sample, and text script, and claims – rightly – it's too dangerous to be released to the public. These AI-generated videos, in which people can be convincingly animated to speak scripted words in a cloned voice, are …

  1. Version 1.0 Silver badge

    "speak scripted words in a cloned voice"

    Low Hell, Mood Gorning AI ... Oh sorry AI, you don't understand spoonerisms?

    1. Anonymous Coward
      Anonymous Coward

      Re: "speak scripted words in a cloned voice"

      Could have saved Microsoft a couple of billion dollars if they'd given me $50.


  2. Badgerfruit

    "How will anyone regulate who uses this for the right reasons?"

    Given something simple such as a pair of ladders could be used for crime, there is absolutely zero chance this will not be used for nefarious purposes.

    Regulate, make laws, etc., all you want, but if the fear of the death penalty doesn't stop horrific crimes, these new laws won't be any deterrent.

    Open Pandora's box and there is no going back. I can already see the old folk being conned in their millions by their "relatives" asking for money.

    1. NapTime ForTruth


    2. steviebuk Silver badge

      They can regulate all they want but we've already seen the idiot orange one (Trump) using it during his campaign.

    3. Civilbee

      Regulation versus Damage control

      Regulation indeed will be very challenging. That, however, doesn't mean we have to just let it roll over us.

      To start with, there are two basic options:

      A) Allow every single use of this technology that is not explicitly forbidden, and allow everything that is able to weasel out of the exact wording of what is forbidden by law. If damage from bulk usage of the technology piles up, government can then commission research, done by experts who overlap more than a bit with employees of the providers of the technologies, for one or more years. After that, congress, senate and other parliaments can debate for one or more years. Then comes the phase where special interest groups give their input, after which poorly worded legislation with plenty of loopholes can be voted through. Then, if the law gets broken, small fish get fried quickly while big players get a few years of relative safety before investigations start. After that come years of trials and appeals, state lawyers versus the best-paid lawyers in the world. Everything ends with either a slap on the wrist or the state having to pay high compensation to the tech company because there were some procedural errors in the first fine the court gave the company.

      B) Allow a few carefully considered and very well defined explicit usages. Forbid all other usages and make them unlawful. New usages must go through a rigorous process of approval.

      I am more than aware option B) has severe shortcomings, and it impedes technological innovation. Given the barrage of damage option A) is bound to create, I find it more than worthy to start looking into the possibilities of option B). We have seen what sort of responsible behaviour big tech companies have shown towards users, privacy, competition laws, tax laws... in the recent past.

      That leaves us with a third aspect, needed independently on top of either A) or B):

      C) Stringent, well written and seriously enforced privacy laws. Why? The damage this technology will do, generating a fake video from nothing but a picture and a short sample of voice (less than a minute with some technologies), will be multiplied if "players" have copious amounts of gathered information. If, for example, an attacker knows what shirt you recently bought, your current location, what activities and health problems you recently had, how you moan when things go bad... on one hand, and if the same attacker knows the financial state of your parents, when they won't be able to meet you in person to check, what arguments will persuade your parents better than generic arguments, when and how you helped them out when they needed your help, and when they were wrong about cybersecurity and you were right, so now they need to trust you because this time it is real... then your parents are as good as sitting ducks no matter how well you tried to educate them.

      One may think it's just a matter of being sharp yourself and educating your parents, but one forgotten detail when educating them, or one moment of weakness on their part, and it's over. Remember that many attackers will have millions of detailed records of previous attempts to influence people, to data-mine and feed through some "AI" to learn from. Even with my level of skill, I do not expect to avoid every single attempt. People with less skill are, unfortunately, sitting ducks.

      Just advising users to be careful with what information they post won't do it. Data harvesting is rampant and still on a sharp rise. Soon you likely won't be able to walk the streets without cars with "somewhat self-driving abilities" streaming much of the video their cameras continuously make "in order to improve safety" back to the mothership, while also data-mining it for "commercial use" (including using already-existing technology to lip-read).

      While we are at it, add:

      D) Make strict laws, with real sanctions for breaking them, on hardware, software and services security. Devices and software riddled with security holes will be another big amplifier of the damage these deepfake technologies can do: they allow easier covert installation of spyware, allow attackers to activate communication software, circumvent filter lists of who can get through, and even pop up lookalikes of popular communication software.

      Unfortunately, the "Brave New World" seems to be around the corner and it's laughing at and with us.

    4. Wade Burchette

      I am more worried about governments using deepfakes than people using them. Consider:

      You have a particularly annoying politician or advocate. This person has a lot of people listening to him, and if he is allowed to continue, it will be a serious threat to the current government's power and money. Mr. Smith Goes to Washington showed us that politicians have no qualms about inventing a scandal to destroy someone dangerous to their ivory tower. A vocal critic will have plenty of audio and visual source material for a deepfake to draw from. The government uses that to create a video of the thorn in their side doing something bad. The government then mobilizes their propaganda people in the media or social media to make sure the fake is spread far and wide. Outrage follows. And then the government uses this video to put an innocent person in jail for a long time.

      What is to stop this same government from creating fake videos of you? Say something they don't like to a friend -- find yourself in court because you were arrested for statutory rape, with the video evidence to back it up.

      I am far more worried the government will use this than scammers will. I have seen multiple presidents in the US openly defy the law with little consequence. A court rules against them; they carry out the action anyway. I am not just referring to Donald Trump or Barack Obama or George Bush Jr. Joe Biden recently openly admitted that he was defying the US Supreme Court over student loan forgiveness. And it is not just the US. Look at Russia, for instance. The point being that even a law prohibiting the government from making deepfakes to prosecute political enemies does not mean they will obey it. The law is something peons must obey, not them. I am more worried about governments using deepfakes because you could find yourself silenced in prison just to help preserve the current regime's power.

      1. Cav Bronze badge

        "Joe Biden recently openly admitted that he was defying the US Supreme Court"

        You have a poor understanding of the law. The US Supreme Court is not the law. It interprets the law as defined by congress and the constitution. Defying it is not breaking the law if the president's actions are legal. The constitution allows the president to undertake Executive Orders. "The executive Power shall be vested in a President of the United States of America." EOs are instructions to the executive branches of the government from their superior.

        Article 2 of the constitution grants the president a great deal of leeway and discretion. All presidents have used EOs and it is perfectly legal to do so. The solution to a president behaving illegally is the process of impeachment.

  3. Alien Doctor 1.1


    perhaps...

    they could do a video of SatNad promising to simplify licensing terms, return the Start menu to a usable state, stop advertising in Windows and make Win11 suitable for all users without needing hardware upgrades and onerous tracking/transmission of all one does and uses.

    1. 43300 Silver badge

      Re: perhaps...

      That wouldn't be a very convincing fake though, would it? Everyone working in IT would realise immediately that this couldn't possibly be real!

  4. yetanotheraoc Silver badge

    Shared skepticism

    "they threaten to undermine the shared sense of reality that underpins our society, creating a climate of uncertainty and skepticism where citizens are left questioning the veracity of every piece of information they encounter."

    A shared sense of reality is some troubling philosophical shit, but let's leave that aside. To my way of thinking the world would be _better_ if citizens questioned the veracity of every piece of information they encountered. Or at least move the needle from 100% trust to somewhere in the middle, where they are at least capable of asking "Is that true?"

    1. 43300 Silver badge

      Re: Shared skepticism

      Yes, quite, rather than the current situation:

      BBC - most people believe it unquestioningly

      Guardian - most on the left believe it unquestioningly

      Telegraph - most on the right believe it unquestioningly


    2. Filippo Silver badge

      Re: Shared skepticism

      Questioning is good.

      Problems arise when people trust everything from sources they like or from sources that confirm their beliefs, and trust nothing from sources they dislike or that challenge their beliefs. This is insidious because it looks like questioning, but actually isn't.

  5. NapTime ForTruth

    All but literally on script:

    "Gee, the lack of humility before nature that's being displayed here, uh... staggers me."

    "Don't you see the danger...inherent in what you're doing here? [Technological] power is the most awesome force the planet's ever seen, but you wield it like a kid that's found his dad's gun."

    "...your scientists were so preoccupied with whether or not they *could* that they didn't stop to think if they *should*."

    "God help us, we're in the hands of engineers."


    We are a bloody stupid species.

    (For the inexperienced, the quotes above are from the original "Jurassic Park", a movie about the inevitability of disaster when humans act through their arrogance and greed instead of their - admittedly rare - wisdom and intelligence. The movie also mentioned dinosaurs.)

    1. Anonymous Coward
      Anonymous Coward

      A "Happy Love Day!" to everyone in Jurassic Land then ... from a talking Tyrannosaurus rex dinosaur, in AI purple!

    2. LionelB Silver badge

      Speaking as a scientist, that's why we have ethics committees. Also speaking as a scientist, I'll admit to feeling slightly safer in the hands of engineers* (but wait... do they have ethics committees?)

      *Proper engineers, not software engineers. I also moonlight as a software engineer.

    3. Cav Bronze badge

      "a movie about the inevitability of disaster when humans act..."

      No, there was nothing inevitable about the events in Jurassic Park. Malcolm's comments on the matter have always annoyed me. He was wrong. It was a contrived situation to make a movie. Of course, without that contrivance it would have been a boring story. However, nothing failed at the park. The disaster resulted from an IT insider deliberately sabotaging the facilities. So, yes, his greed caused the problem but otherwise the corporation that set it all up made no mistakes.

  6. ChoHag Silver badge

    The human brain: wired to believe what it sees and hears. Because humans never had to deal with liars and cheats until we got computers.

    1. 43300 Silver badge

      The advanced tools available to liars and cheats, and the international reach they now have, are largely products of the past few decades, though.

    2. Neil Barnes Silver badge

      Wired to believe what it sees and hears because those faint stripes in the bush might have a tiger attached to them...

      There is no way we can solve this one, save perhaps the Aliens option --->

    3. Pascal Monett Silver badge

      I wonder what computer Machiavel had . . .

    4. Filippo Silver badge

      You are correct in that lies are not a new thing, and the human brain is not at all wired to believe everything.

      However, we do get a whole lot of influence from the society we are raised in, and up until very recently, photos and videos were relatively expensive to fake convincingly, and faking a specific person was extremely difficult.

      Eventually, society will adapt, and everyone will just "know" that if they see a politician screwing a chicken on a social media post, it's almost certainly fake.

      Until then, though, we should be aware that the environment has changed since our "gut feeling" was trained, and that we ought to apply more conscious effort (which, frankly, is an overall good idea anyway).

  7. Anonymous Coward
    Anonymous Coward

    If it's better than Designer

    It'll still be pretty naff.

  8. that one in the corner Silver badge

    The implications for personalizing emails

    are utterly horrifying: the implied threat of receiving videos instead of proper plain-text emails personalised by going through good old mail merge.

    1. Inventor of the Marmite Laser Silver badge

      Re: The implications for personalizing emails

      Doesn't worry me. Straight to trash.

    2. Filippo Silver badge

      Re: The implications for personalizing emails

      Oh, my rule of thumb on video emails is going to be the same as audio messages: if it's from one of my customers or from family, it gets the lowest priority and a polite warning to avoid doing this in the future; if it's not, it gets the bin.

  9. druck Silver badge

    Microsoft just ensuring...

    ...they get in first to make the tools that will be performing the Microsoft scams from now on.

  10. Khaptain Silver badge

    Why are they even making these tools

    What was the objective that initiated the project?

    It's like someone came up with the idea "Can we build a gun that kills better than all others?", and then says, oops, sorry, we shouldn't have done that.

    1. HuBo

      Re: Why are they even making these tools

      Well, this evening, on the France 5 TV channel, they will show interviews of Iranians (in Iran) who are under 25 years old and criticize the Authoritarian Theocracy (Islamic Republic). Rather than fuzzifying their faces, they use AI to anonymize them so that they look different and avoid repression (France TV site, HuffPost site). It seems to me like a useful application of this tech. These kids are hopefully the future of Iran:

      "You are so angry against the regime that you dream of beating their face!" (Sarah)

      1. Khaptain Silver badge

        Re: Why are they even making these tools

        I can't for a moment understand why that's any better than fuzzifying the faces... Imagine the damage if the AI face actually looked like a real person, someone else, not the protestor, and the Iranian regime came to speak to them and their families by mistake.

        1. HuBo

          Re: Why are they even making these tools

          I'd say it promotes a more direct visual interaction with the audience. Probably more in tune with the narrative of this program where the clerics want women to hide themselves under layers of approved clothing, much against the battle cry "girls just want to have fun" and related "women, life, freedom" movement. The State would have to be rather daft to try and use those AI faces in search-and-persecute missions IMHO (they looked a bit like Kim Kardashian derivatives to me).

      2. druck Silver badge

        Re: Why are they even making these tools

        What's wrong with showing them in silhouette, as has been done since long before computers?

        1. HuBo
          Thumb Up

          Re: Why are they even making these tools

          It seemed like a more effective counterstance to a regime that seeks to regiment feminine expression IMHO.

    2. david1024

      Re: Why are they even making these tools

      Easy answer is that millennials and Gen Z don't want to work, and there aren't enough Gen X, so we are going to get digital doubles that work for just the cost of hardware and electricity. They aren't going to fake real folks; they are going to make puppets and not pay any talent.

      Humans are going to be obsolete. I think we are already counting down to the first digital only influencer. May have already happened.

      1. Khaptain Silver badge

        Re: Why are they even making these tools

        And you believe that Max Headroom can plough fields in the morning, code in the afternoon and put baby to bed in the evening, and all that for free?

  11. xyz Silver badge

    And you'll still have to wait...

    Until it loads and then updates and the interface changes, like a sort of "plastic" Teams.

  12. Dinanziame Silver badge

    Only a matter of time

    There is practically no technology that can be prevented from existing. It's good that they are not releasing it, and it's also good that they can demonstrate this is possible; there was no point in stopping themselves from creating something so dangerous, because what they did is not that hard and is not a mind-blowing advance on what other people are doing. At most, we are one year away from the same technology being developed and actively used by bad actors.

    1. Pascal Monett Silver badge

      Re: Only a matter of time

      They're not releasing it yet.

      They will, in some form or other.

      Besides, you're right : it's only a matter of time before someone else does the same and releases it. On the one hand, there's a lot of fun to be had. Upload your favorite clip and dub everyone in Darth Vader's voice. Could be a hoot. Unfortunately, the possibilities for mischief are literally boundless.

      That won't stop 'em from releasing it though.

  13. Blackjack Silver badge

    Not released for now you mean.

    Didn't we already hear this song and dance about generative AI before, only for the thing to be eventually released?

  14. PeteSzerszen


    Priorities

    Just imagine if they put half of this effort into Windows.

    1. Mark White

      Re: Priorities

      Just imagine if this was 90% of Windows 12

  15. fg_swe Bronze badge

    The FIX

    Cryptographic signing of all multimedia material, similar to digital software signing.

    Existing tooling and organizations such as CAs could be used for this purpose.
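    A minimal sketch of the signing idea, assuming Python's third-party `cryptography` package; real deployments would distribute the public key in a CA-issued certificate and embed the signature in the media container, neither of which is shown here:

```python
# Sketch: detached Ed25519 signature over a media file's raw bytes.
# Assumes the third-party "cryptography" package is installed; in practice
# the public key would come from a CA-issued certificate, not be held inline.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_media(private_key: Ed25519PrivateKey, media: bytes) -> bytes:
    """Return a detached signature over the raw media bytes."""
    return private_key.sign(media)

def verify_media(public_key, media: bytes, signature: bytes) -> bool:
    """True if the media bytes match the signature, False if tampered."""
    try:
        public_key.verify(signature, media)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()          # publisher's signing key
video = b"...raw video bytes..."            # stand-in for real media
sig = sign_media(key, video)

assert verify_media(key.public_key(), video, sig)                 # authentic
assert not verify_media(key.public_key(), video + b"x", sig)      # tampered
```

    The hard part is not the cryptography but the trust infrastructure: who issues the keys, how revocation works, and how signatures survive transcoding.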

  16. johnrobyclayton

    My name is not Thomas Covenant

    But I am quite happy to be The Unbeliever.

  17. steviebuk Silver badge

    Too powerful to release

    I don't see the point in powerful AI. Powerful enough that you can get it to clean your house, do the hoovering, washing and dinner, but nothing more. Keep them basic. Currently watching Humans (which I'm struggling with due to Colin Morgan's piss-poor acting). Because they are in human form, people refer to them as "slaves"; so why don't we consider washing machines, dishwashers, the robots that build cars etc. as slaves too? If you gave them general intelligence, it would make the point of them pointless. The point being you can use them as a tool that can work 24/7 (in between charging), won't require holidays, won't require leave, etc. If you give them general intelligence, then we'll be back to square one: a whole bunch of bots that want rights and won't want to do the above. The whole point of making them is so you can put them in dangerous jobs; making them "powerful" in a general-intelligence sense would be pointless.
