
"speak scripted words in a cloned voice"
Low Hell, Mood Gorning AI ... Oh sorry AI, you don't understand spoonerism?
Microsoft this week demoed VASA–1, a framework for creating videos of people talking from a still image, audio sample, and text script, and claims – rightly – it's too dangerous to be released to the public. These AI-generated videos, in which people can be convincingly animated to speak scripted words in a cloned voice, are …
"How will anyone regulate who uses this for the right reasons?"
Given that something as simple as a ladder can be used for crime, there is absolutely zero chance this will not be used for nefarious purposes.
Regulate, make laws, etc. all you want, but if the fear of the death penalty doesn't stop horrific crimes, these new laws won't be any deterrent.
Open Pandora's box and there is no going back. I can already see old folk being conned in their millions by their "relatives" asking for money.
Regulation indeed will be very challenging. That, however, doesn't mean we have to just let it roll over us.
To start with, there are two basic options:
A) Allow every single use of this technology that is not explicitly forbidden, along with everything that can weasel out of the exact wording of what is forbidden by law. If damage from bulk usage of the technology piles up, the government can then commission research, done by experts who more than a bit overlap with employees of the providers of the technologies, for one or more years. After that they can debate in congress, senate and other parliaments for one or more years. Then comes the phase where special interest groups give their input on the subject, after which poorly worded legislation with plenty of loopholes gets voted through. Then, if the law gets broken, small fish get fried quickly while big players get a few years of relative safety before investigations start. After that come years of trials and appeals, state lawyers versus the best-paid lawyers in the world. Everything ends with either a slap on the wrist or the state having to pay high compensation to the tech company because there were some procedural errors in the first fine the court gave the company.
B) Allow a few carefully thought-out and very well defined explicit usages. Forbid all other usages and make them unlawful. New usages need to go through a rigorous process of approval.
I am more than aware that option B) has severe shortcomings, and that it impedes technological innovation. Given the barrage of damage option A) is bound to create, I find it more than worthwhile to start looking into the possibilities of option B). We have seen what sort of responsible behaviour big tech companies have shown towards users, privacy, competition laws, tax laws... in the recent past.
That leaves us with a third aspect, needed independently on top of either A) or B):
C) Stringent, well written and seriously enforced privacy laws. Why? The damage done by this technology for generating a fake video from nothing but a picture and a short voice sample (less than a minute with some technologies) will be multiplied if "players" have copious amounts of gathered information. Suppose an attacker knows what shirt you recently bought, your current location, what activities and health problems you recently had, how you moan when things go badly... and suppose the same attacker also knows the financial state of your parents, when they won't be able to meet you in person to check, which arguments will persuade your parents better than generic ones, when and how you helped them out when they needed it, and when they were wrong about cybersecurity and you were right, so that now they need to trust you because this time it is real... then your parents are as good as sitting ducks, no matter how well you tried to educate them.
One may think it's just a matter of being sharp yourself and educating your parents, but one forgotten detail when educating them, or one moment of weakness on their part, and it's over. Remember that many attackers will have millions of detailed records of previous attempts to influence people, ready to data-mine and run through some "AI" to learn from. Even with my level of skill, I do not expect to avoid every single attempt. People with less skill are, unfortunately, sitting ducks.
Merely advising users to be careful with what information they post just won't do it. Data harvesting is rampant and still on a sharp rise. Soon you likely won't be able to walk the streets without cars with "somewhat self-driving abilities" streaming much of the video their cameras continuously record "in order to improve safety" back to the mothership, while also data-mining it for "commercial use" (including using already existing technology to lip-read).
While we are at it, add:
D) Make strict laws with real sanctions for violating them when it comes to hardware, software and services security. Devices and software riddled with security holes will be another big amplifier of the damage these deepfake technologies can do. They allow easier covert installation of spyware, let attackers activate communication software, circumvent filter lists of who can get through, and even pop up look-alikes of popular communication software.
Unfortunately, the "Brave New World" seems to be around the corner and it's laughing at and with us.
I am more worried about governments using deepfakes than about individuals. Consider:
You have a particularly annoying politician or advocate. This person has a lot of people listening to him, and if he is allowed to continue, he will be a serious threat to the current government's power and money. Mr. Smith Goes to Washington showed us that politicians have no qualms about inventing a scandal to destroy someone dangerous to their ivory tower. A vocal critic will have plenty of audio and visual source material for a deepfake to draw from. The government uses that to create a video of the thorn in their side doing something bad. The government then mobilizes their propaganda people in the media or social media to make sure the fake is spread far and wide. Outrage follows. And then the government uses this video to put an innocent person in jail for a long time.
What is to stop this same government from creating fake videos of you? Say something they don't like to a friend -- find yourself in court because you were arrested for statutory rape, with the video evidence to back it up.
I am far more worried the government will use this than scammers will. I have seen multiple US presidents openly defy the law with little consequence. A court rules against them; they carry out the action anyway. I am not just referring to Donald Trump or Barack Obama or George Bush Jr. Joe Biden recently openly admitted that he was defying the US Supreme Court over student loan forgiveness. And it is not just the US. Look at Russia, for instance. The point being that even a law prohibiting the government from creating deepfakes to prosecute political enemies does not mean they will obey that law. The law is something peons must obey, not them. I am more worried about governments using deepfakes because you could find yourself silenced in prison just to help preserve the current regime's power.
"Joe Biden recently openly admitted that he was defying the US Supreme Court"
You have a poor understanding of the law. The US Supreme Court is not the law. It interprets the law as defined by Congress and the Constitution. Defying it is not breaking the law if the president's actions are legal. The Constitution allows the president to issue Executive Orders: "The executive Power shall be vested in a President of the United States of America." EOs are instructions to the executive branches of the government from their superior.
Article 2 of the Constitution grants the president a great deal of leeway and discretion. All presidents have used EOs, and it is perfectly legal to do so. The solution to a president behaving illegally is the process of impeachment.
"they threaten to undermine the shared sense of reality that underpins our society, creating a climate of uncertainty and skepticism where citizens are left questioning the veracity of every piece of information they encounter."
A shared sense of reality is some troubling philosophical shit, but let's leave that aside. To my way of thinking, the world would be _better_ if citizens questioned the veracity of every piece of information they encountered. Or at least moved the needle from 100% trust to somewhere in the middle, where they are at least capable of asking "Is that true?"
Questioning is good.
Problems arise when people trust everything from sources they like or from sources that confirm their beliefs, and trust nothing from sources they dislike or that challenge their beliefs. This is insidious because it looks like questioning, but actually isn't.
All but literally on script:
"Gee, the lack of humility before nature that's being displayed here, uh... staggers me."
"Don't you see the danger...inherent in what you're doing here? [Technological] power is the most awesome force the planet's ever seen, but you wield it like a kid that's found his dad's gun."
"...your scientists were so preoccupied with whether or not they *could* that they didn't stop to think if they *should*."
"God help us, we're in the hands of engineers."
_____
We are a bloody stupid species.
(For the inexperienced, the quotes above are from the original "Jurassic Park", a movie about the inevitability of disaster when humans act through their arrogance and greed instead of their - admittedly rare - wisdom and intelligence. The movie also mentioned dinosaurs.)
"a movie about the inevitability of disaster when humans act..."
No, there was nothing inevitable about the events in Jurassic Park. Malcolm's comments on the matter have always annoyed me. He was wrong. It was a contrived situation to make a movie. Of course, without that contrivance it would have been a boring story. However, nothing failed at the park. The disaster resulted from an IT insider deliberately sabotaging the facilities. So, yes, his greed caused the problem, but otherwise the corporation that set it all up made no mistakes.
You are correct in that lies are not a new thing, and the human brain is not at all wired to believe everything.
However, we do get a whole lot of influence from the society we are raised in, and up until very recently, photos and videos were relatively expensive to fake convincingly, and faking a specific person was extremely difficult.
Eventually, society will adapt, and everyone will just "know" that if they see a politician screwing a chicken on a social media post, it's almost certainly fake.
Until then, though, we should be aware that the environment has changed since our "gut feeling" was trained, and that we ought to apply more conscious effort (which, frankly, is an overall good idea anyway).
Well, this evening, on the France 5 TV channel, they will show interviews with Iranians (in Iran) who are under 25 years old and criticize the Authoritarian Theocracy (Islamic Republic). Rather than blurring their faces, they use AI to anonymize them so that they look different and avoid repression (France TV site, HuffPost site). It seems to me like a useful application of this tech. These kids are hopefully the future of Iran:
"You are so angry against the regime that you dream of beating their face!" (Sarah)
I can't for a moment understand why that's any better than blurring the faces... Imagine the damage if the AI face actually looked like a real person, someone other than the protester, and the Iranian regime came to speak to them and their families by mistake.
I'd say it promotes a more direct visual interaction with the audience. Probably more in tune with the narrative of this program where the clerics want women to hide themselves under layers of approved clothing, much against the battle cry "girls just want to have fun" and related "women, life, freedom" movement. The State would have to be rather daft to try and use those AI faces in search-and-persecute missions IMHO (they looked a bit like Kim Kardashian derivatives to me).
The easy answer is that millennials and Gen Z don't want to work, and there aren't enough Gen Xers, so we are going to get digital doubles that work for just the cost of hardware and electricity. They aren't going to fake real folks; they are going to make puppets and not pay any talent.
Humans are going to be obsolete. I think we are already counting down to the first digital-only influencer. It may have already happened.
There is practically no technology that can be prevented from existing. It's good that they are not releasing it, but it's also good that they can demonstrate this is possible, and there was no point in stopping themselves from creating something so dangerous: what they did is not that hard, and it is not a mind-blowing advance on what other people are doing. At most, we are one year away from the same technology being developed and actively used by bad actors.
They're not releasing it yet.
They will, in some form or other.
Besides, you're right: it's only a matter of time before someone else does the same and releases it. On the one hand, there's a lot of fun to be had. Upload your favorite clip and dub everyone in Darth Vader's voice. Could be a hoot. Unfortunately, the possibilities for mischief are literally boundless.
That won't stop 'em from releasing it though.
I don't see the point in powerful AI. Powerful enough that you can get it to clean your house, do the hoovering, washing and dinner, but nothing more. Keep them basic. I'm currently watching Humans (which I'm struggling with due to Colin Morgan's piss-poor acting). Because they are human in form, people refer to them as "slaves", so why don't we then consider washing machines, dishwashers, and the robots that build cars as slaves too? If you gave them general intelligence, it would make the point of them pointless. The point being that you can use them as a tool that can work 24/7 (in between charging), won't require holidays, won't require leave, etc. If you give them general intelligence, then we'll be back to square one: a whole bunch of bots that want rights and won't want to do the above. The whole point of making them is so you can put them in dangerous jobs; making them "powerful" in a general-intelligence sense would be pointless.