The beta test wasn't good
The text-to-speech was crap, and the script read like a concussed amfM1.
The rendering of the character graphic was pitiful - he looked like a kid's first attempt at Blender, with the weird stance and crazy skin colour.
Foreign adversaries are expected to use AI algorithms to create increasingly realistic deepfakes and sow disinformation as part of military and intelligence operations as the technology improves. Deepfakes describe a class of content generated by machine learning models capable of pasting someone's face onto another person's …
There has also been disturbing progress in your AI, considering where it was a few years ago. If it starts producing better posts than anything I'm capable of I will surrender to the machines and volunteer to be plugged into the Matrix. Hopefully I can be someone important, like an actor.
In general people are poor at evaluating the provenance and trustworthiness of information. That's true for pretty much everyone in the general case. Some people frequently make a significant effort at evaluating the quality of some of the information they're exposed to; that's about the best we've ever done with the push for "critical thinking".
Evaluating information quality has a large cognitive burden, and also carries opportunity costs – you can only think about so many things in a given interval. So our minds have to make snap judgements most of the time, and the cues people use (which vary, particularly with neurodivergence, but there are a lot of commonalities) can be discovered and instrumentalized.
Photographic and film evidence has always been unreliable, particularly when it shows something the audience wants to believe. Look at the Cottingley fairies, and how those photographs – which most people today would identify as obvious fakes – fooled intelligent but credulous observers such as Doyle.
I'm not much of a conspiracy type, but as soon as anyone mentions "enemy states" to me it automatically makes them suspect in my eyes. Because, alas, we're as likely, or more likely, to be duped by our own government than by some enemy, real or imaginary.
(It's just that in my lifetime I've been lied to so many times by officials and politicians that I now tend to think of most politicians as "shallowfakes". If I'm not going to follow a shallowfake, then a deepfake isn't going to make much of an impression.)
I'd rather have written: if someone wants to believe everything his government says, he will do so, regardless of the amount of proof presented to him.
Foreign adversaries ... enemy states ... governments around the world ... defense and intelligence agencies
If these words don't ring all the alarm bells, then I don't know what would.
More people buy into myths if fabricated "evidence" is supplied.
Classic examples are the doctored photos putting Lenin and Stalin side by side... or deleting Trotsky.
Plenty of others available; a cursory flick around faecesbook (particularly the right wing of it) turns up loads of "fake news".
Is a state which pioneers and conspires with corrupt media to spread dynamite disinformation, creating a phantom victim for outrageous unwarranted attack, a rogue enemy state and certifiable terrorist organisation threatening human civilisation and populations‽
Who/What/Where is revealed to be the real ACTive enemy of truth and mankind in this tale with its strings of guilty admissions ....... https://www.zerohedge.com/political/ex-top-intel-official-signed-hunter-biden-laptop-disinfo-letter-despite-knowing-it-had-be ...... and to what ultimate aim is such a nonsense employed and deployed?
Don't such rogue wannabe fascist states realise ACTive disinformation campaigns are nowadays, in these times and spaces of 0day vulnerability exploitation and expansion and SMARTR Trojans, catastrophically self-destructive in fields of AI interest and remote virtual engagement?
Not a good tribute, but a good example of why spotting deepfakes isn't dependent on their rendering quality. Yeah, there will be subtler red flags than unblinking eyes, impossible hair or hands straight out of an anxiety dream or night terror. You don't need them most of the time.
Really, metadata provides a better solution, and it's all off-the-shelf technology and a few firmware updates. Sign your videos, kids.
Pretty hard to fake that without a private-key compromise, and the actual source of a video can still re-sign and repost stuff in the event of one.
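A minimal sketch of that sign-at-source, verify-anywhere idea, using Ed25519 from the third-party `cryptography` package. The key handling and variable names here are illustrative assumptions (a real deployment would keep the private key in camera firmware or a secure element and publish the public key), not any vendor's actual scheme:

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Assumption: the capture device holds the private key; the public key is published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hash the video bytes and sign the digest at capture time;
# the signature ships alongside the file as metadata.
video_bytes = b"...raw video frames..."
digest = hashlib.sha256(video_bytes).digest()
signature = private_key.sign(digest)

# Anyone can verify against the published public key.
# verify() returns None on success and raises InvalidSignature on tampering.
public_key.verify(signature, digest)

# A deepfaked re-edit produces a different digest, so verification fails.
tampered_digest = hashlib.sha256(b"...edited frames...").digest()
try:
    public_key.verify(signature, tampered_digest)
    tampered_detected = False
except InvalidSignature:
    tampered_detected = True
```

And per the comment above, even after a key compromise the original source can rotate keys and re-sign the genuine footage, so the scheme degrades gracefully rather than failing permanently.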
But lots of nation states already have the capability to use existing, more manually intensive techniques (as used in the film industry).
So the real difference is speed / cost / fewer people needed. As "AI" improves it will allow smaller entities (not just larger nation states or well-resourced non-state groups) to produce more convincing fakes, and so potentially increase the level of disinformation swilling around.
You are correct, but deepfake tech is already hitting diminishing returns, so while the defects that are left are more subtle, they are harder and harder to "solve" technologically at this point.
Also, as I said elsewhere, the bigger issue deepfakes have is that they have to credibly portray the target doing things the target would credibly do, to get anyone more than the unthinking reactionaries to jump. That may still have enough impact to justify the attempts for some of these bad actors, but it's got a short fuse before the whole thing starts to fall apart, and after that there is likely to be blowback to factor in.
So this is just a new tool with most of the same limitations, one that may be deployed cheaper and rendered faster, but unless well planned and executed it won't be more effective, or even as effective, as what was possible before. And the more bad fakes are floated, the less people will trust them in general.