Who? Fraudsters, conmen, ne'er-do-wells, phishers. Turns out if I put my mind to it the list is quite extensive.
Can't think of any lawful ones yet, though.
Artificially intelligent software can listen to someone's voice only a few times, and then speak just like them, like some kind of creepy cybernetic myna bird... according to a paper published by researchers from Baidu. This technology, when perfected, will be ideal for generating fake audio clips of people saying things they …
How long until this mysteriously starts happening to enemies of the US?
I wouldn't be a bit surprised if it started happening years ago. Just like I'm fairly certain that at least a few intelligence organizations knew about Meltdown and were using it long before it came to public light.
AI-generated fake voice and fake video get lots of press coverage, but I do not see much in the press about AI-generated handwriting. Spokesmen for Google and Facebook said much the same thing: no point - thanks to password re-use we already have the online banking passwords for half the western world.
I hunted down someone doing research into AI handwriting and after several pints, I got this snippet: "We do not make an effort to publicise our research because we are so well funded. We get a large number of small donations from people too lazy to check their bank statements."
almost all the use cases are creepy - the genie however is out of the bottle...
For same-gender triplets, twins etc. that ship sailed the day they were born. There's already someone who looks & sounds like them and knows all their "secret" financial info - DOB, Mum's maiden name etc.
"the genie however is out of the bottle"
celebrity impersonations already exist. I wonder if the tech could EVER compete...
There's that one Futurama episode when the prof's young clone built a voice impersonator using his father's voice - "Good news everyone! I'm a horse's butt" (episode 'The Route of All Evil' as far as I can tell)
And someone like Ferris Bueller could use this kind of tech to call in sick, and sound a lot more convincing.
All of the examples of Siri and Alexa that I've heard sound like robots (and vocaloids, too, unfortunately). Watson seems less so, but still 'robotish'. you can just "tell".
So I'd guess that impersonation tech may have the same kind of flaw for YEARS, maybe even DECADES, when "legit" tech can't even get it right. Genie out of the bottle? yeah, but he's a robot-sounding genie.
> "Your Honour must disregard
I received an email this evening from the local police warning of a spate of thefts of car number plates. The message goes on to say that criminals use these to escape speeding fines etc...
I bet this won't stop them insisting in court that the photo shows your registration number so you must be guilty.
The use-case mentioned about having an audiobook read with the same voice as a child's mother deserves to be filed under 'C' for 'Creepy'!
I can't really think of any use-cases which aren't either creepy or criminal. The criminal use-cases are fantastic though. How about I receive a voicemail or even a phone call from a friend asking me to transfer him some money? Video and audio evidence is going to be worthless.
We are going to have to introduce better technologies to prove someone is who they say they are, I suppose.
We are going to have to introduce better technologies to prove someone is who they say they are
Better technologies, like, you know, meet in person?
Another consequence of this voice cloning is that we will start to develop better authentication and verification methods. Not only for interaction directly with a system, but also for communication between meat sacks over systems. I sense the sweet smell of encryption all around*!
* also known as the control freak govemint's nightmare
I can't really think of any use-cases which aren't either creepy or criminal.
Sat navs. As well as choosing voice gender, it will become trivial (eventually) to choose the accent too.
Cortana/Siri/whoever. Replacing their accent with a favoured grandchild's will put granny at ease when using them (not today's granny, but tomorrow's).
Accent prejudice is a thing, at least according to The Economist [1], so the ability to change accent as well as language could become an important feature in AI adoption.
[1] https://www.economist.com/blogs/prospero/2015/01/johnson-accents
"How about a robot-sex-doll with the same voice as your wife?"
I don't know, I just associate SWIMBO's dulcet tones with various other types of demands: principally the making, without delay, of various caffeinated and alcoholic beverages... and being constantly told to put object x (shoes, gadgets, wires) away.
I read something recently on the 'tubes compiled from conversations with terminally-ill children, and what came out was that they wanted to have fun, they wanted to see their pets, and they wanted to have their parents read to them (obviously while they were in bed). That last really can't be delivered by a mother or father's voice reading an e-book.
This is a special situation, of course, but I think it makes the point. People have to be with people. Fake voices deliver... nothing.
Sign of a civilization in decline?
And, from a group (Reg readers) who purport to be technologically adept?
Here's a use case I put some work in to in 1990: Language learning. Consider how nice it would be if, when you're learning a language, you heard your own voice, as you hear yourself, speaking with a native accent in the new language.
Fellow reader, stop watching Hollywood post-apocalyptic zombie junk and channel your inner entrepreneur. You can come up with good uses for this tech.
"Consider how nice it would be if, when you're learning a language, you heard your own voice, as you hear yourself, speaking with a native accent in the new language."
I suspect what you'd hear is some horrible version of (for example) Franglais.
Yes, we can come up with some good uses for this tech.
But... the bad guys don't care about that. They only care about relieving us of our hard earned moolah by any means possible. It is up to us to protect ourselves from these pretty well inevitable threats to come.
Just like only a novice developer never includes any error handling in their code.
To err is human; to really foul things up needs a few million/billion computers.
If nothing in digital form can really be trusted, neither can anything in analog form, since a digital form can be used to produce an analog medium as well.
IOW, this could be the beginning of the DTA world (since we already know it's hard to trust a face-to-face conversation with someone we've never met before).
I doubt my childhood medical records exist in an easily discoverable way.
PS: I fell from the top bunk of a bed... Oh, darn it!
PPS: Mum is 80, and quite well thank you.
PPPS: I was 6 at the time, so things hadn't 'developed' in that department.
PPPPS: It wasn't my wrist.
Step 1: Find a video of your political opponent speaking.
Step 2: Change words to "Death to [INSERT DICTATOR HERE]".
Step 3: Lock them up for the good of public security / Alternatively blackmail them for money.
Step 4: Relax as the election ballot has only 1 name on it and you now have job security for life.
Haven't the banks decided that your voice is your password?
HSBC and Santander have this in place. If this software can include the biometric markers of the voice, which in principle are just the type of inflections and patterns already being replicated, then surely this authentication method is now broken.
They keep asking me if I want to enable it, I wonder if they'll ever get tired of trying?
Oh, and when someone asks me 'why' I don't want their useless insecure twaddle and I just say 'because that is my stated preference' I wish they wouldn't get all passive-aggressive/shirty and make out like I'm the one being rude! Cheeky feckers.
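On the voice-biometrics point: HSBC and Santander don't publish how their "voice prints" are computed, so this is only a toy sketch of the general idea (band-energy features compared by cosine similarity; every name and parameter here is invented for illustration). It also shows why a synthesiser that reproduces the spectral shape of a voice would match its print:

```python
import numpy as np

def voiceprint(signal, bands=16):
    """Toy 'voice print': total spectral magnitude in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([chunk.sum() for chunk in np.array_split(spectrum, bands)])

def similarity(a, b):
    """Cosine similarity between two prints (1.0 = identical spectral shape)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

t = np.linspace(0, 1, 8000)                    # one second at a nominal 8 kHz
genuine = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
clone = 0.9 * genuine                          # same spectral shape, quieter
other = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)

print(similarity(voiceprint(genuine), voiceprint(clone)))  # ~1.0: print matches
print(similarity(voiceprint(genuine), voiceprint(other)))  # lower: different voice
```

Real systems use far richer features (and liveness checks), but the same logic applies: anything computed from the audio can be matched by audio that is close enough to the original.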
“For example, a mom can easily configure an audiobook reader with her own voice to read bedtime stories for her kids when she is not available. "
I am not sure that this is good-natured; it's a step down a path that is neither healthy nor particularly ethical.
Abrogating the care and emotional development of your children to AI is not a good-natured purpose.
But what do you do with all the "bastards" (the UNlicensed children that will inevitably still issue)? As someone else has noted, it's not like you can forcibly sterilize everyone; too many risks (and no, you can't compare us to pets; the procedure would have to be reversible).
I guess what this points to is the need for a companion field of research: how to detect computer-generated voices (should we call them CGVs?). Of course it is easy for a human to distinguish the real from the artificial now. I don't know what the limit in quality of this particular line of research will be, but it seems that honest research would also publish how to detect it. A commercial implementation could deliberately watermark the audio, but that will not help against nefarious implementations.
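On the watermark idea: a cooperative synthesiser could mark its output so detectors can flag it. A minimal sketch of the principle - a faint near-ultrasonic carrier mixed in, detected by correlation; the sample rate, frequency and threshold here are all invented for illustration:

```python
import numpy as np

RATE = 44100        # assumed sample rate, Hz
MARK_FREQ = 19000   # near-ultrasonic carrier, inaudible to most adults
MARK_AMP = 0.002    # faint enough not to be heard

def embed_watermark(audio):
    """Mix a faint fixed-frequency tone into the audio."""
    n = np.arange(len(audio))
    return audio + MARK_AMP * np.sin(2 * np.pi * MARK_FREQ * n / RATE)

def detect_watermark(audio, threshold=MARK_AMP / 4):
    """Correlate against the known carrier; a strong response means 'marked'."""
    n = np.arange(len(audio))
    carrier = np.sin(2 * np.pi * MARK_FREQ * n / RATE)
    score = abs(np.dot(audio, carrier)) / len(audio)
    return score > threshold

# One second of fake 'speech' (a couple of low-frequency tones)
n = np.arange(RATE)
voice = 0.3 * np.sin(2 * np.pi * 120 * n / RATE) + 0.2 * np.sin(2 * np.pi * 240 * n / RATE)
print(detect_watermark(embed_watermark(voice)))  # True
print(detect_watermark(voice))                   # False
```

And that is exactly the weakness noted above: a nefarious implementation simply leaves the mark out, and a pirate can strip this naive version with a low-pass filter.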
I'm guessing that there will be algorithmic/forensic methods for detecting this kind of manipulation, even after it is good enough to pass the human ear test.
we can't upgrade our brains.
Speak for yourself. I'm on (roughly) v117.79.3 production release of mine. I have a couple of Beta releases that are testing various thought profiles and belief systems. Alpha releases don't tend to last very long before they are stripped apart for the bits that might be useful which are then added to the next Beta release.
Bosses of Tech companies reading SF as tech blueprints. Why? Their lack of empathy, social responsibility and ethics/morality makes them miss the point of the stories, which isn't to sell shiny or glorify tech for sake of it, but to make social and political comments. It's only in Hollywood (and toy franchises) that it's about the gadgets.
HHGTG was a parody, not a blueprint either!
Roj Blake: It's exactly as though Ensor were speaking.
Orac: Surely it is obvious even to the meanest intelligence that during my development I would naturally become endowed with aspects of my creator's personality.
Kerr Avon: The more endearing aspects by the sound of it.
Orac: Possibly. However similarities between myself and Ensor are entirely superficial. My mental capacity is infinitely greater.
And this was the human species: they had egos and thought they were unique.
They thought they would discover more about themselves if they built AI and robots. They explored humanness, building many variations of both technologies, slowly getting better; then a segment of humans worked out they could actually emulate existing humans, and finally replaced the species.
The excuse was that they were all turning into zombies anyway, staring at the screens of their phones, watches and tablets; their lives were so undermined by the greedy that any electrical/electronic product that produced data (and all devices did) was co-opted to serve other masters' incessant desires.
Yet the masters were so preoccupied that they did not notice they were being replaced themselves.
Surprising to humans would be the fact that, long after they had died out, micro-organisms would still exist on the Earth. They have existed for billions of years, almost everywhere on the planet and in the life that lived there, even travelling through space and back. They survived, mutated and adapted, all without brains or intelligence.
I seem to recall Adobe demonstrating what they referred to as 'Photoshop for voice' (jokingly, I thought) that basically did what this does, but better? I know they had no finished product at the time, but I'm surprised it's taken this long for anyone to do the same thing.
Looked it up and it is called Adobe VoCo. The cutting and pasting part of the demo seemed like what I'd expect cutting and pasting an audio sample to sound like. It wasn't natural. The way they generated voice to say what they wanted was a lot more impressive.
So those gits who use soundbites recorded down the phone in response to cold calls to fake up acceptance of slamming services can up their game a notch and now do really crim things?
I want this tech built into my phone so when I answer it I sound like someone famous, like Arnie or that woman from The Weakest Link. That way the crims will be synthesizing a synthesized original that comes nowhere near being me.
Bogus long distance carrier: "This is a recording of you requesting our service and accepting the terms of service"
Me: "Can't be. That sounds like Michael Palin and as you can clearly hear for yourself, I sound like Hylda Baker".
Bank: "Here is a voice recording of you authorising transfer of all your funds to the Bide-a-Wee Home for Indigent Hackers in Grand Cayman"
Me: "Not possible. That voice clearly sounds like Little Jimmy Osmond and as you can clearly hear, I sound like Scooby Doo".
Hardly anybody wanted that. And, after a long career in IT, I quit. It has been visible long enough: IT has turned into a serious business now. Not because it is billions heavy, but because it is seriously challenged by enemies - not mere criminals, enemies. We tried to cope by hardening OSes and running virus detectors, only to find that these may themselves be a primary vector for infecting systems. And then we found out that even the borders we thought existed around processes, which the theory relied on, weren't there. That is a catastrophe.
Do you see anything adequate to cope with it? I see a lot of whistling.
In my company we tried for years a conservative approach - deny access unless it is positively known to be secure. We had two layers of firewalls, always kept to the highest degree of reliability. Then things changed. EDP was now IT, and access was more important than security. So far this has had no visible impact on security, but the change in attitude is foreboding. There are new flanks to be attacked which have to be watched by the same flock of people, and there are no adequate tools to detect trespasses and attacks. We lose.
As soon as I started analysing voice spectrograms 15-20 years ago I realised it was only a matter of time before we'd have the capability to re-synthesise real voices. I'm not sure it really needs "artificial intelligence" - there's a lot of hype there. But with the right parameterisation, it should be possible. And as soon as I heard of banks using voice for security, I immediately thought that whatever parameterisation they're using to make a "voice print", someone else knowing the method could fake it. That's just security-by-obscurity. For any TV or radio presenter or celebrity with a huge corpus of high-quality recorded material, re-creating their voice to say anything you like should be very do-able.
The genie is out of the bottle; it will happen.
To be fair, what comes back out of a mobile phone is more a "re-synthesis" of the original voice, rather than a reproduction. Get over it!
I seem to recall both Wesley Crusher & then Data hijacking the Enterprise D, simply by emulating Picard's voice.
A trifle foolish of them not to have come up with improved security by the time Data did it - something like checking the physical location of the comm badge against where the spoken command was coming from.
Or indeed the irony that "There's nothing more useless than a lock with a voice print" turns out to be the key in Doctor Who - The Invasion of Time.
Biting the hand that feeds IT © 1998–2021