
So Arasaka should have been called Amazon all along. This is Relic 1.0.
In the latest episode of Black Mirror, a vast megacorp sells AI software that learns to mimic the voice of a deceased woman whose husband sits weeping over a smart speaker, listening to her dulcet tones. Only joking – it's Amazon, and this is real life. The experimental feature of the company's virtual assistant, Alexa, was …
"AI software that learns to mimic the voice of a deceased woman whose husband sits weeping over a smart speaker, listening to her dulcet tones."
123 years ago Arthur Conan Doyle wrote The Japanned Box (Strand Magazine, January 1899) in which a widower regularly plays a phonograph recording of his deceased wife's voice. The difference, though, is that it was her real voice.
Wrong in this case. Most speech systems do use recordings of human speech, but usually it's hours of painstakingly recorded samples, dissected by manually written analysis software and then reassembled with rule-based algorithms. The new approach automates much of that process and, as the article says, means the person whose voice you want to copy doesn't need to sit in a recording booth reading a prewritten script under perfect conditions: the software can work from a shorter recording never intended for the purpose. It's a different method of achieving the same goal, with real consequences for the resulting quality and for ease of use.
Compare this to The 6th Day (basically a terrible movie), in which Arnie is encouraged to purchase a cloned dog to shield his child from the reality of death. While the film totally failed to explore the idea with any depth, this is a real issue that deserves serious public discussion. As we become ever more able to shield people from the painful things in life, we have to seriously investigate the effect on long-term mental health and cultural values.
I fear a world where the rose seemingly has no thorns.
There's so much potential for manipulation here, and not just the obvious political shit.
Running the "Grandma scam" (calling senior citizens and pretending to be their grandchild who's in trouble and needs money) using the person's actual voice.
Luring a child into a car by playing back a parent's voice on a fake speakerphone call.
Cops using a suspect's voice to place a fake 911 call to create a pretext for an illegal search.
Blackmail.
Manipulating people with cognitive issues into giving up banking info.
Harassing and bullying people by using their loved ones' voices.
All of the above will happen. This timeline sucks.
You pat yourself on the back for having FINALLY streamlined Grandma out of your home budget: first outsourced to a 'Golden Sunset Prospect', aka 'quick retirement' home, then picked up as an Xmas special, so now Granny-box sits happily by Alexa.
A quality Indian call center / scam operation talks colloquial English (or American) at their target. They work at it, they're really good, but ultimately they lack the immediacy of context that reveals their true identity. (This, I believe, is one version of a CAPTCHA.)
Grandma's voice may be comforting, even something worth treasuring, but to make it really useful the voice needs to be attached to an ersatz consciousness that can interact with and adapt to contemporary life.
I could use an accent translator that would let me understand the assorted flavors of voice at help centers. If they could be turned into American Midwestern, they'd be one hell of a lot more useful. But as many have warned: fake voices could be dangerous in many ways. The useful is outweighed by the hazardous.
Amazon at its re:Mars conference in Las Vegas on Thursday announced a preview of an automated programming assistance tool called CodeWhisperer.
Available to those who have obtained an invitation through the AWS IDE Toolkit, a plugin for code editors to assist with writing AWS applications, CodeWhisperer is Amazon's answer to GitHub Copilot, an AI (machine learning-based) code generation extension that entered general availability earlier this week.
In a blog post, Jeff Barr, chief evangelist for AWS, said the goal of CodeWhisperer is to make software developers more productive.
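Neither announcement reproduces the tool's output here, so the following is a hypothetical sketch of the comment-driven workflow CodeWhisperer and Copilot share: the developer writes a comment, the assistant suggests the code beneath it. The boto3 call is a real AWS API; the "suggestion" itself is invented for illustration.

```python
# Hypothetical illustration only: the completion below is invented,
# not actual CodeWhisperer output. The developer types the comment,
# the assistant proposes the function that follows.

# upload a local file to an S3 bucket
import boto3

def upload_file(local_path: str, bucket: str, key: str) -> None:
    # Suggested completion: boto3's upload_file handles multipart
    # uploads for large files automatically.
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)
```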
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.
The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.
This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.
Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.
Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras and send notifications to those at risk, notifying police, locking down buildings, and performing other security tasks.
In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.
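The article doesn't describe Kogniz's internals, but the detect-then-escalate loop it sketches looks roughly like this; every name below is a hypothetical stand-in, not the company's API.

```python
# Rough sketch of the detect-then-escalate loop described above.
# Kogniz's real platform and APIs aren't public in this article,
# so every name here is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "gun", "person_climbing_window"
    confidence: float  # model score in [0, 1]
    camera_id: str

def notify_police(camera_id: str) -> None:
    print(f"alerting police: camera {camera_id}")

def lock_down(camera_id: str) -> None:
    print(f"locking doors near camera {camera_id}")

def handle(detection: Detection) -> None:
    # Only escalate high-confidence firearm sightings; lower-confidence
    # events might instead be queued for human review.
    if detection.label == "gun" and detection.confidence >= 0.9:
        notify_police(detection.camera_id)
        lock_down(detection.camera_id)

handle(Detection(label="gun", confidence=0.95, camera_id="lobby-2"))
```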
In Brief No, AI chatbots are not sentient.
No sooner had the story of a Google engineer, who blew the whistle on what he claimed was a sentient language model, gone viral than multiple publications stepped in to say he's wrong.
The debate over whether the company's LaMDA chatbot is conscious, or has a soul, isn't a very good one, simply because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.
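That last sentence is the whole training objective in a nutshell: predict the next token. A toy sketch of the idea, a simple bigram counter, which is nothing like LaMDA's scale or architecture but rests on the same statistical foundation:

```python
# Toy illustration of the statistical idea behind models like LaMDA:
# counting which words tend to follow which. Real LLMs learn far richer
# contextual representations across billions of parameters, but the
# objective is still "predict the next token".
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows another.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    # Turn raw counts into next-word probabilities for a given word.
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```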
In brief US hardware startup Cerebras claims to have trained the largest AI model ever run on a single device, powered by its Wafer Scale Engine 2, the world's largest chip, the size of a plate.
"Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."
The CS-2 packs a whopping 850,000 cores, and has 40GB of on-chip memory capable of reaching 20 PB/sec memory bandwidth. Other AI accelerators and GPUs pale in comparison, which is why machine-learning engineers normally have to split the training of huge, billion-parameter models across many servers.
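A back-of-envelope calculation, assuming two-byte fp16 weights (the article doesn't state the precision), shows why models this size strain single devices; real training needs further room for gradients, optimizer state, and activations on top of this.

```python
# Why big models collide with device memory: the weights alone, at
# fp16 (2 bytes per parameter), already consume tens of gigabytes.
params = 20e9        # 20 billion parameters, per Cerebras's claim
bytes_per_param = 2  # fp16, an assumed precision for this sketch
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB of weights")  # 40 GB - the CS-2's on-chip memory
```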
Opinion The Turing test is about us, not the bots, and it has failed.
Fans of the slow-burn mainstream media U-turn had a treat last week.
On Saturday, the news broke that Blake Lemoine, a Google engineer charged with monitoring a chatbot called LaMDA for nastiness, had been put on paid leave for revealing confidential information.
The US FBI issued a warning on Tuesday that it has received an increasing number of complaints relating to the use of deepfake videos during interviews for tech jobs that involve access to sensitive systems and information.
The deepfake videos include imagery or recordings convincingly manipulated to misrepresent someone as the "applicant" for jobs that can be performed remotely. The Bureau reports the scam has been tried for developer roles and "database, and software-related job functions". Some of the targeted jobs would grant access to customers' personal information, financial data, large databases, and/or proprietary information.
"In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually," said the FBI in a public service announcement.
Jeff Bezos once believed that Amazon's low-skill worker churn was a good thing as a long-term workforce would mean a "march to mediocrity." He may have to eat his words if an internal memo is accurate.
First reported by Recode, the company's 2021 research rather bluntly says: "If we continue business as usual, Amazon will deplete the available labor supply in the US network by 2024."
Some locations will be hit much earlier, with the Phoenix metro area in Arizona expected to exhaust its available labor pool by the end of 2021. The Inland Empire region of California could reach breaking point by the close of this year, according to the research.
Analysis After re-establishing itself in the datacenter over the past few years, AMD is now hoping to become a big player in the AI compute space with an expanded portfolio of chips that cover everything from the edge to the cloud.
It's quite an ambitious goal, given Nvidia's dominance in the space with its GPUs and the CUDA programming model, plus the increasing competition from Intel and several other companies.
But as executives laid out during AMD's Financial Analyst Day 2022 event last week, the resurgent chip designer believes it has the right silicon and software coming into place to pursue the wider AI space.
Qualcomm knows that if it wants developers to build and optimize AI applications across its portfolio of silicon, the Snapdragon giant needs to make the experience simpler and, ideally, better than what its rivals have been cooking up in the software stack department.
That's why on Wednesday the fabless chip designer introduced what it's calling the Qualcomm AI Stack, which aims to, among other things, let developers take AI models they've developed for one device type, let's say smartphones, and easily adapt them for another, like PCs. This stack is only for devices powered by Qualcomm's system-on-chips, be they in laptops, cellphones, car entertainment, or something else.
While Qualcomm is best known for its mobile Arm-based Snapdragon chips that power many Android phones, the chip house is hoping to grow into other markets, such as personal computers, the Internet of Things, and automotive. This expansion means Qualcomm is competing with the likes of Apple, Intel, Nvidia, AMD, and others, on a much larger battlefield.
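The article doesn't document Qualcomm's actual toolchain, so as a stand-in, here is the generic write-once, retarget-everywhere pattern the AI Stack gestures at, sketched with PyTorch and ONNX: export a trained model once to a neutral format, then let each device's own compiler take it from there.

```python
# Generic write-once, deploy-anywhere sketch using PyTorch + ONNX as
# stand-ins; Qualcomm's own toolchain and APIs are not shown in the
# article, so nothing here is specific to the Qualcomm AI Stack.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export once to a device-neutral interchange format...
torch.onnx.export(model, torch.randn(1, 16), "model.onnx")

# ...then each target (phone SoC, laptop, car head unit) runs the file
# through its own compiler/runtime rather than retraining per device.
```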