"the company believes its technology is beneficial despite risks"
They don't say to whom, so I'll clarify.
It's beneficial to the execs and investors.
It's just risky to the rest of us.
Clearview AI is reportedly expanding its facial-recognition services beyond law enforcement to include private industries, such as banking and education, amid mounting pressure from regulators, Big Tech, and privacy campaigners. The New York-based startup's gigantic database contains more than 20 billion photos scraped from …
"The potential of facial recognition technology to make our communities safer and commerce secure is just beginning to be realized" - Yes, and do you all remember how "digital technologies" made the same promises at the start of the century. And yet, here we are, some twenty years later, with consequentially far higher and ever increasing levels of financial/identity fraud and scamming on-line! Where software patches and updates are rolled out on a near daily basis!
No, Clearview (et al), I think that you, and others like you, simply want to protect businesses, their data sets, and their profitability. I don't believe this offers any real benefit to customers, but you will no doubt pull off the same illusory trick, so often used before, to fool everyone into letting you harvest all that lovely personal data - only then to become evil gatekeepers of our individual daily drudgery.
"Italian regulators have fined the biz millions of dollars and Canadian watchdogs have banned its public agencies from contracting with the company."
"The UK's Information Commissioner's Office issued a £7.5 million ($9.43 million) fine for violating the country's data privacy laws"
"the company believes its technology is beneficial despite risks of misidentification or issues of data privacy and security. "Facial recognition can be used to help prevent identity theft and fraud."
So breaking the law is OK if it supports law enforcement?
Fines are just the cost of doing business, and that's because the size of the fine does not increase exponentially with repeat offences.
I would favor a system where the first time you are fined for a given problem, you pay the standard amount. If you are fined again for the same thing (for a relative value of same), the fine is automatically doubled, and so on and so forth.
With that system, the cost of doing business would soon become prohibitive, and slimy gits like Clearview's boss would just have to bow before authority.
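A minimal sketch of how such a doubling schedule compounds, using the ICO's £7.5m fine as a purely illustrative base amount (the function name and numbers are mine, not anything a regulator has proposed):

```python
def repeat_offence_fine(base_fine: float, offence_number: int) -> float:
    """Fine for the nth occurrence of the same violation: it doubles each time."""
    return base_fine * 2 ** (offence_number - 1)

# Illustrative only: a £7.5m base fine repeated five times.
for n in range(1, 6):
    print(f"Offence {n}: £{repeat_offence_fine(7.5e6, n) / 1e6:.1f}m")
# Offence 1: £7.5m, Offence 2: £15.0m, ..., Offence 5: £120.0m
```

After a handful of repeats the penalty dwarfs any plausible revenue from the offending product, which is exactly the point.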
Clearview promised to stop giving or selling access to its database system to most private companies and organizations across the US. Public agencies and law enforcement, however, can still use its large database. Private-sector businesses, by contrast, can only run the company's facial-recognition software against data they supply themselves; ie, they have to provide their own database of photos. Clearview is also not allowed to use that data to add to its own database.
I don’t believe a word of that.
I hate censorship, but these guys are building systems to censor ordinary citizens without citizens being able to challenge them.
"Photographs, illustrations and other images will generally be protected by copyright as artistic works. This means that a user will usually need the permission of the copyright owner(s) if they want to perform certain acts, such as copying the image or sharing it on the internet."
Source: UK Gov Copyright
"Personality Rights. No image or representation of your appearance or voice can be used to promote an organization, product or cause without your permission. "
Source: US Copyright
So basically Clearview are absolute scum and should not be tolerated. Sadly, we know that a ton of Government PHBs will be salivating at the prospect of all that cash in exchange for selling our images to that piece of shit company that kids itself and its shareholders that it's somehow progressing humanity and security.
Analysis After re-establishing itself in the datacenter over the past few years, AMD is now hoping to become a big player in the AI compute space with an expanded portfolio of chips that cover everything from the edge to the cloud.
As executives laid out during AMD's Financial Analyst Day 2022 event last week, the resurgent chip designer believes it has the right silicon and software coming into place to pursue the wider AI space.
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.
The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.
This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.
Qualcomm knows that if it wants developers to build and optimize AI applications across its portfolio of silicon, the Snapdragon giant needs to make the experience simpler and, ideally, better than what its rivals have been cooking up in the software stack department.
That's why on Wednesday the fabless chip designer introduced what it's calling the Qualcomm AI Stack, which aims to, among other things, let developers take AI models they've developed for one device type, let's say smartphones, and easily adapt them for another, like PCs. This stack is only for devices powered by Qualcomm's system-on-chips, be they in laptops, cellphones, car entertainment, or something else.
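The article doesn't show the Qualcomm AI Stack's actual APIs, so here is only a generic sketch of the "build once, retarget" idea it describes, using ONNX as a stand-in portable format; the toy model and file name are made up, and Qualcomm's real toolchain will differ:

```python
import torch
import torch.nn as nn

# A toy network standing in for a model developed for one device type,
# say a smartphone.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Export once to a portable format. A vendor stack like the one Qualcomm
# describes would then compile the same artifact for different SoC
# targets: phones, PCs, car entertainment systems, and so on.
example_input = torch.randn(1, 128)
torch.onnx.export(model, example_input, "toy_model.onnx")
```

The pitch, in other words, is that per-target optimization moves into the vendor's stack rather than being redone by hand for each device.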
While Qualcomm is best known for its mobile Arm-based Snapdragon chips that power many Android phones, the chip house is hoping to grow into other markets, such as personal computers, the Internet of Things, and automotive. This expansion means Qualcomm is competing with the likes of Apple, Intel, Nvidia, AMD, and others on a much larger battlefield.
Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.
Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras, send notifications to those at risk, notify police, lock down buildings, and perform other security tasks.
In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.
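Kogniz hasn't published its internals, so the following is only a sketch of the detect-then-respond loop the company describes; every name, label, and threshold here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # eg "gun", "person_running", "window_entry"
    confidence: float  # model's confidence in [0, 1]

def notify_police(det: Detection) -> None: ...    # placeholder hooks: a real
def lock_down_building() -> None: ...             # deployment would integrate
def flag_for_review(det: Detection) -> None: ...  # with alarm and door systems

def handle_frame(detections: list[Detection], threshold: float = 0.9) -> None:
    """Hypothetical response logic for one camera frame."""
    for det in detections:
        if det.label == "gun" and det.confidence >= threshold:
            notify_police(det)
            lock_down_building()
        elif det.label in ("person_running", "window_entry"):
            # Unusual behaviour gets flagged for a human to review.
            flag_for_review(det)
```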
In Brief No, AI chatbots are not sentient.
As soon as the story about a Google engineer who blew the whistle on what he claimed was a sentient language model went viral, multiple publications stepped in to say he's wrong.
The debate over whether the company's LaMDA chatbot is conscious or has a soul isn't a very good one, because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words and which ones are more likely to appear next to each other.
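That "which words are more likely to appear next to each other" point is just next-token probability. A toy bigram counter, nowhere near LaMDA's scale but chasing the same statistical objective, shows what is actually being learned (the corpus is obviously made up):

```python
from collections import Counter, defaultdict

# Tiny stand-in for the internet text a model like LaMDA is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# P(next word | "the"): word statistics, not understanding or sentience.
total = sum(follows["the"].values())
for word, count in follows["the"].most_common():
    print(f"P({word!r} | 'the') = {count / total:.2f}")
```

Nothing in that process requires, or produces, an inner life; scaling the counts up to billions of parameters changes the fluency, not the kind of thing being computed.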
In brief US hardware startup Cerebras claims to have trained the largest AI model on a single device, powered by its Wafer Scale Engine 2, the world's largest chip, which is the size of a plate.
"Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."
The CS-2 packs a whopping 850,000 cores and has 40GB of on-chip memory capable of reaching 20PB/sec of memory bandwidth. The specs of other AI accelerators and GPUs pale in comparison, meaning machine-learning engineers have to split the training of huge AI models with billions of parameters across many more servers.
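A quick back-of-the-envelope check on why 20 billion parameters lines up with that memory figure, assuming half-precision (fp16) weights at two bytes each; the precision is my assumption, not a figure from Cerebras:

```python
params = 20e9        # 20 billion parameters
bytes_per_param = 2  # fp16, assumed
weight_bytes = params * bytes_per_param
print(f"{weight_bytes / 1e9:.0f} GB")  # 40 GB: the CS-2's on-chip memory
```

On that rough math, the weights alone fill the 40GB, which is why conventional accelerators with far less memory force model-parallel training across many servers.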
The venture capital arm of Samsung has cut a check to help Israeli inference chip designer NeuReality bring its silicon dreams a step closer to reality.
NeuReality announced Monday it has raised an undisclosed amount of funding from Samsung Ventures, adding to the $8 million in seed funding it secured last year to help it get started.
As The Next Platform wrote in 2021, NeuReality is hoping to stand out with an ambitious system-on-chip design that uses what the upstart refers to as a hardware-based "AI hypervisor."
Zscaler is growing the machine-learning capabilities of its zero-trust platform and expanding it into the public cloud and network edge, CEO Jay Chaudhry told devotees at a conference in Las Vegas today.
Along with the AI advancements, Zscaler at its Zenith 2022 show in Sin City also announced greater integration of its technologies with Amazon Web Services, and a security management offering designed to enable infosec teams and developers to better detect risks in cloud-native applications.
The biz is also putting a focus on the Internet of Things (IoT) and operational technology (OT) control systems as it addresses the security side of the network edge. Zscaler, for those not aware, makes products that securely connect devices, networks, and backend systems, and provides the monitoring, controls, and cloud services an organization might need to manage all that.
In the latest episode of Black Mirror, a vast megacorp sells AI software that learns to mimic the voice of a deceased woman whose husband sits weeping over a smart speaker, listening to her dulcet tones.
Only joking – it's Amazon, and this is real life. The experimental feature of the company's virtual assistant, Alexa, was announced at an Amazon conference in Las Vegas on Wednesday.
Rohit Prasad, head scientist for Alexa AI, described the tech as a means to build trust between human and machine, enabling Alexa to "make the memories last" when "so many of us have lost someone we love" during the pandemic.
Opinion The Turing test is about us, not the bots, and it has failed.
Fans of the slow-burn mainstream media U-turn had a treat last week.
On Saturday, the news broke that Blake Lemoine, a Google engineer charged with monitoring a chatbot called LaMDA for nastiness, had been put on paid leave for revealing confidential information.