Computer-generated training images
Slight problem with self-driving cars that all head toward the spinning Nvidia logo on billboards
The saying "data is the new oil" was reportedly coined by British mathematician and marketing whiz Clive Humby in 2006. Humby's remark rings truer now than ever with the rise of deep learning. Data is the fuel powering modern AI models; without enough of it, the performance of these systems will sputter and fail. And like …
But they don't like "fake it until you make it" since Elizabeth Holmes and Theranos. Especially when you can potentially kill people or give them the wrong medical information because of it.
Still, they never learn, and Silicon Valley continues to allow this and invest in bollocks.
As we've seen with some AI in the lab: just because it works as expected there doesn't mean it will behave the same when live. Take the maze agents trained to pick up keys and use them to open chests (I'm no expert, I'm going off the Robert Miles videos). That objective looked fine in training. But put out into the wild, where there were now more keys than chests, the AI ended up just hoarding keys; its behaviour had changed from the test lab. It had decided chests were OK but it loved keys. With one chest left, it could see the keys in its own inventory and got stuck trying to pick them up.
Robert explains it better than I ever could.
https://youtu.be/zkbPdEHEyEI
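For what it's worth, the failure mode Miles describes can be sketched in a few lines. This is a hypothetical toy abstraction (made-up hand-written policies, not the real environment or any trained agent): a proxy objective that scores identically to the intended one in training-style levels, then falls apart when the key-to-chest ratio flips.

```python
# Toy sketch of goal misgeneralization: "grab keys first" matches
# "open chests" in training-like levels, but not in deployment.

def run_episode(n_keys, n_chests, policy, step_budget=10):
    """Each grab or open costs one step; true reward = chests opened."""
    keys_on_map, chests, inventory, opened = n_keys, n_chests, 0, 0
    for _ in range(step_budget):
        action = policy(keys_on_map, chests, inventory)
        if action == "grab_key" and keys_on_map > 0:
            keys_on_map -= 1
            inventory += 1
        elif action == "open_chest" and chests > 0 and inventory > 0:
            chests -= 1
            inventory -= 1
            opened += 1
        else:
            break
    return opened

def proxy_policy(keys_on_map, chests, inventory):
    # What the agent actually learned: keys are valuable in themselves.
    if keys_on_map > 0:
        return "grab_key"
    if chests > 0 and inventory > 0:
        return "open_chest"
    return "stop"

def intended_policy(keys_on_map, chests, inventory):
    # What we wanted: keys only matter as a means to open chests.
    if chests > 0 and inventory > 0:
        return "open_chest"
    if chests > 0 and keys_on_map > 0:
        return "grab_key"
    return "stop"

# Training-like levels (chests outnumber keys): the policies look identical.
print(run_episode(3, 9, proxy_policy), run_episode(3, 9, intended_policy))  # 3 3
# Deployment (keys outnumber chests): the proxy wastes its steps hoarding keys.
print(run_episode(9, 2, proxy_policy), run_episode(9, 2, intended_policy))  # 1 2
```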
"we can generate whatever distribution of ethnicities, ages, genders you want in your data, so we are not biased in any way"
The moment you specify a distribution up front, you implement a bias (whether or not you're smart enough to recognise that), because your specification is based on your prior expectation.
The reason for random sampling from a real population is that you can't have any prior expectation. Statistics 101.
> The reason for random sampling from a real population is that you can't have any prior expectation.
Quite true - but even then you can still have bias, because your "random" sampling protocol is biased (in ways you may be unaware of), or simply because the population you're sampling from has a highly complex, multi-modal distribution and your sample size is too small.
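A quick simulation of that last point, using a made-up bimodal population (the numbers are illustrative, not from any real dataset): with a 10 percent minority mode, perfectly random samples of 20 still miss that mode entirely about 12 percent of the time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bimodal "population": a 90 percent majority mode and a
# 10 percent minority mode with a different mean.
majority = rng.normal(loc=0.0, scale=1.0, size=90_000)
minority = rng.normal(loc=5.0, scale=1.0, size=10_000)
population = np.concatenate([majority, minority])

# How often does a small random sample miss the minority mode entirely?
misses = 0
trials = 1_000
for _ in range(trials):
    sample = rng.choice(population, size=20, replace=False)
    if not (sample > 3.0).any():          # nobody from the minority mode
        misses += 1

print(f"population mean: {population.mean():.2f}")
print(f"n=20 samples with zero minority members: {100 * misses / trials:.0f}%")
# Roughly 12 percent (0.9**20), even though the sampling itself is unbiased.
```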
Analysis After re-establishing itself in the datacenter over the past few years, AMD is now hoping to become a big player in the AI compute space with an expanded portfolio of chips that cover everything from the edge to the cloud.
It's quite an ambitious goal, given Nvidia's dominance in the space with its GPUs and the CUDA programming model, plus the increasing competition from Intel and several other companies.
But as executives laid out during AMD's Financial Analyst Day 2022 event last week, the resurgent chip designer believes it has the right silicon and software coming into place to pursue the wider AI space.
Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.
Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras and send alerts to those at risk, notifying police, locking down buildings, and performing other security tasks.
In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.
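Kogniz hasn't published how its model works. As a generic sketch of what a detect-then-notify loop of this kind looks like, here's an off-the-shelf detector wired to a hypothetical alerting hook; the model, confidence threshold, frame filename, and notify_security function are all stand-ins, not Kogniz's system.

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

def notify_security(label_id, score, box):
    # Hypothetical alerting hook; a real deployment would page guards,
    # lock doors, or call police, as described above.
    print(f"ALERT: class={label_id} conf={score:.2f} box={box}")

# Off-the-shelf COCO detector as a stand-in. COCO has no firearm class,
# so a production system would be fine-tuned on gun imagery; this only
# shows the shape of the pipeline.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = to_tensor(Image.open("camera_frame.jpg"))    # one CCTV frame (assumed file)
with torch.no_grad():
    result = model([frame])[0]

for label, score, box in zip(result["labels"], result["scores"], result["boxes"]):
    if score > 0.8:                                  # arbitrary confidence threshold
        notify_security(label.item(), score.item(), box.tolist())
```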
Qualcomm knows that if it wants developers to build and optimize AI applications across its portfolio of silicon, the Snapdragon giant needs to make the experience simpler and, ideally, better than what its rivals have been cooking up in the software stack department.
That's why on Wednesday the fabless chip designer introduced what it's calling the Qualcomm AI Stack, which aims to, among other things, let developers take AI models they've developed for one device type, let's say smartphones, and easily adapt them for another, like PCs. This stack is only for devices powered by Qualcomm's system-on-chips, be they in laptops, cellphones, car entertainment, or something else.
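Qualcomm's actual APIs aren't detailed here, so as a generic illustration of the train-once, retarget-per-device idea, here's the common interchange-format route: export a model once, then hand the same file to each target's own runtime or compiler. The network and filenames below are placeholders, not anything from Qualcomm's stack.

```python
import torch
import torch.nn as nn

# Stand-in network; in practice this is the model trained for, say, phones.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

# Export once to a device-neutral interchange format...
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["logits"])

# ...then feed the same model.onnx to each target's runtime or compiler
# (phone NPU, laptop, car infotainment) instead of redeveloping per device.
```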
While Qualcomm is best known for its mobile Arm-based Snapdragon chips that power many Android phones, the chip house is hoping to grow into other markets, such as personal computers, the Internet of Things, and automotive. This expansion means Qualcomm is competing with the likes of Apple, Intel, Nvidia, AMD, and others, on a much larger battlefield.
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.
The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.
This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.
A group of senators wants to make it illegal for data brokers to sell individuals' sensitive location and health data, including information about their medical treatment.
A bill filed this week by five senators, led by Senator Elizabeth Warren (D-MA), comes in anticipation of a Supreme Court ruling that could overturn Roe v. Wade, the 49-year-old decision legalizing access to abortion for women in the US.
The worry is that if the Supreme Court strikes down Roe v. Wade – as is anticipated following the leak in May of a majority draft ruling authored by Justice Samuel Alito – such sensitive data can be used against women.
In brief US hardware startup Cerebras claims to have trained the largest AI model yet on a single device: one powered by its Wafer Scale Engine 2, a chip the size of a plate and the world's largest.
"Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."
The CS-2 packs a whopping 850,000 cores and 40GB of on-chip memory capable of reaching 20PB/sec of memory bandwidth. The specs on other types of AI accelerators and GPUs pale in comparison, meaning machine-learning engineers would otherwise have to spread huge AI models with billions of parameters across many more servers.
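A rough sanity check on those numbers (my arithmetic, not Cerebras's): at 16 bits per parameter, the weights of a 20-billion-parameter model alone come to 40GB, exactly the CS-2's on-chip memory.

```python
# Back-of-envelope: why "20 billion parameters on one CS-2" lines up with
# the chip's 40GB of on-chip memory, assuming 16-bit weights.
params = 20e9
bytes_per_param = 2                        # fp16/bf16
print(params * bytes_per_param / 1e9)      # 40.0 GB: weights alone fill it

# Adam-style optimizer state (fp32 master weights plus two moment tensors)
# would multiply that footprint several-fold, so anything beyond the raw
# weights presumably has to live off-chip or be streamed in.
```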
China's government has outlined its vision for digital services, expected behavior standards at China's big tech companies, and how China will put data to work everywhere – with president Xi Jinping putting his imprimatur to some of the policies.
Xi's remarks were made in his role as director of China’s Central Comprehensively Deepening Reforms Commission, which met earlier this week. The subsequent communiqué states that at the meeting Xi called for "financial technology platform enterprises to return to their core business" and "support platform enterprises in playing a bigger role in serving the real economy and smoothing positive interplay between domestic and international economic flows."
The remarks outline an attempt to balance Big Tech's desire to create disruptive financial products that challenge monopolies, against efforts to ensure that only licensed and regulated entities offer financial services.
In Brief No, AI chatbots are not sentient.
No sooner had the story of a Google engineer who blew the whistle on what he claimed was a sentient language model gone viral than multiple publications stepped in to say he was wrong.
The debate over whether the company's LaMDA chatbot is conscious or has a soul isn't a very good one, simply because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.
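As a toy illustration of "learning which words are more likely to appear next to each other," here's a bigram counter over a made-up corpus. LaMDA's transformer is vastly more sophisticated, but this is the same statistical idea in miniature.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Next-word probabilities after "the", straight from co-occurrence counts.
counts = following["the"]
total = sum(counts.values())
for word, n in counts.most_common():
    print(f"P({word!r} | 'the') = {n / total:.2f}")
# P('cat' | 'the') = 0.50, P('mat' | 'the') = 0.25, P('fish' | 'the') = 0.25
```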
Zscaler is growing the machine-learning capabilities of its zero-trust platform and expanding it into the public cloud and network edge, CEO Jay Chaudhry told devotees at a conference in Las Vegas today.
Along with the AI advancements, Zscaler at its Zenith 2022 show in Sin City also announced greater integration of its technologies with Amazon Web Services, and a security management offering designed to enable infosec teams and developers to better detect risks in cloud-native applications.
In addition, the biz also is putting a focus on the Internet of Things (IoT) and operational technology (OT) control systems as it addresses the security side of the network edge. Zscaler, for those not aware, makes products that securely connect devices, networks, and backend systems together, and provides the monitoring, controls, and cloud services an organization might need to manage all that.
The venture capital arm of Samsung has cut a check to help Israeli inference chip designer NeuReality bring its silicon dreams a step closer to reality.
NeuReality announced Monday it has raised an undisclosed amount of funding from Samsung Ventures, adding to the $8 million in seed funding it secured last year to help it get started.
As The Next Platform wrote in 2021, NeuReality is hoping to stand out with an ambitious system-on-chip design that uses what the upstart refers to as a hardware-based "AI hypervisor."