Seems like a very roundabout and overly complicated approach to standardization.
Qualcomm knows that if it wants developers to build and optimize AI applications across its portfolio of silicon, the Snapdragon giant needs to make the experience simpler and, ideally, better than what its rivals have been cooking up in the software stack department. That's why on Wednesday the fabless chip designer …
The requirement for users to train an AI network is simply too much for normal users. The necessary data sets are not available to ordinary people, so their only option is to pirate the large databases needed to train their AI.
=> Thus AI is not a suitable technology for us.
=> you need customers who are more criminally inclined to actually purchase your AI technology
=> I know that didn't stop the tech vendors in previous technology waves, but we've already had enough of technologies that go against the law
=> once you invent AI that doesn't need to clone other people's databases to train the network, come back with the refined technology
=> but the current state of AI is simply impossible for ordinary people to use legally.
Analysis After re-establishing itself in the datacenter over the past few years, AMD is now hoping to become a big player in the AI compute space with an expanded portfolio of chips that cover everything from the edge to the cloud.
But as executives laid out during AMD's Financial Analyst Day 2022 event last week, the resurgent chip designer believes it has the right silicon and software coming into place to pursue the wider AI space.
Microsoft's GitHub on Tuesday released its Copilot AI programming assistance tool into the wild after a year-long free technical trial.
And now that GitHub Copilot is generally available, developers will have to start paying for it.
Or most of them will. Verified students and maintainers of popular open-source projects may continue using Copilot at no charge.
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.
The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.
This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.
In Brief No, AI chatbots are not sentient.
As soon as the story of a Google engineer who blew the whistle on what he claimed was a sentient language model went viral, multiple publications stepped in to say he was wrong.
The debate over whether the company's LaMDA chatbot is conscious, or has a soul, isn't a very good one, because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.
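That learned-co-occurrence idea can be illustrated with a toy next-word predictor. This is a minimal sketch, not LaMDA: real language models use billions of learned parameters rather than raw bigram counts, but the core notion of learning which words tend to follow which is the same. The corpus here is an invented example.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def most_likely_next(word):
    # Return the word observed most often immediately after `word`.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

A model like LaMDA does the same kind of thing at vastly greater scale and over much longer contexts, which is why its output reads fluently without implying anything about consciousness.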
Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.
Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras and send notifications to those at risk, notifying police, locking down buildings, and performing other security tasks.
In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.
In Brief US hardware startup Cerebras claims to have trained the largest AI model ever run on a single device, powered by its plate-sized Wafer Scale Engine 2, the world's largest chip.
"Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."
The CS-2 packs a whopping 850,000 cores, and has 40GB of on-chip memory capable of reaching 20 PB/sec memory bandwidth. The specs on other types of AI accelerators and GPUs pale in comparison, meaning machine learning engineers have to train huge AI models with billions of parameters across more servers.
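A quick back-of-the-envelope calculation (my own arithmetic, not a figure from Cerebras) shows why a 20-billion-parameter model fits the CS-2's on-chip memory: in 16-bit precision the weights alone occupy exactly 40GB. Training also needs gradients and optimizer state, which is where CSoft's streaming approach would have to do extra work, so treat this as a rough sizing sketch only.

```python
# Rough sizing: weight memory for a 20B-parameter model in fp16/bf16.
params = 20_000_000_000
bytes_per_param = 2  # 16-bit weights

weight_gb = params * bytes_per_param / 1e9
print(f"{weight_gb:.0f} GB of weights")  # 40 GB, matching the CS-2's on-chip SRAM
```

By contrast, a GPU with tens of gigabytes of HBM cannot hold such a model plus activations and optimizer state, which is why conventional setups shard training across many servers.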
Broadcom has made its first public comment in weeks about its plans for VMware, should the surprise $61 billion acquisition proceed as planned, and has prioritized retaining VMware's engineers to preserve the virtualization giant's innovation capabilities.
The outline of Broadcom's plans appeared in a Wednesday blog post by Broadcom Software president Tom Krause.
The venture capital arm of Samsung has cut a check to help Israeli inference chip designer NeuReality bring its silicon dreams a step closer to reality.
NeuReality announced Monday it has raised an undisclosed amount of funding from Samsung Ventures, adding to the $8 million in seed funding it secured last year to help it get started.
As The Next Platform wrote in 2021, NeuReality is hoping to stand out with an ambitious system-on-chip design that uses what the upstart refers to as a hardware-based "AI hypervisor."
Amazon at its re:Mars conference in Las Vegas on Thursday announced a preview of an automated programming assistance tool called CodeWhisperer.
Available to those who have obtained an invitation through the AWS IDE Toolkit, a plugin for code editors to assist with writing AWS applications, CodeWhisperer is Amazon's answer to GitHub Copilot, an AI (machine learning-based) code generation extension that entered general availability earlier this week.
In a blog post, Jeff Barr, chief evangelist for AWS, said the goal of CodeWhisperer is to make software developers more productive.
Three people accused of selling pirate software licenses worth more than $88 million have been charged with fraud.
The software in question is built and sold by US-based Avaya, which provides, among other things, a telephone system called IP Office to small and medium-sized businesses. To add phones and enable features such as voicemail, customers buy the necessary software licenses from an Avaya reseller or distributor. These licenses are generated by the vendor, and once installed, the features are activated.
In charges unsealed on Tuesday, it is alleged Brad Pearce, a 46-year-old long-time Avaya customer service worker, used his system administrator access to generate license keys worth tens of millions of dollars without permission. Each license could sell for anywhere from $100 to thousands of dollars.
Biting the hand that feeds IT © 1998–2022