More inefficient -> less inefficient?
Semantics
Neuromorphic chips have been endorsed in research showing that they are much more energy efficient at operating large deep learning networks than non-neuromorphic hardware. This may become important as AI adoption increases. The study was carried out by the Institute of Theoretical Computer Science at the Graz University of …
Considering how vastly different a computer is from a neural network, I'd expect far bigger gains to be possible.
I assume that the main reason AI researchers aren't using neuromorphic chips everywhere is that such chips probably aren't very flexible when you want to change your model of how a neuron works, or how it connects to other neurons.
I expect that once those points are reasonably clear, running models on neuromorphic chips should become far more common. Especially if you want to run larger and larger models.
That's the scariest sentence I've read this year, in a year of traumatising sentences. Neuromorphic is a scarier word than thermobaric.
I'm sure you can do it, but dinnae dae it. Just dinnae. Put the development on the back burner for a decade or three, maybe rip up the master plan.
My laptop is fine, my phone is fine, my connectivity is fine. Stop the bus I want to get off.
Stop the bus I want to get off....... Danny 2
Methinks the chance of that and those other requests being heeded and granted are as likely as if you asked for night not to follow day because of the fact, in the shade and shadows of darkness, virtual virulent preparations for tomorrow allows for all manner of remote changes to practical matters that were thought to be of vital significant importance yesterday to be presented again dressed up in another novel solution for further development researching future daily event production for subsequent generations/iteration, and such is all part and parcel of a Grand Suite of Greater AImighty IntelAIgent Master Plans ......... akin to a veritable Magical Mystery Tour of Helter Skelter Rides.
You'll love it. What's not to like?
If you’d like some further evidence of such as be Advanced Astute Agile Novel NEUKlearer HyperRadioProACTive IT Development[s], the following is pretty indicative of the current state of future progress, and to whom/which type of parties it may very well be of abiding particular peculiar especial interest .......
amanfromMars [2205250943] ...... shares on https://www.nationaldefensemagazine.org/articles/2022/5/25/navy-eyes-next-gen-tech-to-transform-shipyards
The digital twins technology/methodology, and I would submit it is THE future leading protocol to master with/for remote command and absolute control, and it can be used for any environment, is essentially a creative virtual reality renderer which provides pictures and blueprints of products and services which are being specifically designed by Advanced IntelAIgent Sources and Forces to engage and employ humanity to follow and copy/construct and administer.
However, to imagine and expect it exclusive and only available to Uncle Sam centric forces and home grown sources is not a realistic possibility whenever there be others aware of the programs utility/facilities/strengths with no apparent underlying weaknesses unless or until such be thought to be introduced and manufactured.
Do you know of any other similar development able to present and demonstrate itself as a viable virulent virtualised, easily applied, mentored and monitored master plan ..... for that is what it professes to be capable of supplying and providing for ?
I Kid U Not.
Google has placed one of its software engineers on paid administrative leave for violating the company's confidentiality policies.
Since 2021, Blake Lemoine, 41, had been tasked with talking to LaMDA, or Language Model for Dialogue Applications, as part of his job on Google's Responsible AI team, checking whether the bot used discriminatory or hate speech.
LaMDA is "built by fine-tuning a family of Transformer-based neural language models specialized for dialog, with up to 137 billion model parameters, and teaching the models to leverage external knowledge sources," according to Google.
IBM's self-sailing Mayflower Autonomous Ship (MAS) has finally crossed the Atlantic, albeit more than a year and a half later than planned. Still, congratulations to the team.
That said, MAS missed its target. Instead of arriving in Massachusetts – the US state home to Plymouth Rock where the 17th-century Mayflower landed – the latest in a long list of technical difficulties forced MAS to limp to Halifax in Nova Scotia, Canada. The 2,700-mile (4,400km) journey from Plymouth, UK, came to an end on Sunday.
The 50ft (15m) trimaran is powered by solar energy, with diesel backup, and is said to be able to reach a speed of 10 knots (18.5km/h or 11.5mph) using electric motors. This computer-controlled ship is steered by software that takes data in real time from six cameras and 50 sensors. The software was trained using IBM's PowerAI Vision technology and Power servers, we're told.
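IBM has not published the ship's control code, so the following Python sketch is only a guess at the general shape of such a system: a loop that runs a vision model over camera frames, merges in other sensor readings, and picks a heading. Every function, class, and threshold below is a hypothetical stand-in.

    # Hypothetical perception-to-steering loop, loosely in the shape described for MAS.
    import time
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # e.g. "vessel", "buoy", "debris"
        bearing_deg: float  # relative bearing, positive to starboard
        range_m: float

    def read_cameras():
        """Stand-in for grabbing frames from the six onboard cameras."""
        return []

    def run_vision_model(frames):
        """Stand-in for the trained vision model (PowerAI Vision in IBM's case)."""
        return [Detection("vessel", bearing_deg=10.0, range_m=800.0)]

    def read_sensors():
        """Stand-in for the ~50 other sensors (GPS, wind, depth, IMU, and so on)."""
        return {"heading_deg": 270.0, "speed_kn": 8.0}

    def plan_heading(detections, state):
        """Trivial avoidance rule: steer 20 degrees away from anything within 500 m."""
        heading = state["heading_deg"]
        for d in detections:
            if d.range_m < 500.0:
                heading -= 20.0 if d.bearing_deg >= 0 else -20.0
        return heading % 360.0

    for _ in range(10):  # the real loop would run continuously
        detections = run_vision_model(read_cameras())
        state = read_sensors()
        new_heading = plan_heading(detections, state)
        # here the new heading would be handed to the autopilot / motor controllers
        time.sleep(1.0)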
In brief US hardware startup Cerebras claims to have trained the largest AI model yet on a single device: one powered by its Wafer Scale Engine 2, the world's largest chip, which is roughly the size of a plate.
"Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."
The CS-2 packs a whopping 850,000 cores and has 40GB of on-chip memory capable of reaching 20 PB/sec of memory bandwidth. The specs of other AI accelerators and GPUs pale in comparison, meaning machine-learning engineers typically have to spread huge AI models with billions of parameters across multiple servers.
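Some back-of-the-envelope arithmetic shows why. Under illustrative assumptions (fp16 weights and gradients, fp32 Adam optimizer state and master weights), a 20-billion-parameter model needs hundreds of gigabytes before activations are even counted, far more than any single GPU holds; the exact figures depend on the training setup.

    # Rough memory arithmetic for training a 20-billion-parameter model with Adam.
    # Illustrative assumptions only; real frameworks vary in what they keep resident.
    params = 20e9

    weights = params * 2   # fp16 weights, 2 bytes each
    grads   = params * 2   # fp16 gradients
    adam_m  = params * 4   # fp32 first moment
    adam_v  = params * 4   # fp32 second moment
    master  = params * 4   # fp32 master copy of the weights

    total_bytes = weights + grads + adam_m + adam_v + master
    print(f"~{total_bytes / 1e9:.0f} GB before activations")  # ~320 GB

    # No single 40-80 GB GPU can hold that, hence either splitting the model across
    # many servers or, as Cerebras proposes, using one much larger device.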
Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.
Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras, send notifications to those at risk, notify police, lock down buildings, and perform other security tasks.
In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.
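Kogniz has not published its API, so the sketch below only illustrates how a detection-to-alert pipeline of this general kind might be wired together; the detection function, confidence threshold, and response actions are all hypothetical stand-ins.

    # Hypothetical detection-to-alert pipeline; not Kogniz's actual code or API.
    CONFIDENCE_THRESHOLD = 0.85  # arbitrary; real deployments tune this to limit false alarms

    def detect_objects(frame):
        """Stand-in for a computer-vision model returning (label, confidence) pairs."""
        return [("person", 0.97), ("firearm", 0.91)]

    def notify_security(camera_id, confidence):
        print(f"ALERT camera {camera_id}: possible firearm ({confidence:.0%})")

    def lock_down_building(camera_id):
        print(f"Lockdown triggered near camera {camera_id}")

    def alert_police(camera_id):
        print(f"Police notified for camera {camera_id}")

    def handle_frame(frame, camera_id):
        for label, confidence in detect_objects(frame):
            if label == "firearm" and confidence >= CONFIDENCE_THRESHOLD:
                notify_security(camera_id, confidence)
                lock_down_building(camera_id)
                alert_police(camera_id)
                break

    handle_frame(frame=None, camera_id="lobby-cam-3")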
In the latest episode of Black Mirror, a vast megacorp sells AI software that learns to mimic the voice of a deceased woman whose husband sits weeping over a smart speaker, listening to her dulcet tones.
Only joking – it's Amazon, and this is real life. The experimental feature of the company's virtual assistant, Alexa, was announced at an Amazon conference in Las Vegas on Wednesday.
Rohit Prasad, head scientist for Alexa AI, described the tech as a means to build trust between human and machine, enabling Alexa to "make the memories last" when "so many of us have lost someone we love" during the pandemic.
The venture capital arm of Samsung has cut a check to help Israeli inference chip designer NeuReality bring its silicon dreams a step closer to reality.
NeuReality announced Monday it has raised an undisclosed amount of funding from Samsung Ventures, adding to the $8 million in seed funding it secured last year to help it get started.
As The Next Platform wrote in 2021, NeuReality is hoping to stand out with an ambitious system-on-chip design that uses what the upstart refers to as a hardware-based "AI hypervisor."
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.
The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.
This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.
In Brief No, AI chatbots are not sentient.
No sooner had the story about a Google engineer, who blew the whistle on what he claimed was a sentient language model, gone viral than multiple publications stepped in to say he was wrong.
The debate over whether the company's LaMDA chatbot is conscious or has a soul isn't a very good one, if only because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.
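That last point, learning which words are likely to follow which, can be illustrated with something far simpler than a Transformer: a bigram count model. The toy corpus below is made up, and real models such as LaMDA are vastly more capable, but the underlying next-word objective is the same in spirit.

    # Toy illustration of next-word statistics; not how LaMDA is actually built.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    def next_word_probs(word):
        counts = following[word]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}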
Opinion The Turing test is about us, not the bots, and it has failed.
Fans of the slow-burn mainstream media U-turn had a treat last week.
On Saturday, the news broke that Blake Lemoine, a Google engineer charged with monitoring a chatbot called LaMDA for nastiness, had been put on paid leave for revealing confidential information.
AI is killing the planet. Wait, no – it's going to save it. According to Hewlett Packard Enterprise VP of AI and HPC Evan Sparks and professor of machine learning Ameet Talwalkar from Carnegie Mellon University, it's not entirely clear just what AI might do for – or to – our home planet.
Speaking at the SixFive Summit this week, the duo discussed one of the more controversial challenges facing AI/ML: the technology's impact on the climate.
"What we've seen over the last few years is that really computationally demanding machine learning technology has become increasingly prominent in the industry," Sparks said. "This has resulted in increasing concerns about the associated rise in energy usage and correlated – not always cleanly – concerns about carbon emissions and carbon footprint of these workloads."