by unauditable unaccountable Algorithm
At least 100 NHS trusts in England are to start using machine-learning software to predict the number of patients expected to be admitted to Accident and Emergency departments each day. The tool, built by British startup Faculty, aims to help managers figure out how best to allocate staff and resources during predicted surges …
I think the more important point would be that unless the hospital has enough funding to actually provide more beds, and the staff to go with them, knowing how many people are going to turn up is irrelevant. Waiting lists and ambulance response times aren't at record levels because hospitals aren't quite good enough at predicting A&E arrivals.
It's complex stats, and it's predicting the near future, so it's easy to validate its effectiveness before letting it influence decisions.
Run it for a bit, and see how many people you could have helped but didn't (or if not, don't use it)
This strikes me as a rare good use of what we're calling AI this year.
Weather prediction models these days are astoundingly good compared to when I was a kid, and there are some very big computers making that happen.
It's not complex. It's already being done as attendances are already very predictable. Analysts are already including things like weather when looking at A&E activity. Yes it is a good thing to be doing, but it is not new and doesn't need money funnelling to corrupt, fraudulent companies.
We have years of data on A&E admissions, so why does it need funky AI software to analyse it? As you say, the peaks and troughs are already well understood.
This has all the hallmarks of smart salespeople bamboozling manglement with hype, making them think that this will somehow be so much better...
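For what it's worth, the "already predictable" claim is easy to demonstrate with nothing fancier than a weekday average. A minimal sketch in Python, using entirely made-up attendance figures (no real NHS data):

```python
# Seasonal-naive forecast: predict a day's A&E attendance as the average
# of the same weekday over the last few weeks. Figures are invented.
from statistics import mean

def seasonal_naive_forecast(daily_counts, weekday, weeks=4):
    """Average the counts for a given weekday (0=Mon) over recent weeks.

    daily_counts: list of (weekday, count) pairs in chronological order.
    """
    recent = [c for d, c in daily_counts if d == weekday][-weeks:]
    return mean(recent)

# Four weeks of synthetic data: quiet midweek, busier weekends.
history = [(d, 200 + (60 if d >= 5 else 0) + w)
           for w in range(4) for d in range(7)]

print(round(seasonal_naive_forecast(history, weekday=5)))  # Saturday forecast
```

Obviously a real analyst would also fold in weather, bank holidays and so on, but the point stands: the baseline is a few lines of arithmetic, not a machine-learning platform.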
I guess you can predict it up to a point:
Like for example there are more stabbings on Friday and Saturday night than at other times, and there are more stabbings after big football matches.
And yes, if more people call 111, a certain proportion of those might end up in hospital. Same with a forecasted heatwave or cold-snap.
But what about the next Grenfell, or the next Dunblane Massacre, or the next Ladbroke Grove rail crash, or the next Manchester Arena bombing?
Those sorts of things lead to a lot of people needing hospital treatment very urgently. And while they were all predictable, they weren't predicted by anyone who was in a position to do anything about it, and this thing won't be able to predict them because it won't have the relevant datapoints to do so, and they don't happen frequently enough to be able to identify any patterns.
... overprovision the room, beds and staff a bit, and in particular let the staff have some downtime, R&R, look after their own health and not feel like they're running off their tits for 14 hour shifts.
I'm absolutely convinced the cost will be about the same, or perhaps in the (very big) round, less, and the department will be able to cope with the peaks when they come just by having slack.
The obsession with JIT is pretty flaky in industry. It doesn't need to be the case in things like healthcare.
A tenner says their model breaks in a post-covid UK (provided we get to one).
We've got an AI model that's AMAZING at predicting hospital A&E admissions 2-3 weeks out, trained on data from the past couple of years!
2-3 weeks eh? Isn't that about the average period between a positive covid test and A&E admission?
I wonder what the strongest explanatory variable in their model is? Another tenner says it's positive covid tests.
Waste of money. Every A&E I've ever heard of, worked in, or visited does not have the capacity to cope.
Often in the management meetings at weekends, when there were no beds, critical incidents were called to free up money for more agency staff.
This is not a solution for now, but yes, certainly in the future, after more hospitals are built and there is real capacity, not just people waiting to go home sitting in a chair for hours, or waiting for an ambulance to take them home because they can't use a taxi.
It is inhuman some of the things I've seen hospital management do to patients.
Another utter shambles from NHS bosses at the national level, not individual trusts.
Trusts, there's another story...
Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.
Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras and send notifications to those at risk, notifying police, locking down buildings, and performing other security tasks.
In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.
The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.
This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.
In brief US hardware startup Cerebras claims to have trained the largest AI model ever run on a single device, powered by its Wafer Scale Engine 2, the world's largest chip, roughly the size of a plate.
"Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."
The CS-2 packs a whopping 850,000 cores, and has 40GB of on-chip memory capable of reaching 20 PB/sec memory bandwidth. The specs on other types of AI accelerators and GPUs pale in comparison, meaning machine learning engineers have to train huge AI models with billions of parameters across more servers.
In Brief No, AI chatbots are not sentient.
As soon as the story of a Google engineer who blew the whistle on what he claimed was a sentient language model went viral, multiple publications stepped in to say he was wrong.
The debate over whether the company's LaMDA chatbot is conscious or has a soul isn't a very good one, because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.
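That "which words appear next to each other" idea can be shown at toy scale with a bigram counter. Real models like LaMDA learn vastly richer statistics across billions of parameters, but the underlying next-word-likelihood principle is the same:

```python
# Toy bigram model: count which word most often follows each word in a
# tiny corpus, then predict the most likely next word.
from collections import Counter, defaultdict

corpus = "the model learns the relationships between the words".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    # Return the highest-count successor of `word`.
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))
```

Nothing in that mechanism, at any scale, requires consciousness; it is statistics over token sequences.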
In the latest episode of Black Mirror, a vast megacorp sells AI software that learns to mimic the voice of a deceased woman whose husband sits weeping over a smart speaker, listening to her dulcet tones.
Only joking – it's Amazon, and this is real life. The experimental feature of the company's virtual assistant, Alexa, was announced at an Amazon conference in Las Vegas on Wednesday.
Rohit Prasad, head scientist for Alexa AI, described the tech as a means to build trust between human and machine, enabling Alexa to "make the memories last" when "so many of us have lost someone we love" during the pandemic.
Opinion The Turing test is about us, not the bots, and it has failed.
Fans of the slow-burn mainstream media U-turn had a treat last week.
On Saturday, the news broke that Blake Lemoine, a Google engineer charged with monitoring a chatbot called LaMDA for nastiness, had been put on paid leave for revealing confidential information.
Google has placed one of its software engineers on paid administrative leave for violating the company's confidentiality policies.
Since 2021, Blake Lemoine, 41, had been tasked with talking to LaMDA, or Language Model for Dialogue Applications, as part of his job on Google's Responsible AI team, looking for whether the bot used discriminatory or hate speech.
LaMDA is "built by fine-tuning a family of Transformer-based neural language models specialized for dialog, with up to 137 billion model parameters, and teaching the models to leverage external knowledge sources," according to Google.
The UK's National Health Service (NHS) has committed to implementing electronic health records for all hospitals and community practices by 2025, backed by £2 billion (c. $2.4 billion) in funding.
The investment from one of the world's largest healthcare providers follows Oracle founder Larry Ellison's promise to create "unified national health records" in the US after the company paid $28.3 billion for Cerner, an American health software company also at the heart of many NHS record systems.
In the UK, health secretary Sajid Javid has promised £2 billion to digitize the NHS in England, including electronic health records in all NHS trusts (hospitals or other healthcare providers) by March 2025.
GPUs are a powerful tool for machine-learning workloads, though they’re not necessarily the right tool for every AI job, according to Michael Bronstein, Twitter’s head of graph learning research.
His team recently showed Graphcore’s AI hardware offered an “order of magnitude speedup when comparing a single IPU processor to an Nvidia A100 GPU,” in temporal graph network (TGN) models.
“The choice of hardware for implementing Graph ML models is a crucial, yet often overlooked problem,” reads a joint article penned by Bronstein with Emanuele Rossi, an ML researcher at Twitter, and Daniel Justus, a researcher at Graphcore.
Analysis After re-establishing itself in the datacenter over the past few years, AMD is now hoping to become a big player in the AI compute space with an expanded portfolio of chips that cover everything from the edge to the cloud.
But as executives laid out during AMD's Financial Analyst Day 2022 event last week, the resurgent chip designer believes it has the right silicon and software coming into place to pursue the wider AI space.