Contract lawyers are increasingly working under the thumb of facial-recognition software as they continue to work from home during the COVID-19 pandemic. The technology is hit-and-miss, judging from interviews with more than two dozen American attorneys conducted by the Washington Post. To make sure these contract lawyers, who …
"They still have people working at Amazon fulfilment centres because they do not have robot that can do some of the work yet."
Plus, people are mostly self-maintaining and self-repairing to some degree, so they are probably cheaper.
It is a sign of things to come that even lawyers are heavily surveilled, possibly more than the bezobots are. This is one of the real dangers of AI: that everyone will be watched and their behaviour analyzed 24/7.
"Your weekly social score comrade, your nose picking and arse scratching is reducing your weekly output by as much as 0.025%. Ten social score points deducted!"
... everyone will be watched and their behaviour analyzed 24/7...
This is seen as completely acceptable when applied to corporate workers, but what would we hear if it were applied to politicians? Do you think that Boris, Donald, Priti, and Kamala would be happy if they were monitored at work? I imagine we'd hear squealing 24/7 that it needs to be banned.
You just don't have to waste money on repairs when they break.
And broken ones haul themselves off to wherever it is that broken labour units get dumped, so you don't end up with a big pile of them rusting on a back lot, and shitty letters from the county about the unsightly mess.
The most important word in your post is "yet".
Bezos Tat Sellers Inc has forgotten that the more humans are displaced by robots, the less economic activity those same humans will be able to partake in. No money to buy tat from Bezos! Good. May the Tat Sellers go out of business right now, and I'll cheer their demise.
If you work at said Tat Sellers, then beware... you will be replaced sooner rather than later. Just don't forget that.
"between 0120am until 12pm"
What an odd shift pattern! Everywhere I've ever worked used multiples of 30 mins for shift patterns, most commonly 8 hrs or 7 hrs 30 mins. I know night shifts are often longer, but 10 hrs 40 mins seems strange, especially given Amazon's obsession with productivity and monitoring, and the known fact that night shifts are usually less productive than day shifts.
I find Amazon so frustrating. Sometimes I'm willing to pay double and wait a week longer for an item, and I can still only find it on Amazon. Why can't other retailers get their act together? Amazon wouldn't have gained this dominance if almost every other retailer hadn't tried to hide its delivery charges ("Free delivery over £xx" was the most common trick, followed by a 20-click process to find out how much delivery would cost if I wasn't buying the entire shop). Now, to beat Amazon, they need to club together and offer something really worthwhile.
I don't think we'll see another marketplace with shipping as cheap and fast as Amazon's. Their logistics operation is a real behemoth.
That said, I think there is room for a marketplace that does nothing more than aggregate retailers, and doesn't handle shipping, leaving it to individual retailers. Very often, I end up buying something on Amazon simply because I don't have time to visit each retailer's web front, figure out how to search it, whether it carries the product, whether it ships to me, how much it costs, etc etc, and do it for dozens of websites.
A site that does indexing, searching and transactions, but not logistics, would go a long way towards letting me move away from Amazon. It would be both more expensive and slower, but at least it would be a usable alternative, for those of us who object to Amazon on ethical grounds.
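As a minimal sketch of the idea, assuming each participating retailer exposed a simple product-search endpoint (the retailer names, URLs, and JSON fields below are all hypothetical), the aggregator could be little more than a fan-out search that surfaces delivery-inclusive prices up front:

```python
import requests

# Hypothetical retailer search endpoints; a real aggregator would need
# each retailer to publish (and keep stable) an API like this.
RETAILERS = {
    "ExampleShop": "https://api.exampleshop.example/search",
    "OtherStore": "https://api.otherstore.example/search",
}

def search_all(query: str) -> list[dict]:
    """Query every retailer and merge the results, cheapest first."""
    results = []
    for name, url in RETAILERS.items():
        try:
            resp = requests.get(url, params={"q": query}, timeout=5)
            resp.raise_for_status()
            for item in resp.json().get("items", []):
                results.append({
                    "retailer": name,
                    "title": item["title"],
                    # Delivery-inclusive total, shown up front -- the one
                    # thing the hidden-charges complaint above asks for.
                    "total_price": item["price"] + item["delivery"],
                })
        except requests.RequestException:
            continue  # a retailer being down shouldn't sink the search
    return sorted(results, key=lambda r: r["total_price"])

print(search_all("usb-c cable"))
```

The hard part isn't the code, of course; it's getting dozens of retailers to agree on a common API in the first place.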
The same "UK sellers" that operate out of Portsmouth or Southampton, make a big noise about fast delivery, yet anything bought from them takes about a week to arrive?
The same "UK sellers" who repsond to complaints and questions with terrible English, and always at 4 in the morning? (Sadly I accept that terrible English alone covers a lot of people genuinely in the UK).
You have reviews. And it tells you where they are shipping from, so you can avoid Pompey or Soton sellers.
But I've never had any problems. Some, I admit, look suspiciously like warehouses, but they've shipped promptly and don't arrive via obvious international mail.
Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.
Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras, alert those at risk, notify police, lock down buildings, and perform other security tasks.
In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.
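Kogniz hasn't published how its model works, but the detect-and-alert loop described above is easy to sketch. Here's a rough outline in Python, with `detect_objects` as a hypothetical stand-in for whatever computer-vision model the platform actually runs:

```python
import time
import cv2  # OpenCV, used here only to pull frames from a camera

def detect_objects(frame) -> list[tuple[str, float]]:
    """Hypothetical stand-in for a trained detector. A real system would
    run a computer-vision model here and return (label, confidence)
    pairs for each object found in the frame."""
    return []  # placeholder: no model wired in

def alert(message: str) -> None:
    # In a real deployment this would notify those at risk, call the
    # police, trigger building lockdown, and so on.
    print(f"ALERT: {message}")

def monitor(camera_index: int = 0, threshold: float = 0.8) -> None:
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for label, confidence in detect_objects(frame):
            if label == "gun" and confidence >= threshold:
                alert(f"possible firearm, confidence {confidence:.0%}")
        time.sleep(0.1)  # throttle to roughly 10 frames per second
    cap.release()

monitor()
```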
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.
The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.
This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.
In brief US hardware startup Cerebras claims to have trained the largest AI model ever run on a single device, powered by its Wafer Scale Engine 2, the world's largest chip, which is roughly the size of a dinner plate.
"Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."
The CS-2 packs a whopping 850,000 cores and 40GB of on-chip memory capable of reaching 20 PB/sec of memory bandwidth. The specs of other AI accelerators and GPUs pale in comparison, which is why machine-learning engineers otherwise have to spread huge AI models with billions of parameters across many servers.
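That 20-billion-parameter ceiling lines up neatly with the memory figure: assuming 16-bit weights (our assumption; Cerebras doesn't spell it out here), the parameters alone come to exactly 40GB:

```python
params = 20e9        # 20 billion parameters
bytes_per_param = 2  # assuming fp16/bf16 weights
print(f"{params * bytes_per_param / 1e9:.0f} GB")  # -> 40 GB
```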
In brief No, AI chatbots are not sentient.
As soon as the story of a Google engineer who blew the whistle on what he claimed was a sentient language model went viral, multiple publications stepped in to say he was wrong.
The debate over whether the company's LaMDA chatbot is conscious, or has a soul, isn't a very good one, simply because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.
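As a toy illustration of that "which words appear next to each other" idea, here's a bigram model in a few lines of Python. It's vastly cruder than LaMDA's Transformer, but the underlying statistical principle, predicting likely next words from co-occurrence counts, is the same:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

# P(next word | "the"): relative frequency of what followed "the".
counts = following["the"]
total = sum(counts.values())
for word, n in counts.most_common():
    print(f"P({word!r} | 'the') = {n / total:.2f}")
# -> P('cat' | 'the') = 0.50, P('mat' | 'the') = 0.25, P('fish' | 'the') = 0.25
```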
In the latest episode of Black Mirror, a vast megacorp sells AI software that learns to mimic the voice of a deceased woman whose husband sits weeping over a smart speaker, listening to her dulcet tones.
Only joking – it's Amazon, and this is real life. The experimental feature of the company's virtual assistant, Alexa, was announced at an Amazon conference in Las Vegas on Wednesday.
Rohit Prasad, head scientist for Alexa AI, described the tech as a means to build trust between human and machine, enabling Alexa to "make the memories last" when "so many of us have lost someone we love" during the pandemic.
Opinion The Turing test is about us, not the bots, and it has failed.
Fans of the slow-burn mainstream media U-turn had a treat last week.
On Saturday, the news broke that Blake Lemoine, a Google engineer charged with monitoring a chatbot called LaMDA for nastiness, had been put on paid leave for revealing confidential information.
In brief Facebook and Instagram's parent biz, Meta, was hit with not one, not two, but eight different lawsuits accusing its social media algorithm of causing real harm to young users across the US.
The complaints filed over the last week claim Meta's social media platforms have been designed to be dangerously addictive, driving children and teenagers to view content that increases the risk of eating disorders, suicide, depression, and sleep disorders.
"Social media use among young people should be viewed as a major contributor to the mental health crisis we face in the country," said Andy Birchfield, an attorney representing the Beasley Allen Law Firm, leading the cases, in a statement.
Google has placed one of its software engineers on paid administrative leave for violating the company's confidentiality policies.
Since 2021, Blake Lemoine, 41, had been tasked with talking to LaMDA, or Language Model for Dialogue Applications, as part of his job on Google's Responsible AI team, checking whether the bot used discriminatory language or hate speech.
LaMDA is "built by fine-tuning a family of Transformer-based neural language models specialized for dialog, with up to 137 billion model parameters, and teaching the models to leverage external knowledge sources," according to Google.
GPUs are a powerful tool for machine-learning workloads, though they’re not necessarily the right tool for every AI job, according to Michael Bronstein, Twitter’s head of graph learning research.
His team recently showed Graphcore's AI hardware offered an "order of magnitude speedup when comparing a single IPU processor to an Nvidia A100 GPU" in temporal graph network (TGN) models.
“The choice of hardware for implementing Graph ML models is a crucial, yet often overlooked problem,” reads a joint article penned by Bronstein with Emanuele Rossi, an ML researcher at Twitter, and Daniel Justus, a researcher at Graphcore.
Analysis After re-establishing itself in the datacenter over the past few years, AMD is now hoping to become a big player in the AI compute space with an expanded portfolio of chips that cover everything from the edge to the cloud.
But as executives laid out during AMD's Financial Analyst Day 2022 event last week, the resurgent chip designer believes it has the right silicon and software coming into place to pursue the wider AI space.