
The suggestion by Google's "AI Overviews" to eat one rock a day for health reasons was actually lifted directly from an article in The Onion.
Press freedom advocates are urging Apple to ditch an "immature" generative AI system after it incorrectly summarized a BBC news notification, falsely reporting that suspected UnitedHealthcare CEO shooter Luigi Mangione had killed himself. Reporters Without Borders (RSF) said this week that Apple's AI kerfuffle, which …
If Apple are allowed to push untrue content from their lying AI without any consequences, then everyone should be allowed to push untrue content from their lying BS generators without consequences.
If, in this post-truth world, lies are free speech, and we have to put up with lies as alternative facts, we should embrace that, produce our own, take it to the extreme. It costs almost nothing compared to building an actual AI; it can be a few lines of Python code and can run on a Raspberry Pi.
Taking the moral high ground won't save us from this sinking ship of shit so we might as well help it hit rock bottom as soon as possible.
Here's some more fake headlines :
"Tim Cook dead at age 56"
"Apple to give away free iPhones for Christmas at all Apple stores!"
"Personal details of millions of iPhone users leaked on the dark web"
"Tim Cook stepping down as CEO of Apple"
"Tim Cook to legally change his name to Tim Apple"
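The claim above that a "BS generator" needs only a few lines of Python is easy to sketch. The following is a toy illustration of that idea, not any real system; all the names and phrases are placeholders in the same spirit as the fake headlines listed:

```python
import random

# A minimal "BS generator", as the comment suggests: it just
# recombines stock tabloid parts at random. Every phrase here is a
# made-up placeholder, not a real claim about anyone.
subjects = ["Tim Cook", "Apple", "A major tech CEO"]
verbs = ["announces", "denies", "leaks"]
objects = ["free iPhones for Christmas",
           "a shock resignation",
           "millions of user records on the dark web"]

def fake_headline(rng=random):
    # Pick one part from each column and glue them together.
    return f"{rng.choice(subjects)} {rng.choice(verbs)} {rng.choice(objects)}"

print(fake_headline())
```

Small enough to run on a Raspberry Pi, as advertised, which is rather the commenter's point about how cheap fabrication is.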
Apple have forgotten there's no Section 230 protecting this. This is all on them, it's Apple which is publishing incorrect and inaccurate summaries of articles by other organisations which could bring them into disrepute. I wonder if the BBC could reach for the UK's famously restrictive libel laws.
"If, in this post-truth world, lies are free speech, and we have to put up with lies as alternative facts, we should embrace that, produce our own, take it to the extreme. It costs almost nothing compared to building an actual AI; it can be a few lines of Python code and can run on a Raspberry Pi."
Oh yeah, it's all true. I was just watching a YouTube video the other day claiming that Trump and his MAGAs are all repressed gays, and that rednecks have guns on the rack in their huge pickups because they all have tiny dicks. It MUST be true!!!
Drumpf and company are too indiscriminate and tasteless to be gay. No self-respecting gay individual would be caught dead with Musk's or Drumpf's hair!
No, for that crowd, it's "any orifice, any where, any time", I'm afraid. They have neither shame nor standards
I stopped by a well known burger flinger on the way home from work because it's been a bit of a shit day and I just want to stare at the wall rather than get up off my arse and cook stuff.
Arguably the cheese is some sort of yellow glue, and to be honest I'm thinking there might be more nutritional value in rocks. At least the chips were hot for a change. Hot chips are nice; cold chips (the usual kind) are the sort of grimness that would be a torture in hell (you can have all the chips you want in the afterlife, but they're the cold, congealed, manky ones that should have been thrown out half an hour ago...).
I prefer to imagine and do work within everything treating it as a black hole ..... sucking in anything and spewing it out elsewhere all jumbled up as something quite different engaging and entertaining or disturbing and terrifying dependent upon one’s future suspected worth ...... although who/what makes that decision is surely still a riddle, wrapped in a mystery, inside an enigma; without a currently known or readily available master key.
Why should they comment? Other than to send a polite "Thank you" to the BBC for being so complimentary about Apple's services.
At least, that is what the Apple PR flacks believe, after they read the Apple Intelligence summary of the Beeb's communication (hey, those are busy and important flacks, they don't have time to read it all themselves; those lunches won't eat themselves).
It's not an accident. They didn't accidentally roll out a headline summary bot. They didn't accidentally fail to verify that an LLM was the right tool for the job. They didn't accidentally fail to check the bullshit output before publishing it.
It doesn't pertain to this article directly, but I have found AI to just interfere with my workflow. I suddenly couldn't find emails, and it kept interrupting while I was composing messages.
The thing with the current LLMs is that they give very confident answers, which may or may not be right. They will never respond "I don't know" or "I think this is right but you may want to fact check it". They don't show you their workings.
Thus, I think they are pretty dangerous for anything which depends on factual accuracy.
This isn't an AI problem. Editors in news media have been creating "click-bait" headlines for shock value since long before the days of the Internet. How often have you picked up a newspaper or read an article from a mainstream news source where the headline contradicted the article that followed it? The only thing here is that computers are doing it faster, putting hard-working editors out of work.
Bad headlines are nothing new, but these are probably still worse. Editors may pick headlines intended to mislead, that make bad summaries, or ones designed to make the article sound more interesting than it is, but they tend not to write headlines that diametrically oppose the article unless they didn't read it or confused one article with another. This bot did that all on its own with no pressure causing it to do so. That's likely to happen a lot more frequently than an editor making such a massive mistake.
I've always wondered why they don't have the article authors write the headlines. At least in non-clickbaity examples, that should get an accurate summary in there. I suppose we've now solved that problem. The article and headline will be written by the same thing: an AI bot that made both of them up.
Click-bait headline or not, the AI was supposed to summarize the actual article, to save you having to read it all yourself. If it made up an extended version of the click-bait instead of summarizing the actual article content, then it failed at its supposed task. It's sort of the opposite of a "useful" tool at that point, as now you have to actually read the article to double-check what the "AI" told you about it.
It's a basic problem with current AI: it creates a "word salad" by association of words and phrases. It produces plausible-sounding sentences, but has zero actual understanding of the actual Real World.
The glue-on-pizza advice possibly came about when it came across text that says, "Using mozzarella helps glue the other ingredients to the base". Now it associates the word "glue" with "pizza" when it mixes up its word salad. Not very intelligent.
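The "association of words and phrases" described above can be shown with a toy word-chain, a drastic simplification of how LLMs actually work, but it makes the point: trained on the hypothetical mozzarella sentence, it happily chains "glue" onto whatever followed it, with no understanding of either cookery or adhesives.

```python
import random
from collections import defaultdict

# Toy word-association model: record which word follows which, then
# generate text by chaining those associations. Plausible-looking
# output, zero understanding. The training text is the hypothetical
# sentence from the comment above, not a real corpus.
text = "using mozzarella helps glue the other ingredients to the pizza base"
words = text.split()

follows = defaultdict(list)
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def word_salad(start, length, rng=random):
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(word_salad("glue", 5))  # chains whatever happened to follow "glue"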
The only thing here is that computers are doing it faster, putting hard-working editors out of work.
This is the problem with the MSM, and the challenge they face from alternative media. 'AI' editors can grammar- and spell-check articles, but can't reliably 'fact check' them. 'AI' journalists can ingest stories from wire services and massage them into stories, but can't 'fact check' them either. And 'AI' can't do one of the important things journalists should be doing, i.e. investigative journalism. But that takes time and money, so human journalists struggle to do that anyway. Governments are probably just fine with this, and with journalists not being able to hold their feet to the fire. And if they do, their stories can just be dismissed as 'fake news' if they contradict the official misinformation.