Turn it off, or don't enable it from the start.
It's not that hard. Why are people so hell-bent on being just... stupid and lazy?
Press freedom advocates are urging Apple to ditch an "immature" generative AI system after it incorrectly summarized a BBC news notification as saying that suspected UnitedHealthcare CEO shooter Luigi Mangione had killed himself. Reporters Without Borders (RSF) said this week that Apple's AI kerfuffle, which …
That's fair for Apple's interest in sending random news headlines to iPhones that never asked for it. Those were often irrelevant, but at least they were true. Users can turn those off when they realize they don't need them.
Making up headlines and sending those is much worse, and unhappiness with Apple is completely justified, especially for the reporters given credit for the false statements. Consider your reaction if you were working in an office and the person next to you frequently shouted inaccurate statements at you about things you needed to check. Yes, you could and would eventually ignore them, but it wouldn't be your fault if you wasted your time checking out one of those inaccurate statements, the problems caused by any reaction would be their fault, and someone in management might well tell them to stop doing that. We can't make Apple stop doing that, but we have lots of reasons to want them to. Asking users to turn the thing off is not good enough.
"Their view is that the AI koolaid is perfect."
AI-generated content is all about cheap, not at all about content. It's so cheap that the AI koolaid doesn't need to be perfect or even potable, just flowing. And TBH they may get paid more on the incorrect stuff as more folks do the "that can't be right" click-through on the obviously wrong ones. (At least until everyone realizes they can indeed be wrong and then the whole news headline service loses share. At which point they reset and rebrand.)
Why are people so hell-bent on being just... stupid and lazy?
Evolution. Thinking requires energy. We will instinctively take whatever shortcut we can to form a conclusion. And once we've reached a conclusion we are very reluctant to change it.
It's not a bad way to solve the problems that the universe throws at us but it's more suited to a simpler life. In today's complex societies we need something better.
The losers are the ones who watch their shitty videos all day instead of having a life
Loser influencer, influencer of losers, is how I read it.
How would you like to lose your money today? Buy my shitcoin! Too new for you? How about an old-fashioned pump-and-dump! Got no money left? Buy this overpriced energy drink it won't help you to get a job! .. No money at all?? Ok just watch some more ads then
It isn't up to users to turn it off.
Almost all of this kind of so-called AI needs scrapping.
It's dishonestly promoted.
It's producing garbage when not plagiarising.
The environmental cost is too high.
Make it illegal, fine companies 10% of annual turnover if they deploy it,
Nuke it from orbit!!
Oh no, the AI method of neural networks and their current implementation is fine. The problem arises from applying them to things where they don't work and ignoring that they don't work.
A huge amount of AI is used in materials research, component optimization and so on, saving time and money there - and actually saving energy too in the end. But that doesn't make it to the news, and is not abusable for marketing "Yay Notepad Need AI Too Yay!"
They demonstrably don't work in some situations. In that case how do you determine the boundary between those areas where they do work and those where they don't? If you have a hundred or a thousand instances of a system working correctly, how can you be certain the next one - or the next hundred or thousand - will still be OK?
Theoretically, that would be a benefit. Theoretically, it should be easier to fix an error in something than to make it from scratch. In my experience, that's not how it works. Correcting a bad document into a good one often takes longer than writing a good document in the first place, not even counting any time spent to make the bad document. The theory is only correct when I'm correcting a pretty good document by fixing small grammatical or factual errors.
When an LLM is liable to making something up and resting large parts of the document on that flimsy foundation, correcting it usually means removing everything but a mostly usable introduction paragraph and trying to do it another time. That doesn't result in less effort expended, because you're multiplying the time it takes to check every fact alleged by the document by the number of times you generate something new, plus any time you need to make manual edits. The further problem is that, if someone decides to skip one of those stages, the document still looks like it is complete, but now it's the kind of thing that results in summary judgements against you from judges annoyed that you're making them fact check your submissions.
"When an LLM is liable to making something up and resting large parts of the document on that flimsy foundation, correcting it usually means removing everything but a mostly usable introduction paragraph and trying to do it another time. "
This is true. A general-purpose LLM will do that a fair bit and no right-minded soul would trust it. But a more focused RAG-driven one, used reasonably with final proofing, is powerful. The more you do on it the better. The paralegal had so much training data from their archive that their Llama tuned itself in a few days.
"The theory is only correct when I'm correcting a pretty good document by fixing ... factual errors."
It goes beyond that, although those account for a lot! It finds contract law stuff that you would never find yourself. And it works a dream in property purchase for those reams of paperwork. An archive-tuned LLM with RAG, that is.
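For anyone wondering what "RAG over an archive" actually boils down to, here's a minimal sketch. To be clear, this is not the setup described above: the archive snippets are invented, retrieval is plain TF-IDF rather than anything clever, and ask_local_llama is a hypothetical stand-in for whatever local Llama wrapper you actually run.

# Minimal retrieval-augmented generation sketch (illustrative only).
# Retrieval is simple TF-IDF similarity over an archive of past documents;
# the model call is a placeholder, since every local Llama setup differs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive = [
    "Clause 14.2: the supplier shall indemnify the buyer against ...",
    "Completion notice template for a residential property purchase ...",
    "Standard enquiries before contract: responses to form TA6 ...",
]

def retrieve(query, docs, k=2):
    """Return the k archive passages most similar to the query."""
    vectorizer = TfidfVectorizer().fit(docs + [query])
    doc_vecs = vectorizer.transform(docs)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def ask_local_llama(prompt):
    """Hypothetical stand-in for a local Llama interface."""
    raise NotImplementedError("wire this up to your own model")

question = "Which indemnity clauses apply to a delayed completion?"
context = "\n".join(retrieve(question, archive))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = ask_local_llama(prompt)   # and a human still proofs the answer

The point being that the model only ever answers over passages pulled from your own archive, which is why the final human proofing pass mentioned above is still doing the heavy lifting.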
There aren't secret or copyrighted laws in these cases. There can be exceptions, for example where a law mandates a standard and ISO won't give you the standard without payment, but most cases don't involve that kind of thing so we can ignore them for now. The problem is that, even when you scrape all the laws and feed them into an LLM, they can easily mistake things the way they mistake lots of other things. A law means you are allowed to conduct a certain action, and you are sued for conducting that action; sounds like a match. Except the LLM has not noticed that the law allows you to conduct that action if you are a law enforcement officer on active duty following a disruption to communication caused by a serious natural disaster or terrorist attack, but that only appeared once in the training data so the LLM didn't recognize that you're none of those things.
Best case: a lawyer, paralegal, or other legal person reads the produced document. They weren't aware of the law, so they look it up. In the summary, they realize it doesn't apply to you. They throw out the document and start again. Maybe the LLM will produce something correct the next time. Result: the time to generate the original document and the time to review it for errors is lost.
Average case: A lawyer hands the document to a paralegal and says "check this". The paralegal reads the document and finds the reference to a law. They spend a while reading the text of that law to check whether, even though the summary seems to limit it, the LLM which is supposed to be the next great thing may have found a cool loophole which will get this client off. They spend several hours checking this only to realize that it doesn't help. They report their problems to the lawyer. The lawyer sends the report to the prompt generator. The prompt generator makes a new document and the process repeats. Result: several hours added to your legal bill.
Worst case for now: The lawyer hands the document to a paralegal and says "check this". The paralegal sees that a law is mentioned and sees the quote that the action is allowed. They check that the law exists, and it does. They check whether the quote is in there, and it is. They send the document back approved. Result: "Guilty. We are also considering contempt of court charges for counsel for the defendant."
Law firms trust AI? After some recent scandals involving AI-"checked" case law, complete with plaintiffs and outcomes, I'm surprised anyone would let an AI anywhere near a law firm
https://blog.burges-salmon.com/post/102ivgu/a-cautionary-tale-of-using-ai-in-law-uk-case-finds-that-ai-generated-fake-case-l
https://www.cnbc.com/2023/06/22/judge-sanctions-lawyers-whose-ai-written-filing-contained-fake-citations.html
https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/ <- same as the CNBC report
Those cases are rightly held up as a warning. They are pretty dumb examples though.
Here is one: A charity submits legal documents against a defendant. A small technical error, very easy to make and to miss even when checked by a few people, gets picked up by Llama. The charity have time before court to resubmit their action rather than lose 6-8 weeks for a new action to be raised after the error gets pointed out by the defendant's solicitor at the hearing. That happened 3 days ago. The charity would have lost thousands more pounds than they already have.
That action alone easily paid for the £700 it costs to set up a local Llama. You can't argue with coin.
If you are talking money, then check everything 10 times anyway.
Well-heeled chambers use them. Llama. They have so much archive for training the AI it is very reliable after checks and cross-checks, etc.
That makes the assumption that the lawyers for the charity concerned had checked the documents fully and properly and had missed the error. Once you start to rely on Llama to pick up small errors, it is but a short step to using it to find the larger errors too, and to not actually have the lawyers themselves check the documents properly to start with.
What then happens when Llama makes an error, or misses something (which, judging from the number of mistakes that these systems do seem to make in other scenarios, is inevitably going to happen at some point)?
I would be disinclined to use any legal practice that starts to rely on these systems in their current state.
Oh, and if I did, don't even think of billing me several hundred pounds an hour for machine generated output ;)
On the other hand, as these sorts of system develop, become more accurate, more reliable, less costly to run and personal computer capacity and ability improves, it will perhaps become possible to spend a relatively small sum on a PC level version to do the same job, and cut the expensive lawyers out of the loop altogether ;)
There's been software to check cites for decades that's saved mountains of costs over those years. Automation of the rote tasks of law has always been a high priority because of the expense and importance of the work, so turning the next generation of tools on the problem is pretty much automatic. The expansion of AI into the next layer of abstraction is arriving now, is immature, still requires an extra set of eyes, but is improving constantly. Well-trained and well-maintained LLMs can be hugely important just as simplistic hack jobs are hugely problematic. Blanket rejection of use of these tools is just as ignorant as blanket reliance on them.
Let's be honest: even without leveraging AI in any form, we could easily fire at least half of all office workers and it wouldn't make a difference.
Most people at work don't actually help or add value in any form; they are simply there because the system is broken and continues to allow fakes to exist.
I would presume that part of the prompt would be "for submission to the clerk of the xxxxx court."
I suggest that all LLM-written complaints/charges should be handled in such a way that if the case is dismissed because of grievous errors, it has to be dismissed with prejudice.
Post that shit AC as much as you like, but you are basically saying "I am ashamed of this opinion".
That shame serves a purpose. It's to make you think about what you are saying and why you are saying it.
Be brave and post pro-AI stuff with your handle so we can see all your posts together and find out whether you genuinely believe this bollocks (in which case you haven't been following how well AI deals with legal processes) or are just trolling us.
Executives responsible ?
Is this a joke ?
When has an executive ever been responsible? The executives at B signed off on changes that killed hundreds of people and they got paid tens of millions ...
Oh yes, another example of corporate mindspeak that is completely the opposite of what the word actually means.
Responsible pairs well with Family and Culture in corporate bullshit lingo.
The real dishonesty comes from not knowing the models being used behind these services. Transparency is required...
There are lots of services out there claiming to provide access to 405b parameter models (and larger) but when you test them out, they provide results that are a bit sus.
It just doesn't make sense for any business to give customers cheap access to a massive model, given the amount of RAM required.
I don't agree that AI tech is entirely crap, because what I've seen running on dedicated hardware using absolutely huge models is astounding and incredibly accurate with very few absurd hallucinations...however, the smaller you get with models, the wackier it gets...the intersection at the moment for "cheap" models is around 8b-12b parameters...those regularly hallucinate and spit out crap...I believe these are what's widely used and we need more transparency on it because I think a lot of people are being conned.
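To put rough numbers on the "amount of RAM" point, some back-of-envelope arithmetic for the weights alone (my own rule-of-thumb figures: 2 bytes per parameter for fp16, about half a byte for 4-bit quantisation, ignoring activations and KV cache):

# Rough memory needed just to hold model weights (excludes activations, KV cache).
# Bytes-per-parameter values are common rules of thumb, not vendor numbers.
def weight_gib(params_billion, bytes_per_param):
    return params_billion * 1e9 * bytes_per_param / 2**30

for params in (8, 12, 70, 405):
    fp16 = weight_gib(params, 2.0)   # 16-bit weights
    q4 = weight_gib(params, 0.5)     # ~4-bit quantised weights
    print(f"{params:>4}B params: ~{fp16:5.0f} GiB fp16, ~{q4:5.0f} GiB 4-bit")

An 8b-12b model squeezes onto consumer hardware even unquantised; a 405b model wants several hundred GiB at fp16 and still getting on for 200 GiB even aggressively quantised, which is why the suspicion about what's actually being served seems fair.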
It also hands ammo to the old greybeard naysayers that have been against everything new since Windows 95.
LLMs keep demonstrating that summarising is their weak spot. They can shorten but, because they are inherently stupid and they have no idea what they are doing, these "AI" implementations are unable to distinguish the important from the not important. And that's key for summarising.
When ChatGPT summarises, it actually does nothing of the kind.
AI worse than humans in every way at summarising information, government trial finds
And yet this is one of the key "applications" AI is being sold on.
My own view is that if you ask someone (you employ) to summarise something, you are expecting them to read and understand it. Then write a summary.
If they are just using AI to do it, then why are they being paid as they are clearly neither reading nor understanding the material they've been asked to summarise - in other words not doing the job they've been asked to do.
The suggestion by Google's "AI Overviews" to eat one rock a day for health reasons was actually lifted directly from an article in The Onion.
If Apple are allowed to push untrue content from their lying AI without any consequences, then everyone should be allowed to push untrue content from their lying BS generators without consequences
if, in this post truth world, lies are free speech, and we have to put up with lies as alternative facts, we should embrace that, produce our own, take it to the extreme. It costs almost nothing compared to building an actual AI, it can be a few lines of python code, can run on a raspberry pi.
Taking the moral high ground won't save us from this sinking ship of shit so we might as well help it hit rock bottom as soon as possible.
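Since the post above says it takes only a few lines of Python on a Raspberry Pi, here is roughly what that looks like; everything in it is invented filler for illustration, with no model, no data and no understanding behind it:

# Toy "alternative facts" generator, to show how little effort fabrication takes.
# All word lists are made up purely for illustration.
import random

subjects = ["Tech CEO", "Anonymous insider", "Government report", "Local man"]
verbs = ["confirms", "denies", "leaks", "predicts"]
objects = ["secret AI project", "imminent phone recall", "record profits", "alien contact"]

def fake_headline():
    return f"{random.choice(subjects)} {random.choice(verbs)} {random.choice(objects)}"

for _ in range(5):
    print(fake_headline())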
Here's some more fake headlines :
"Tim Cook dead at age 56"
"Apple to give away free Iphones for christmas at all Apple stores !"
"Personal details of millions of Iphone users leaked on the dark web"
"Tim Cook stepping down as CEO of Apple"
"Tim Cook to legally change his name to Tim Apple"
Apple have forgotten there's no Section 230 protecting this. This is all on them, it's Apple which is publishing incorrect and inaccurate summaries of articles by other organisations which could bring them into disrepute. I wonder if the BBC could reach for the UK's famously restrictive libel laws.
"if, in this post truth world, lies are free speech, and we have to put up with lies as alternative facts, we should embrace that, produce our own, take it to the extreme. It costs almost nothing compared to building an actual AI, it can be a few lines of python code, can run on a raspberry pi."
Oh yeah, it's all true. I was just watching a YouTube video the other day saying that Trump and his MAGAs are all repressed gays and rednecks have guns on the rack in their huge pickups because they all have tiny dicks. It MUST be true!!!
Drumpf and company are too indiscriminate and tasteless to be gay. No self respecting gay individual would be caught dead with Musk or Drumpf's hair!
No, for that crowd, it's "any orifice, any where, any time", I'm afraid. They have neither shame nor standards
I stopped by a well known burger flinger on the way home from work because it's been a bit of a shit day and I just want to stare at the wall rather than get up off my arse and cook stuff.
Arguably the cheese is some sort of yellow glue, and to be honest I'm thinking there might be more nutritional value in rocks. At least the chips were hot for a change. Hot chips are nice; cold chips (the usual kind) are the sort of grim fare that would be a torture in hell (you can have all the chips you want in the afterlife, but they're the cold congealed manky ones that should have been thrown out half an hour ago...).
I prefer to imagine and do work within everything treating it as a black hole ..... sucking in anything and spewing it out elsewhere all jumbled up as something quite different engaging and entertaining or disturbing and terrifying dependent upon one’s future suspected worth ...... although who/what makes that decision is surely still a riddle, wrapped in a mystery, inside an enigma; without a currently known or readily available master key.
Why should they comment? Other than to send a polite "Thank you" to the BBC for being so complimentary about Apple's services.
At least, that is what the Apple PR flacks believe, after they read the Apple Intelligence summary of the Beeb's communication (hey, those are busy and important flacks, they don't have time to read it all themselves; those lunches won't eat themselves).
It's not an accident. They didn't accidentally roll out a headline summary bot. They didn't accidentally fail to verify that an LLM was the right tool for the job. They didn't accidentally fail to check the bullshit output before publishing it.
It doesn't pertain to this article directly, but I have found AI to just interfere with my work flow. I suddenly couldn't find emails, and it kept interrupting while I was composing messages.
The thing with the current LLMs is that they give very confident answers, which may or may not be right. They will never respond "I don't know" or "I think this is right but you may want to fact check it". They don't show you their workings.
Thus, I think they are pretty dangerous for anything which depends on factual accuracy.
This isn't an AI problem. Editors in news media have been creating "click-bait" headlines for shock value since long before the days of the Internet. How often have you picked up a newspaper or read an article from a mainstream news source where the headline contradicted the article that followed it? The only thing here is that computers are doing it faster, putting hard-working editors out of work.
Bad headlines are nothing new, but these are probably still worse. Editors may pick headlines intended to mislead, that make bad summaries, or ones designed to make the article sound more interesting than it is, but they tend not to write headlines that diametrically oppose the article unless they didn't read it or confused one article with another. This bot did that all on its own with no pressure causing it to do so. That's likely to happen a lot more frequently than an editor making such a massive mistake.
I've always wondered why they don't have the article authors write the headlines. At least in non-clickbaity examples, that should get an accurate summary in there. I suppose we've now solved that problem. The article and headline will be written by the same thing: an AI bot that made both of them up.
Click bait headline or not, the AI was supposed to summarize the actual article, to save you having to read it all yourself. If it made up an extended version of the click bait, then it failed at its supposed task, which was to summarize the actual article content. It's the opposite of a "useful" tool at that point, as now you have to actually read the article to double check what the "AI" told you about it.
It's a basic problem with current AI: it creates a "word salad" by association of words and phrases. It creates plausible sounding sentences, but has zero actual understanding of the real world.
The glue-and-pizza suggestion possibly came about when it came across text that says, "Using mozzarella helps glue the other ingredients to the base". Now it associates the word "glue" with "pizza" when it mixes up its word salad. Not very intelligent.
The only thing here is that computers are doing it faster, putting hard-working editors out of work.
This is the problem with the MSM, and the challenges they face from alternative media. 'AI' editors can grammar and spell check articles, but can't reliably 'fact check' them. 'AI' journalists can ingest stories from wire services and massage them into stories, but can't 'fact check' them either. And 'AI' can't do one of the important things journalists should be doing, ie investigative journalism. But that takes time and money, so human journalists struggle to do that anyway. Governments are probably just fine with this, and with journalists not being able to hold their feet to the fire. And if they do, their stories can just be dismissed as 'fake news' if they contradict the official misinformation.
Apple AI cocked up the summary of a BBC news item? Given the appalling sentence [mis]construction and grammar in some recent BBC articles, I'm not sure the BBC isn't using AI-generated content in the first place. It's either that, or many of their "journalists" left school with poor English grammar comprehension.
AI is perfectly capable of producing absolute nonsense with perfect sentence construction and beautiful grammar. .... gnasher729
That surely makes AI positively human-like, gnasher729 ....... which is distressing, is it not?
Quite obviously would that AI be a work in progress requiring considerably more work to render improvements to performance rather than have developments copying and having to cope with failures and exploitable vulnerabilities inherent in humans/sub-optimal subjects.
As an IT professional I distrust any new technology that has not had extensive testing backed by provenance. AI is right up there with the worst technology possible.
I had to give a talk on a few astronomy issues to my local amateur group. Running out of time, I asked ChatGPT to summarise the research paper and give me a 2 minute summary that I could read out. And it did. It was very readable. And totally wrong. It was only when I scanned the text that I realised that the original paper was based on Hubble imagery, whereas the ChatGPT version claimed that ALMA, the radio telescope in Chile, was responsible. This isn't a simple mistake - it is out and out fabrication. Nowhere in the paper did it reference ALMA at all. I actually called one of the UK researchers (as a fellow of the Royal Astronomical Society I can do this) and he went from disbelief, through astonishment, to anger - his work of several years was being reported incorrectly.
After that I asked ChatGPT how many books or papers I was responsible for. It gave me a nice list of books. None of them anything to do with me, but it had added my name to the accredited authors. It would have been nice to get the publication payments!
My real worry is when I see AI being used for medical diagnostics. Can we trust it to make the correct diagnosis? Well, we know that we can't. A skilled professional has to validate each positive diagnosis and check the findings. But what about the false negatives - when AI tells the clinician that there is no sign of cancer, and that message is just repeated verbatim to the patient? If the patient later dies of cancer, do their heirs have a case to make against the clinician? The AI? The people who built the AI?