Turn it Off, or don't enable it from the start.
It's not that hard. Why are people so hell-bent on being just... stupid and lazy?
Press freedom advocates are urging Apple to ditch an "immature" generative AI system after it falsely summarized a BBC news notification, claiming that suspected UnitedHealthcare CEO shooter Luigi Mangione had killed himself. Reporters Without Borders (RSF) said this week that Apple's AI kerfuffle, which …
That's fair comment on Apple's habit of sending random news headlines to iPhones that never asked for them. Those were often irrelevant, but at least they were true. Users can turn them off once they realize they don't need them.
Making up headlines and sending those is much worse, and unhappiness with Apple is completely justified, especially for the reporters given credit for the false statements. Consider your reaction if you were working in an office and the person next to you frequently shouted inaccurate statements at you about things you needed to check. Yes, you could and would eventually ignore them, but it wouldn't be your fault if you wasted time checking one of those inaccurate statements; the problems caused by any reaction would be their fault, and someone in management might well tell them to stop. We can't make Apple stop doing that, but we have plenty of reasons to want them to. Asking users to turn the thing off is not good enough.
"Their view is that the AI koolaid is perfect."
AI generated content is all about cheap, not at all about content. It's so cheap that the AI koolaid doesn't need to be perfect or even potable, just flowing. And TBH they may get paid more on the incorrect stuff as more folks do the "that can't be right" click-through on the obviously wrong ones. (At least until everyone realizes they can indeed be wrong and then the whole news headline service loses share. At which point they reset and rebrand.)
Why are people so hell-bent on being just... stupid and lazy?
Evolution. Thinking requires energy. We will instinctively take whatever shortcut we can to form a conclusion. And once we've reached a conclusion we are very reluctant to change it.
It's not a bad way to solve the problems that the universe throws at us but it's more suited to a simpler life. In today's complex societies we need something better.
The losers are the ones who watch their shitty videos all day instead of having a life
Loser influencer, influencer of losers, is how I read it.
How would you like to lose your money today? Buy my shitcoin! Too new for you? How about an old-fashioned pump-and-dump! Got no money left? Buy this overpriced energy drink it won't help you to get a job! .. No money at all?? Ok just watch some more ads then
It isn't up to users to turn it off.
Almost all this kind of so-called AI needs to be scrapped.
It's dishonestly promoted.
It's producing garbage when not plagiarising.
The environmental cost is too high.
Make it illegal, fine companies 10% of annual turnover if they deploy it,
Nuke it from orbit!!
Oh no, the AI method of neural networks and their current implementation is fine. The problem arises from applying them to things where they don't work and ignoring that they don't work.
A huge amount of AI is used in materials research, component optimization and so on, saving time and money there - and actually saving energy too in the end. But that doesn't make it to the news, and it isn't abusable for marketing: "Yay Notepad Need AI Too Yay!"
They demonstrably don't work in some situations. In that case, how do you determine the boundary between the areas where they do work and those where they don't? If you have a hundred or a thousand instances of a system working correctly, how can you be certain the next one - or the next hundred or thousand - will still be OK?
Theoretically, that would be a benefit. Theoretically, it should be easier to fix an error in something than to make it from scratch. In my experience, that's not how it works. Correcting a bad document into a good one often takes longer than writing a good document in the first place, not even counting any time spent to make the bad document. The theory is only correct when I'm correcting a pretty good document by fixing small grammatical or factual errors.
When an LLM is liable to making something up and resting large parts of the document on that flimsy foundation, correcting it usually means removing everything but a mostly usable introduction paragraph and trying to do it another time. That doesn't result in less effort expended, because you're multiplying the time it takes to check every fact alleged by the document by the number of times you generate something new, plus any time you need to make manual edits. The further problem is that, if someone decides to skip one of those stages, the document still looks like it is complete, but now it's the kind of thing that results in summary judgements against you from judges annoyed that you're making them fact check your submissions.
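To put rough numbers on that (all figures here are made-up assumptions for illustration): every regenerated draft forces a fresh fact-checking pass over every alleged fact, so the checking time multiplies with each attempt while the "saved" writing time doesn't.

```python
# Toy model of the review burden described above. All numbers are
# hypothetical assumptions, chosen only to illustrate the multiplication.

def review_hours(regenerations, facts_per_doc, minutes_per_fact, edit_hours):
    """Each regenerated draft forces a fresh pass over every alleged fact."""
    checking = regenerations * facts_per_doc * minutes_per_fact / 60
    return checking + edit_hours

# Writing it yourself: one checking pass over facts you already verified,
# plus the writing time.
from_scratch = review_hours(1, 30, 2, 3)   # 1 + 3 = 4.0 hours

# Four LLM drafts before one survives checking: the fact-checking repeats
# each time, so the "time saver" ends up costing more.
llm_route = review_hours(4, 30, 2, 1)      # 4 + 1 = 5.0 hours
```

The exact numbers don't matter; the point is that checking cost scales with the number of regenerations, and skipping a checking pass is how fabricated citations reach a judge.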
"When an LLM is liable to making something up and resting large parts of the document on that flimsy foundation, correcting it usually means removing everything but a mostly usable introduction paragraph and trying to do it another time. "
This is true. A general-purpose LLM will do that a fair bit, and no right-minded soul would trust it. But a more focused, RAG-driven one used reasonably with final proofing is powerful. The more you do with it the better. The paralegal had so much training data from their archive that their Llama tuned itself in a few days.
"The theory is only correct when I'm correcting a pretty good document by fixing ... factual errors."
It goes beyond that, although those account for a lot! It finds contract law stuff that you would never find yourself. And it works a dream in property purchase for those reams of paperwork. An archive-tuned LLM with RAG, that is.
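For anyone wondering what "RAG-driven" actually means here: retrieve the most relevant passages from your own archive and put them in front of the model, so it answers from your documents rather than from whatever it half-remembers from training. The sketch below is purely illustrative - a real setup would use embedding search and a local model such as Llama, not this toy word-overlap scoring, and all the function names are hypothetical.

```python
# Minimal, illustrative sketch of retrieval-augmented generation (RAG)
# over a document archive. Toy scoring only; real systems use embeddings.

def chunk(text, size=40):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, passage):
    """Crude relevance score: how many query words appear in the passage."""
    q = set(query.lower().split())
    return sum(1 for w in passage.lower().split() if w in q)

def retrieve(query, archive, k=2):
    """Return the k most relevant chunks across all archived documents."""
    passages = [c for doc in archive for c in chunk(doc)]
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query, archive):
    """Prepend retrieved context so the model answers from the archive."""
    context = "\n---\n".join(retrieve(query, archive))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"
```

The grounding is the point: the model is steered toward text that actually exists in the firm's archive. It still hallucinates, which is why the "final proofing" step above is not optional.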
There aren't secret or copyrighted laws in these cases. There can be exceptions, for example where a law mandates a standard and ISO won't give you the standard without payment, but most cases don't involve that kind of thing, so we can ignore them for now. The problem is that, even when you scrape all the laws and feed them into an LLM, it can easily mistake things the way it mistakes lots of other things. A law says you are allowed to conduct a certain action; you are being sued for conducting that action; sounds like a match. Except the LLM has not noticed that the law allows you to conduct that action only if you are a law enforcement officer on active duty following a disruption to communication caused by a serious natural disaster or terrorist attack - and that qualification only appeared once in the training data, so the LLM didn't recognize that you're none of those things.
Best case: a lawyer, paralegal, or other legal person reads the produced document. They weren't aware of the law, so they look it up. In the summary, they realize it doesn't apply to you. They throw out the document and start again. Maybe the LLM will produce something correct the next time. Result: the time to generate the original document and the time to review it for errors is lost.
Average case: A lawyer hands the document to a paralegal and says "check this". The paralegal reads the document and finds the reference to a law. They spend a while reading the text of that law to confirm that, even though the summary seems to limit it, the LLM - which is supposed to be the next great thing - may have found a cool loophole which will get this client off. They spend several hours checking this, only to realize that it doesn't help. They report their findings to the lawyer. The lawyer sends the report to the prompt generator. The prompt generator makes a new document and the process repeats. Result: several hours added to your legal bill.
Worst case for now: The lawyer hands the document to a paralegal and says "check this". The paralegal sees that a law is mentioned and sees the quote that the action is allowed. They check that the law exists, and it does. They check whether the quote is in there, and it is. They send the document back approved. Result: "Guilty. We are also considering contempt of court charges for counsel for the defendant."
Law firms trust AI? After some recent scandals involving AI-"checked" case law, complete with fabricated plaintiffs and outcomes, I'm surprised anyone would let an AI anywhere near a law firm.
https://blog.burges-salmon.com/post/102ivgu/a-cautionary-tale-of-using-ai-in-law-uk-case-finds-that-ai-generated-fake-case-l
https://www.cnbc.com/2023/06/22/judge-sanctions-lawyers-whose-ai-written-filing-contained-fake-citations.html
https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/ <- same case as the CNBC report
Those cases are rightly held up as a warning. They are pretty dumb examples though.
Here is one: a charity submits legal documents against a defendant. There is a small technical error - the kind that is very easy to make and to miss even when a few people check - and Llama picks it up. The charity has time before court to resubmit their action, rather than losing 6-8 weeks for a new action to be raised after the error gets pointed out by the defendant's solicitor at the hearing. That happened 3 days ago. The charity would have lost thousands more pounds than they already have.
That action alone easily paid for the £700 it costs to set up a local Llama. You can't argue with coin.
If you are talking money, then check everything 10 times anyway.
Well-heeled chambers use them. Llama. They have so much archive for training the AI it is very reliable after checks and cross-checks, etc.
That assumes the lawyers for the charity concerned had checked the documents fully and properly and had still missed the error. Once you start to rely on Llama to pick up small errors, it is but a short step to using it to find the larger errors too, and not actually having the lawyers themselves check the documents properly to start with.
What happens, then, when Llama makes an error or misses something (which, judging from the number of mistakes these systems make in other scenarios, is inevitably going to happen at some point)?
I would be disinclined to use any legal practice that starts to rely on these systems in their current state.
Oh, and if I did, don't even think of billing me several hundred pounds an hour for machine generated output ;)
On the other hand, as these sorts of system develop, become more accurate, more reliable, less costly to run and personal computer capacity and ability improves, it will perhaps become possible to spend a relatively small sum on a PC level version to do the same job, and cut the expensive lawyers out of the loop altogether ;)
There's been software to check cites for decades that's saved mountains of costs over those years. Automation of the rote tasks of law has always been a high priority because of the expense and importance of the work, so turning the next generation of tools on the problem is pretty much automatic. The expansion of AI into the next layer of abstraction is arriving now, is immature, still requires an extra set of eyes, but is improving constantly. Well-trained and well-maintained LLMs can be hugely important just as simplistic hack jobs are hugely problematic. Blanket rejection of use of these tools is just as ignorant as blanket reliance on them.
Let's be honest: even without leveraging AI in any form, we could easily fire at least half of all office workers and it wouldn't make a difference.
Most people at work don't actually help or add value in any form; they are simply there because the system is broken and continues to allow fakes to exist.
I would presume that part of the prompt would be "for submission to the clerk of the xxxxx court."
I suggest that all LLM-written complaints/charges should be handled in such a way that if the case is dismissed because of grievous errors, it has to be dismissed with prejudice.
Post that shit AC as much as you like, but you are basically saying "I am ashamed of this opinion".
That shame serves a purpose. It's to make you think about what you are saying and why you are saying it.
Be brave and post pro-AI stuff with your handle so we can see all your posts together and find out whether you genuinely believe this bollocks (in which case you haven't been following how well AI deals with legal processes) or are just trolling us.
Executives responsible ?
Is this a joke ?
When has an executive ever been responsible? The executives at B signed off on changes that killed hundreds of people and they got paid tens of millions ...
Oh yes, another example of corporate mindspeak that means completely the opposite of what the word actually says.
"Responsible" pairs well with "Family" and "Culture" in corporate bullshit lingo.
The real dishonesty comes from not knowing the models being used behind these services. Transparency is required...
There are lots of services out there claiming to provide access to 405b parameter models (and larger) but when you test them out, they provide results that are a bit sus.
It just doesn't make sense for any business to give customers cheap access to a massive model - given the amount of RAM required, the economics simply don't work.
I don't agree that AI tech is entirely crap, because what I've seen running on dedicated hardware using absolutely huge models is astounding and incredibly accurate, with very few absurd hallucinations. However, the smaller you get with models, the wackier it gets. The sweet spot at the moment for "cheap" models is around 8b-12b parameters, and those regularly hallucinate and spit out crap. I believe these are what is widely used, and we need more transparency on it, because I think a lot of people are being conned.
It also hands ammo to the old grey beard naysayers that have been against everything new since Windows 95.
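The RAM point is easy to sanity-check with back-of-the-envelope arithmetic: just holding the weights takes roughly (parameter count x bytes per parameter), where bytes per parameter depends on quantisation (fp16 = 2, 8-bit = 1, 4-bit = 0.5). A rough illustration, ignoring the KV cache and activations, which only add more:

```python
# Rough memory needed just to hold model weights. Ignores KV cache and
# activations. Parameter counts are in billions, so billions of params
# times bytes-per-param divides out to gigabytes directly.

def weight_gb(params_billion, bytes_per_param):
    return params_billion * bytes_per_param

# A 405B model at fp16 needs ~810 GB for weights alone; even at 4-bit
# it is ~203 GB - nowhere near a cheap hosting tier. An 8B model at
# 4-bit fits in ~4 GB, which is why cheap services plausibly serve
# small models while claiming big ones.
for name, p in [("405b", 405), ("70b", 70), ("8b", 8)]:
    for label, b in [("fp16", 2.0), ("4-bit", 0.5)]:
        print(f"{name} {label}: ~{weight_gb(p, b):.0f} GB")
```

These are lower bounds for inference at those quantisation levels; serving at scale needs that memory on GPUs, multiplied across concurrent users, which is exactly why "cheap access to a 405b model" deserves suspicion.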
LLMs keep demonstrating that summarising is their weak spot. They can shorten, but because they are inherently stupid and have no idea what they are doing, these "AI" implementations are unable to distinguish the important from the unimportant. And that's the key to summarising.
When ChatGPT summarises, it actually does nothing of the kind.
AI worse than humans in every way at summarising information, government trial finds
And yet this is one of the key "applications" AI is being sold on.
My own view is that if you ask someone (you employ) to summarise something, you are expecting them to read and understand it. Then write a summary.
If they are just using AI to do it, then why are they being paid? They are clearly neither reading nor understanding the material they've been asked to summarise - in other words, not doing the job they've been asked to do.