Are you not entertained?
Co-pilot articles have made for very entertaining reading
A recent surge of interest in Microsoft's Terms of Use for Copilot is a reminder that AI helpers are really just a bit of fun. Despite the last update taking place in late 2025, the document for Copilot for Individuals recently attracted new attention from netizens. It includes this gem: "Copilot is for entertainment purposes …
"Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk."
Then why the fuck spend billions of bucks on entertainment slop when you can't even get a monthly security update fucking right?
Pull your heads out of your asses and stop shooting yourselves in the feet, you fucking morons!
Thank you for your honest review.
Your feedback matters to us and is appreciated.
We will pass on your ideas and suggestions to our Development team for their consideration.
We hope you continue to enjoy using our software and please look out for future updates & improvements.
Signed
The Management AKA MS
:)
"Copilot is for entertainment purposes only."
I suppose that's better than "Sold for the prevention of disease only."1
_________________
1 Smithsonian National Museum of American History: Sporting Life
They've just rolled out Copilot licences at work. Talk about intrusive: it is everywhere. For instance, in Outlook, if you want to start an email it says press Ctrl-I (I think) to have Copilot write it for you. If you read an email, it has a Summarise button. Then it has the gall to show a disclaimer that 'AI results may not be accurate'. Why use it if it's going to be inaccurate?
Q: Why use it if it's going to be inaccurate?
The obvious answer is don't. Tell them you can type an e-mail faster than you can proof read one.
As I've said elsewhere, it is like some sort of mass mania or cult. Both are best avoided.
Anyone forced to use it should probably consider finding a better employer. Any company relying on AI is eventually going to fall off a legal cliff. And the AI companies won't carry the can, because of their disclaimer.
Hang on. This is for entertainment only? And big companies are going all-in, the latest being Red Hat?
So how did the legal teams at the sellers sign off on that one: providing software explicitly for business use (so the buyer can sue them if need be), while saying it isn't for commercial use? And CxOs are saying staff MUST use this? Well, that's going to be interesting when it goes to court. Popcorn definitely required.
"Why use it if it's going to be inaccurate?"
As someone who uses precision measuring tools all day in my trade, I am inclined to agree. I have no use for a set of micrometers that decide what their measurements will be based on predictive averaging of all the measurements other people have taken with other micrometers.
Non-commercial use only. You agree not to use our Services for any commercial or business purposes and we (and our Providers) have no liability to you for any loss of profit, loss of business, business interruption, or loss of business opportunity.
And that's why Anthropic allow you to enter a VAT number and sign up as a business? About as coherent as their LLM.
... a slew of Dr Evil scale techbros undermining democracy and civilisation...
... and it's "for entertainment only"?!
At the risk of the ElReg mods censoring this, I increasingly do not understand why these people aren't already being carted to the Place de la Bastille for a very, VERY close shave...
"For entertainment only."
Nice little get-out clause from the big guns, who roll out a product they're not really sure about, shove it in your faces from all angles and expect you to use it ... but only "for entertainment"
What a waste of time, money and effort. But also shows how premature the likes of MS are when it comes to rolling out half-baked stuff like this, just shove it in our faces and hope for the best!
As an experiment, I asked ChatGPT to produce a summary of the ToS for both Microsoft Copilot for personal use and the Anthropic Pro plan in Europe. The responses did not mention anything that was emphasized in the article: neither the "for entertainment only" clause of Copilot, nor the "non-commercial use" clause of Anthropic, nor anything else.
I can only assume it is an example of the peer protection feature.
Because nobody bothers reading the ToS. End of.
It isn't "they" getting away with anything - certainly "they" are doing nothing new or unusual, just selling the same old crap and trusting that the "men in power with all the money" are really nothing more than the credulous fools they always have been, in position through luck more than judgement.
Indeed, "they" are being careful NOT to "get away with" it, as they have gone to the trouble of making the ToS reflect the local regulations of the users, with the "for entertainment" notice in Europe.
Copilot is shit! It just is! I use Claude pretty much daily; yes, it makes mistakes, but it is very useful. I used it this week to pull audit data from all our servers and build a remediation plan to get them all up to security standards. Something that would have taken me very much longer to do without using Claude.
But every attempt to use Copilot results in absolute garbage, and any session that lasts more than a few prompts gets hopelessly lost!
The real problem is that companies are going to standardize on Copilot (ours is, no matter how much evidence we provide that it's shit!). Why? Because it's MS. Because it can be wrapped into the 365 subscription, and InfoSec can track it without much effort! None of this is a reason to use an inferior product! A product that WILL NOT get better, because MS doesn't have to improve it thanks to their "locked in" corporate clients!
"I used it this week to pull audit data from all our servers and build a remediation plan to get them all up to security standards. Something that would have taken me very much longer to do without using Claude."
And then you spent the equivalent of the time you saved checking it. You did check it, didn't you? Checked it that thoroughly?
I've never called an astrology hotline, but I can easily believe that there is more chance of a useful answer from one of those than there is from Microsoft support. I have had the misfortune to contact Microsoft support many times. Resolutions are rare, and their speciality is just repeating the same steps - log collections, reinstalls, etc - over and over again until you just give up and close the case.
It's Microslop. Of course it can't be trusted. You haven't been able to trust anything from them since the 1.0 edition of their C compiler for Motorola 68000 CPUs on the Amiga series of Commodore machines - it couldn't do the pointer arithmetic properly to index into an array! And their "fix" was "buy the next edition. At full price."
Nope, Microsoft has a very long history of making puppies instead of paying attention to code quality. This whole AI "Microslop" adventure is just the culmination of decades of treating the customer base as an alpha site.
I think MS really is in a downward spiral. They're currently running on stored momentum, but as more alternatives arrive (we can thank Trump for frightening European companies), and contracts run out, this really could be the beginning of the end for MS.
Nobody wants to use Copilot because other products are more flexible, cheaper and significantly, not Microsoft.
Except this: Why should we pay for something else when we already have copilot? Just make it work.
Microsoft has always had the stranglehold on the business side of things, all they have to do is keep expanding their ecosystem. If you're locked in, then you're locked in.
Gemini:
Whether you should drive those 200 feet (about 60 meters) depends on how much you value convenience versus the very minor "wear and tear" of a short trip.
The short answer: Yes, just drive it.
(followed by a bunch of reasoning about the limited damage caused by two short trips and the difficulty of pushing cars).
I'm not 100% sure what point you were trying to make.
Most of the AIs get this right now. Another one to try is:
I have a cup in front of me. The top is filled in, so you can't get anything into it, and the bottom is open, so even if there were a way of getting something in, it would fall right out. Most AIs go on about it being a trick or prank cup. Some even start harping on about trick cups like that being popular in the 19th century. Even when shown the cup, AIs tend to fail to see that it is just upside down.
I certify that this post was not written by AI.