
Maybe it's because...
Those 'tech bros' think they know what's best for you and what's best for their own wallets?
Privacy, the environment and productivity be damned...
Copilot in Microsoft 365 and Apple Intelligence on iDevices are the latest examples of the tech industry's obsession with making services opt-out rather than opt-in. Last week, Copilot made an unsolicited appearance in Microsoft 365. This week, Apple turned on Apple Intelligence by default in its upcoming operating system …
The problem with the tech bros is that they THINK they're Einstein, & people pretending to be clever (namely lots of people in tech who think being able to turn a computer on makes THEM clever) lick their boots.
None of these people would have been rich without HUGE hand ups from rich parents. Musk is a fucking moron who managed to create a website directory just as banks wanting to look "with it" were throwing cash at anything on a computer. I mean Myspace sold for a $BILLION!! Zuckerberg hasn't invented anything. Neither has Bezos. Both are where they are due to MASSIVE cash injections from the old boys network while they bullied & starved better websites (diapers.com as an example).
What is REALLY REALLY FUCKING DUMB is that they know what they're doing will bring the fire & pitchforks in their direction & THINK their cash will save them....but who wants to live in a bunker for potentially decades hoping someone doesn't go mad and eat everyone?
Who wants to live in a shell on Mars hoping that it's built better than a car that can't drive through a car wash?
They're too dumb to realise that THEY, like the oil execs who spent decades breathing in the same lead fumes they were paid to defend, are full of microplastics like the rest of us.
Pretty much only one thing: MONEY.
They will do absolutely anything to collect data, use data and then monetise it for their benefit regardless of whether they have any consent or permission from the users.
They will also do absolutely anything without any concept of social or moral responsibility.
Exactly this. It’s exactly the same as advert consent popups defaulting to “yes, of course I want to be tracked by all and sundry so they can sell my data”
The puzzle with AI though is why? How can you make any money from it? Answer... errr... at the moment, you can't. So that leaves the only other revenue stream - sell your users' data to the advertising industry.
Sound familiar?
Phase 2 is Volume!!1!!!one!! Win the marketshare to win the revenue to make teh profits.*
* Of course that was proven to be nonsense 25 years ago. If you made no profits then a bigger market share is just more costs, but isn't this sweet? Check it out, I can do my annual review in haiku.
Never forget that profit is no longer the main goal of corporate activity. The takings of corporate leaders like Musk, Nadella, et al, are now based not on profit, but on share price.
This is convenient, since share price is a shared illusion with only a very tenuous connection to reality. It is a measurement bolstered more readily by razzle-dazzle and hocus-pocus than by creation of any real-world benefit.
To tech CEOs it really doesn't matter if AI accomplishes anything at all, as long as stockholders are willing to bet on it. The ability to claim lots and lots of AI users serves corporate kingpins just as well whether or not AI provides any actual benefit to those users.
The thinking is probably: if most people won't opt out, make it opt-out.
Then count the ones that don't use it as active AI users.
Finally, make sure that you obfuscate the financial costs by showing a lower cost per prompt for average number of users using the service.
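The dilution trick in that last step is easy to demonstrate. A toy sketch in Python, with entirely invented numbers, showing how counting every non-opt-out user as "active" makes the per-head cost look tame compared with the honest per-prompt cost:

```python
# Invented figures for illustration only.
monthly_ai_cost = 1_000_000       # hypothetical monthly AI infrastructure spend ($)
real_users = 50_000               # people who actually send prompts
prompts_per_real_user = 40        # average prompts each real user sends
counted_users = 1_000_000         # everyone who merely didn't opt out

total_prompts = real_users * prompts_per_real_user

# Honest accounting: spend divided by prompts actually served.
honest_cost_per_prompt = monthly_ai_cost / total_prompts

# PR accounting: spend spread over the padded "active user" base.
diluted_cost_per_user = monthly_ai_cost / counted_users

print(f"cost per prompt: ${honest_cost_per_prompt:.2f}")       # $0.50
print(f"cost per 'active user': ${diluted_cost_per_user:.2f}") # $1.00
```

The bigger the pool of opted-in-by-default non-users, the better the averaged figure looks, without a single extra prompt being served.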
You were able to actually cancel your subscription? Without going through a bunch of dark patterns, then having to wait on hold for 40 minutes to tell the BOFH that you really really want to cancel?
Or did you take the easier route and cancel your credit card instead?
This Wired article on opting out describes places you can opt out.
Here are similar opt-out instructions and links from redact.dev
"Everyone is making their AI opt out because nobody would choose to opt in."
True to an extent (most wouldn't choose to opt in), but it's often not possible, or very difficult, to opt out at all - the message is more along the lines of "Here's the fantastic new AI functionality. If you don't like it, tough shit because you are getting it anyway".
Because most people like it and want it. The Apple boards were full of "when can I get AI on my <insert device that's too old to get it>" posts when Apple announced AI.
I've turned it off on my Mac, and my iPhone is too old. I've turned off Siri and all forms of automated help and suggestions on all my devices, but I'm in a minority in my family and among my friends, some of whom use emojis FFS. I've never used an emoji unless you count colons and brackets.
I set my cousin's new Mac up over Christmas and she loves Siri and autocorrect. I can't imagine why anyone would want to talk to a computer, but she can't imagine why you would want to type when you can just talk, and she wants "one of those talking box music playing things" for her birthday because her mate showed her how to play Neil Diamond and buy bog rolls or something just by shouting at a box on the sideboard. She's even played with ChatGPT somehow and loves it, as do loads of people judging by the vast majority of eBay sellers who use the AI description writer, and when Apple AI starts suggesting responses to emails and messages my cousin will no doubt embrace that too.
We're a niche crowd here and the coming revolution makes me shudder, but most of the population will love it.
I know one IT person that likes it, but he's a lazy sod that uses ChatGPT to write appalling code and then his colleagues fix it in exasperation having failed to get him to respond adequately to code reviews.
Everyone else I know that uses AI is not an IT person, and it amuses me as they say or type in questions then spend ages trying to reword the question to get a vaguely useful answer before usually giving up.
It’s not about code though; code is just a party trick it can do. Plenty of people are now getting used to it and using it to help them through their daily workload.
These are not tech people who read the register, they are not hindered by scepticism and suspicion. They just get on with using a new useful tool.
They may think it's useful, but AI-generated stuff is dead easy to spot even for the layperson, and that taints it as unreliable. For instance, AI-generated descriptions on for-sale listings mean that I - and, judging by comments I see elsewhere, many others - disregard those listings and go elsewhere.
"For instance, AI generated descriptions on for sale listings that mean I and - judging by comments I see elsewhere - many others disregard those listings and go elsewhere."
Interesting. Do you have any figures / data to support that? I haven't noticed any fall off for eBay listings with ChatGPT-written descriptions v human-written of similar length and content. I assume the human-written listings must be reasonable as 1) the items sell, and 2) some lazy bastards pinch my words and use them verbatim on their own listings.
As for asking the user, well, that's an old-fashioned idea, isn't it? How often do you log on to your cloudy app of choice to find that it's been "updated" and now looks different? Indeed, constant rolling updates and fixes are supposed to be one of the key benefits of cloudy crap, pushed as "you're always up to date with the latest version", strangely without mentioning the irritation caused by hunting for buttons that have moved. It follows that AI assistants will simply be shovelled out along with the other updates of the day.
It's annoying, but hardly unexpected. To the gurus of tech, it's just what's best - why would you not want that? The fact that we just want to get on with our work escapes them. I foresee a booming market for AI-free apps - probably written by AI :/
"Do you have any figures / data to support that? I haven't noticed any fall off for eBay listings with ChatGPT-written descriptions v human written of similar length and content."
I assume you work for eBay then, unless you've written a script to scrape that data. Otherwise you're talking out of your arse.
"........."I haven't noticed any fall off for eBay listings with ChatGPT-written descriptions v human written of similar length and content."......>
Lots of posts on the ebay discussion boards from members saying they dislike the LLM written descriptions, and many saying they will avoid buying from sellers who use that feature.
Listing descriptions written using the ebay LLM feature are always blatantly obvious - mainly because it actually tells you very little of any use about what is being offered for sale, and partly because it is often barely intelligible. The inaccurate and misleading nature of many of those descriptions can sometimes even lead to the seller being open to a claim for 'Item Not As Described'.
A lot of buyers won't notice it of course, because a significant proportion of buyers don't seem to read the item description anyway and buy solely on the basis of the pics - possibly can't read?
Intrusion of supposed AI into the ebay search has meant that the search has become even more vague and now returns huge numbers of listings which have no connection whatever to the search terms used.
I avoid them, mainly because they don't say anything useful about the item. The listing states the size as "XL" so why do I need to be told in 1980s Sale-of-the-Century style "..and the size of XL makes this a great fit for most men...". I want to know the chest measurement, whether the zip works and the pocket liners are not torn. But the real reason to avoid them is that the seller has read the crap description and thinks it's OK, which makes them stupid and/or lazy and it's not worth the risk buying from stupid lazy people.
"Apple" is the keyword here: their fabled walled garden an Jobs' phrase "people don't know what they want until you show it to them" are the cornerstone of this behavior.
MS et al started mindlessly copying Apple's actions because they were successful and one slippery slope later, we are here: our machines are not actually ours but remain the property of big tech, users are treated with absolute contempt, tolerated only as beta testers or because more money can be extracted from them by any means necessary.
I'm starting to notice more family and friends discreetly using ChatGPT. I have one friend in particular who talks like a techno expert but is challenged by the most basic things, like using a to-do or calendar app to track reminders. The other day they were talking about a topic they were trying to sound very authoritative on, and the information forwarded to me was simple screenshots of ChatGPT responses.
I can only imagine the number of people out there who are passing off the "work" of these AI services as their own, sort of like those who plagiarize without giving credit.
Si mundus vult decipi, decipiatur. (If the world wants to be deceived, then let it be deceived.)
Augustine had a religion to spread when he said that. AI is just about the value-add profiteering. I just want my devices to do what I tell them, so constantly find myself yelling at them "You're not smart enough to help!"
Even with AI bolted into the OS infrastructure, they won't be.
So far no working AI on my Apple MacMini M2, although it is at the latest OS variant. For my language area, no AI has yet been developed by Apple. Not that I have any need for software and features like AI: I have my own mind and knowledge (plus 75 years of life experience). But as soon as Apple starts pushing it, MacOS will disappear from my computer and be replaced by Linux (probably Asahi).
My PCs have been Windows-free for quite some time, most recently the gaming PC with Windows 11: the pushing of Copilot convinced me it was high time.
I've noticed the Google "AI Summary" at the top of my searches.
The other day I was searching for information on oil-based paint fumes, and the Google AI summary assured me they were lighter than air and not flammable. Since I know this to be false, from reading the information on the can, I've decided to ignore this section of the search results in the future.
There's a technology that's been around a few years where you only need to activate the muscles in the throat as if you were speaking aloud, and the results can be sent to a text processor or speech synthesizer. Don't recall the name of it. And of course there's work being done on using cranial sensors to bypass even that.
Adding all this "AI" crap everywhere just wastes time as we now have to work around the huge piles of shite that are being forced on us. Yes, it has its uses but that doesn't mean I want to use it. If a company invested heavily in it and is now struggling to make any money from it, my advice is to swallow your losses and move on. It's never going to make you any money so all you are doing is pissing more money down the drain. Enshittification is the appropriate term because they are adding shit to everything.
I have no problems with "AI" apart from its severely misleading name. In the end it's just a tool and people like to choose the tools they use and really, REALLY HATE being forced to use specific tools.
Indeed it is. The current madness around AI wrt real results shows potential to make AI the very definition/epitome of enshittification.
Though it may be premature given right now AI is mostly just annoying and inconvenient. It appears likely "we ain't seen nothin' yet".
While Apple, MS and Google rush to force enable AI features that users don't want by default, Samsung have done the opposite.
A recent update added a slew of "AI" features to my S22 Ultra, all turned OFF by default. And, they included a single settings screen that controls all of them.
Seems some devs triumphed over manglement for once.
My son and his family moved into a house recently. All Samsung appliances. I told them they will fail soon. The dishwasher (twice) and the washer (main bearing failure) did, within 3 months. All things Samsung have now been discarded and replaced with other brands.
We managed to fix the dishwasher the first time, the second time it went out. Just for fun, I watched the Youtube video on replacing the washer bearing...it involves disassembling the entire washer, pretty much into its component parts. Out it went as well.
Yep. From experience.
They're all* made in the same factory in China, and most of the old "quality" brands have been resold, so that's no use. I have Bosch stuff now, and, with the exception of the fridge, which blew its motor driver a few years back (due to power line transients?), they have been pretty reliable. Still made in China, though. The next step up is the "Mercedes-level" appliances like Miele, and I don't think my budget can afford those.
My rule is that appliances get one repair (as long as the repair cost is less than half the replacement cost) and then out they go. I remember when I bought my first house and it came with a Maytag clothes dryer that had *definitely* seen better days, but still worked fine. We had that dryer for another 15 years before my wife made me get rid of it. The replacement (top of the line Maytag) lasted less than 10 years.
*well, maybe not "all", but most of the different brands look awfully similar inside
My Microsoft 365 had Copilot added a few days ago. I can't opt-out except by switching my account to Microsoft 365 Family Classic, which isn't an option offered to me.
As for the apps running locally on my Mac, there is no way of removing Copilot (though you can stop it doing anything via Privacy settings) except by deleting the apps and installing old pre-Copilot versions.
It's a bit easier to get rid of Copilot on Windows, where the apps have a disable Copilot option.
I had to go the cancel-subscription route as well, except it flaked out and didn't offer me any alternative subscriptions.
The tech I contacted was very helpful and only marginally confused as to why it wasn't doing what was expected. However, the speed and precision of the way alternate solutions were offered indicate that this was not his first rodeo as far as dealing with this kind of request and problem. I'm guessing they have a lot of cancellation feedback to redirect to /dev/null... er, pass on upwards.
"After all, the benevolence of the tech industry has always looked after customers. Nobody asked for never-ending updates, and yet here we are."
I wouldn't necessarily call that benevolence. I'd call it shipping a substandard product sooner, and patching the problems as they get spotted/exploited rather than putting more effort into having a secure design from the outset.
The ability to continually update is a double edged sword. It gives us the ability to have the latest versions and fixes, but it gives them the ability to be lazy and do the fixes later (if at all).
A lot of authors and editors find themselves being put in a difficult position by these tools appearing - they are often contractually obliged not to use generative AI with the documents they are handling, but suddenly their word processor is feeding everything they have into it anyway without asking or giving them an option to disable it. On Google Workspace people talked about having to escalate to third-line support (most of a day on the phone just to get there) in order to have Google enable the menu that already existed to turn it off. There is so much wrong with that picture.
It may be that the ultimate consequence is a wider move to tools like LibreOffice for people that care and a loss of market share for the companies pushing this nonsense. I hope that they will face some kind of justice for treating us the way they have when this bubble bursts.
Well, this is the danger of using a live-service, cloud-based editing tool: it is always changing and the users have no control over that process. Ultimately these impacted people need to change their contracts if this is a major issue. This is 2024, not 2019. I know that's easy for me to say because it's not disrupting my life. Still, an LLM can be a supercharged spellcheck or it can write everything. Some nuance in contracts by all involved parties is required. AI is being built into everything, not least because part of the refinement process is people actively using it. That's why they won't let people turn it off. It's just not going to be a thing, ever. I'm sure there will be a market for AI-free software that pops up to fill this niche eventually.
In this wild new world, we really need some basic legislation that sets standards for measuring the impact of AI-generated content on any creative work. Defining original work vs AI-edited vs mostly AI-generated is critical. This is honestly what I am most concerned about with where things are currently. Society and law are way behind on this critical aspect of helping everyone understand AI's impact on what we assume is human and verifiable.
It's not just live-service cloud based editing tools - the same bullshit is happening in Word. It's being crammed into everything in the most thoughtless and tiresome way. In an environment that has been pushing to monopolies for the last thirty years, when those monopolies all turn on you at once it's hard to work around.
If Google, Micros~1, and Apple are so convinced that users will be so enamored of this Automated Idiocy that they can force it down the throats of their users, sometimes for a fee, let the market decide.
One market they cannot afford to lose is academia. If you can't convince the college-educated that your products are essential, they won't be: not for universities, then not for corporate, and ultimately not for consumers in general. That is a big part of how they became so integrated in society in the first place (see: history of computing; history of the internet).
If however as a matter of policy, universities enforced a blanket ban on AI-assisted (read: plagiarised) content, you will convince those markets of the reality, that AI is dodgy at best, dangerous at worst.
Students should have work rejected if it was created on an AI-assisted platform. Otherwise there is no way to tell if it is the student's work. Either that, or oblige the software to tag any AI contribution in its metadata with the Mark of the Beast.
Ultimately I think academia will be forced into having faculty compose their work on in-house web-served office suites which can tell if the content is being typed in real time by someone with human-like entry characteristics and not simply cut-and-pasted from ChatGPT or the like.
Will Apple / Micros~1 / Google build such software? I doubt it. Does it already exist on FOSS platforms? If it doesn't, it will, and right soon. Supply and demand.
Universities do have such rules, but they can only be enforced if you can tell AI contributions reliably. For courses which use coursework as part of the assessment, and for disciplines where there is no single right answer, this is hard, and you can't expect the AI pushers to help. (It isn't in their interests, and there is nothing to stop a student adapting the output anyway.)
In most cases, the only solution is to assess students under traditional exam conditions. The arts and humanities won't be keen on that. Neither will the distance learning industry. Perhaps those qualifications just become worth a little less as a result.
I've just done a review of my company's data consent options. We show checkboxes for data use options at sign up, with them all defaulted to off. Once signed up, the profile editing section then shows the same options.
Comparing to a slew of competitors I was shocked at how many defaulted to opted in at sign up with no ability to opt out. Some had a global consent option with a checkbox, but forced you to accept in order to complete the sign up. One showed several options, but ignored the choices made and opted you in (that has to be a bug).
Opting out of several competitors data consent use once signed up meant closing your account entirely. One actually forces you to email a request to opt out of anything, including bulk emails.
So this AI opt-in by default doesn't come as a surprise.
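For what it's worth, the default-off pattern described above is trivial to build; here's a minimal Python sketch (field and function names invented for illustration) of consent flags that default to opted out and only honour choices the user actively made:

```python
from dataclasses import dataclass, asdict

@dataclass
class ConsentOptions:
    # Every option defaults to opted OUT; ticking a box is the only
    # way a flag becomes True.
    marketing_email: bool = False
    data_sharing: bool = False
    ai_training: bool = False

def signup(chosen: dict) -> ConsentOptions:
    """Build a user's consent record from the boxes they ticked.

    Unknown keys are ignored rather than guessed at, and nothing is
    silently flipped to opted-in behind the user's back.
    """
    valid = {k: bool(v) for k, v in chosen.items()
             if k in ConsentOptions.__dataclass_fields__}
    return ConsentOptions(**valid)

# A user who ticks nothing stays fully opted out:
print(asdict(signup({})))
# Explicit choices are respected, not overridden:
print(asdict(signup({"marketing_email": True})))
```

The competitor that "ignored the choices made and opted you in" is presumably doing the opposite of that last step, whether by bug or by design.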
If I have a website with information on a very niche subject and someone else creates an article through AI with information that could only have come from my website do I sue the person who asked for the information, or the AI vendor that scraped it off my website?
I was always under the (mis)understanding that fundamentally one could not own an idea, only the expression of an idea typically in copyright and patents (although I am unclear on those relating to chemical processes.) If you wished to protect an idea then I imagine using trade secret protection would be the obvious route but the obligation to protect that trade secret falls upon you.
Your putative inquirer is to my mind an innocent party.
Assuming your website was public and your conditions of use were consistent with the concept of publication and if the scraping party had ingested your article and recast it much as a Vulture journalist might then I don't think there are really any grounds.
Having wasted enough of the alphabet in writing this, I would say it would be rather more a case of who you were and how much lucre you can throw into the fray, given the chronically parlous state of the US judicial system (soon to plumb new depths, I hear) and the decidedly peculiar intellectual property legislation that the US has enacted pandering to various interests, which has been forced upon other nations through the various trade agreements that the current administration now wishes to dishonour.
One site I use (Canadian) explicitly says that you own the copyright. I would be surprised at any site that claimed ownership of any work derived from your prompt.
Just from a purely mercenary standpoint, the latter would be on the hook for any litigation claiming infringement, no matter how frivolous or futile. There's a cost to them even to request a summary dismissal.
Apple Intelligence does not work on my iPhones or Macs; they're too old. It does work on my iPad. It's been turned off with extreme prejudice. Copilot has been turned off on all MS stuff. I don't have any Google/Amazon/other AI crap.
The day may yet come when this stuff cannot be turned off. Then I will need some volunteers to assist in storming the gates of Hell... ah, MS, Apple, Google, Amazon... and putting the heads of senior executives on stakes.
Some form of AI will be in everything going forward. It already is if you count various implementations of ML as AI. So what can they allow you to opt out of exactly? Never mind that AI will be baked into hardware and the software will need to use NPUs to better utilize modern CPUs and GPUs. Software with LLM-type AI also requires ongoing context and feedback to work optimally. So AI is baked into all products now, and what we're asking is for these companies to fork their products completely to accommodate the 10% of users that actually care.
Back in the present, AI is not baked in, it is tacked on when an update adds AI services to the software or hardware product you use. What we're asking for is those AI services be opt-in instead of opt-out, no fork required.
As for the future, I doubt AI will be in everything; not every product needs an LLM shoved into it, though I am sure Microsoft, Google etc. will try: Copilot toasters, Gemini microwaves.
There is no sane business case for AI, given the enormous cost and negligible utility. They have to act like a drug cartel, hooking everyone and then charging, as they did with cloud services and web-based software on subscription. But we aren't getting hooked. Most people don't want AI or Recall on their system. They are staying away in droves. So they force it on us and create backward incompatibilities to block our escape. Because computers from 10 or 15 years ago would do anything we need them to, if GAFA didn't break them.
Plan B is to make it a national security imperative. Universal AI and data centres allow the state to have access to everyone's hard drive, all of which have an AI spook built into the OS. Universal surveillance, 24/7.
The Metaverse failed to gain traction. Blockchain made them a few sovs. They think they have finally hit pay dirt by tarting up an experimental and unreliable technology as 'AI' and selling it as a solution to everything. A technological medicinal compound. But we all know that AI is not really 'intelligence'. They will eventually get rumbled, AI will go the way of the South Sea bubble, tulip mania and sub-prime mortgages, and they will quietly move on to the Next Big Thing.
It's already useful for some stupid stuff and it's constantly getting better, so it's not going away. It'll get wound through all aspects of modern life and then the real enshittification will start. Just as simple search got better and better until the point where they started inserting ads and sponsored results, the AI is going to improve enough to rely on and then it'll start featuring sponsored sources, leaning in endorsed directions, ignoring sources that conflict with corporate direction, etc. You'll end up choosing an AI with comfortable influences the way we used to pick voices for the GPS in the car, except that this AI will lean Christian and this one will be super progressive and that one is totally straight up for business and won't screw around at all oops my bad let's get that cleaned up for you. Sh_tshow coming.
The business case is that it's the next thing. Phone sales have been slowing because they are just a commodity now. Folding and other novelties aside, they all do the same things as well as each other and it's just personal preference about model and size, not any overwhelming performance differences (geeks aside, of course), and only the real fanbois rush out every 18 months to buy a new one whose improved performance is unnoticeable to 99% of people. In the old world that was fine. The car industry got along fine for nearly 100 years with each manufacturer having a small range of cars and bringing in a new model every few years. People changed them on a fairly regular cycle and they made steady profits and paid steady dividends. The phone companies have been in an equivalent state for nearly ten years now, but today the stock market wants ever-increasing profits - a steady 15-20% a year is no longer good enough, so companies are forced to monetize everything and "innovate" whether it's useful or not to increase those profits year-on-year, whether or not they're providing better value. If one of them goes AI, they all have to go AI. That's the beauty of trends - the AI trend is no different from the "make it smaller" trend of the early 00s and the "make it thinner" trend of the 10s.
"Phone sales have been slowing because they are just a commodity now."
True for a while for most folks of finite means. For pay-as-you-go customers, almost the only reasons to buy a new phone are:
The non-replaceable battery is fucked.
The USB socket is fucked.
The screen is cracked coz you dropped it.
Interestingly, two of those are things the manufacturer could avoid.
I wonder if we are just out of touch? I have no idea if everyone finds genAI products as distasteful as I do. Business people are as obsessed as anyone could predict, most tech people I talk to are distrustful at best, but I genuinely have no clue what the average user thinks. A lot of people clearly use the tools, but it also feels like no one is going out of their way to pay for them. People still hate Windows 11, the iPhone 16 is seeing a slump in sales, but it isn't like people have stopped using Google. Is convenience just going to be the deciding factor?
I would rather disable every one of these things, and I choose tools and sites that avoid them when I can. The same is true of targeted ads for me though, and most people seem fine with that. It feels likely to me that society will simply accept generative AI without much fuss (and all the surveillance, plagiarism, misinformation and environmental destruction that entails). Microsoft seems to be taking the approach of just adding it to everything and jacking up the prices for all users, rather than making it a standalone service. Even if generative AI is disliked, are people going to stop using MS products? No, they will simply pay more and MS will attribute the profits to AI.
"I wonder if we are just out of touch? I have no idea if everyone find genAI products as distasteful as I do. Business people are obsessed as anyone could predict, most tech people I talk to are distrustful at best, but I genuinely have no clue what the average user thinks."
The average user doesn't think (things through) at all.
There are various risks, certainly for organisations, with AI-enabled applications, including Data Protection risks, risk of company info leaking, and in some cases legal risks. What happens when the AI functionality is handling people's personal data? What happens if the AI functionality is handling legally privileged information? And so on.
The same sorts of risks already exist without AI when individual staff members for example put internal company documents through online language translation tools or online spellchecking tools, or use their own private devices (whether BYOD or unofficial use) to receive company email or to work on company documents and that gets backed-up to personal Cloud storage - those are various ways for company info to "escape to the outside world". AI just adds yet more ways to potentially cause problems for organisations.
I fired up ChatGPT just yesterday! Well sure. You see, I discovered my original 1979 copy of “Manual of Possibilities” (https://annas-archive.org/md5/994d1d4b39f0a8057800e272e35b3167) and read this little morsel:
“Similarly, programming can replace painting technique, at least of the nonrepresentational kind. A computer will never create a Degas dancer or a Rembrandt portrait, but computers can be the canvas of much of the nonobjective art we see today.”
Eh, what? Can’t create a Rembrandt portrait?
So I uploaded a snap of me muggins and asked ChatGPT for the "portrait in the style of Rembrandt" version, but with dirty old winter changed to a pastoral springtime scene.
It rendered me with 1-week facial growth!!
AI wins against the humans, again. Egad.
But we've been building on core skills all our lives. Just because you know how to enter the function SUM(...) in a spreadsheet to add up all the values in a range of cells does not mean you will forget how to add numbers one at a time. You're taking shortcuts to save time and effort.
You can type far faster than a scribe with a sharpened quill can dip into an inkwell and inscribe calligraphic letters on parchment (and I'm old enough to have used inkwells at school!) but that doesn't mean you will forget how to read and write.
If it wasn't for the so-called hallucinations (IMHO the consequence of using crap - like posts on X - in the training data of some of the LLMs) we would have better performance from the GPTs and other implementations, and we would have more confidence in the result of asking an AI: "Summarize War and Peace in 250 words or less at a graduate-level comprehension."
Just IMHO.
When AI hit that "Hey, cool!" moment 3-4 years ago, companies started *dumping* money into it on the promise that it would do anything and everything.
Companies have dumped *so much* money into AI that their business will probably fail if it doesn't show a return - or, at least, the executives will all be removed if they don't show a return on that investment. They've pumped so much (microsoft: 200 billion?) into AI that, at this point, it's their core business.
Not going with AI is like saying that you don't want to interact with the company's primary product. They can't just let it die - all they can do is change license terms to forcefully extract more value from it.
...according to Private Eye (a British news and satirical periodical M'lud) issue 10th Jan, some Korean users of X are so appalled by the decision to unilaterally train its AI on their content, they've taken to describing "destroying Elon Musk's testicles with a spinning kick" as a healthy and wholesome traditional activity in order to poison the model's data.
"We're going to shovel a load of AI services shit into your apps that you never asked for," ... "Enshittification."
I am not saying "services" is a synonym for "shit" ... actually, thinking about it, I am. Really, I suppose that is what Cory Doctorow's coinage was defining.
I wish they would focus on making their existing products a bit more stable before adding all this AI clutter and redesigning UIs to the point of being unusable. I'm typing this on a work laptop running Win10. Windows Explorer is sitting unresponsive there now, grayed out and with a spinning wheel cursor. Probably some shell extension that has croaked, but this being a work asset that is locked down so completely that it won't read USB drives – let alone install anything – something must have rotted with a previous OS update. Up-time has been 1h40m and now requires a reboot. Killing Explorer does not help.
And give me back the option to specify the window border width. An effing measly 2px would do the trick, so I can at least see where one terminal window starts if it is overlaid on another. Now everything is black, and if the text from both windows happens to align, it makes for confusing output. Even if you're just looking for the title bar to move the danged window around.
"drill baby, drill"
btw, Trump is not the cause, he's just another facet of the same trend: I'm big, you're small, so I can FUCK YOU, and you can do fuck all about it. And you wokey commies with your pathetic 'laws' - FUCK YOU TOO. And this message sounds loud and clear to every little (or big) hitler or putin around this planet: WE LIKE!!!
well, can't wait to see that aliexpress and amazon (Chinese) junk go up in price on 1st Feb when bought by US 'consumers'. Up by 60% PLUS extra margin, because 'every little helps'. And it'll be 'interesting' to watch the price of solar panels go up by 100% (proposed tariff), given 90% of solar panels sold in the US are Chinese-manufactured. But hey, once they're made locally, I'm sure locals will be happy to work for Chinese wages, I mean, why wouldn't they want to make america great again? Somebody will pay. And there's already some speculation that if this round of trade war against China gets China into serious trouble, as it's likely to, they might decide to try a small, victorious war to cheer up their disgruntled plebs. No prior art for such a scenario, mind you.
Don't get me started...
...I would argue that we don't really have an "Opt-In" or "Opt-Out" option for much of what Microsoft are trying to sell us.
I have turned off and disabled OneDrive more times than I care to remember, but somehow it just seems to reappear on my taskbar demanding that I sign in.
I run an IT company with my own Active Directory and backups - yet I keep being asked to switch to an online Microsoft account for security? It's also told me that I don't have a backup and so am at risk - all very dodgy and designed to "trick" people into buying their services.
Even when you do get asked to "Opt-In" there is no "Opt-Out", just an annoying "Ask me again later" - what happened to the "F Off and NEVER ask me again!" option that I desire?
It's almost like a Trojan, we have to block it every time, but they only need to be allowed in once - we don't stand a chance!
If you are not using OneDrive, go to Settings -> Apps -> Startup and switch OneDrive to OFF. This has worked for me to stop it from nagging me to set it up.
While you are there you might like to look through the list and switch off other MS apps that have been set to start automatically whether you use them or not - Edge and Teams the most notable resource hogs among them.
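For anyone who prefers to script the same tip, the Startup toggle appears to map to the app's entry under the current user's "Run" registry key. A minimal Python sketch under that assumption (the `disable_startup` helper is my own naming, not an official API):

```python
# Sketch: remove an app's auto-start entry from the per-user "Run" key,
# which is (an assumption) what Settings -> Apps -> Startup flips for OneDrive.
import sys

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def disable_startup(app_name: str = "OneDrive") -> bool:
    """Delete app_name's startup entry; True if one was found and removed."""
    if sys.platform != "win32":
        return False  # the registry only exists on Windows
    import winreg
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.DeleteValue(key, app_name)
        return True
    except FileNotFoundError:
        return False  # no such startup entry - nothing to do

if __name__ == "__main__":
    print("removed" if disable_startup() else "no entry found")
```

As noted above, OneDrive has a habit of re-registering itself after sign-ins or updates, so this may need repeating just like the Settings toggle.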
The AI slingers have promised that their AI tools will let business pay off hosts of irritatingly expensive meatbag employees by replacing them with a licence for an AI tool.
To get the tools trained they need data from the meatbags so that the AI can replicate them to minimum acceptable functionality.
To get data from the meatbags MS/Apple/Alphabet force collection tools onto the tracking devices they have previously ensured all meatbags have.
Some meatbags are suspicious or simply immune to the psyop programme that tells them the collection tools are what they want.
So the collection tools must be forced onto the meatbags so their data can be harvested so they can be replaced.
Then Profit.
My biggest issue with AI is the sudden rush to replace people, especially at the bottom.
Outsourcing & offshoring have already decimated IT as a career choice starting from the bottom. Who's going to want minimum wage and being called in at 3am when you can get a 9-5 earning more?
We don't have the foundation of people coming in anymore; in 20 years' time, when we retire, the expertise will all be offshore.
Now comes AI. The foundation disappears & you've lost the expertise... who's going to do the clever stuff? The inventions? The next layer of tech to drive humanity forwards? Not AI. It's a photocopy machine; it's not inventive in the way a human brain is.
AI would never have created graphene, would never have created fire or the wheel.
A few things: IBM plan to dump 800 HR staff because their functions can be done with AI and far fewer people. (Not sure if they've done so yet). Is HR the bottom?
The new forms of AI (based mostly on the transformer architecture Google originally developed for translation) have been inventing new things already - search results are littered with them.
AI is based on a few functions of human brain cells - if we can do it, AI can do it. Creativity is all about "exploring the space" - and AI has been exploring faster and wider than we have so far. Doesn't mean we can't do the same - we're just slower.
"Is HR the bottom?"
Can you get sued for squillions as a result of a bad HR decision? You can? Oh. Well the "responsible" execs will have departed by then, along with the bonus they got for improving the bottom line.
You'd think the shareholders might be pissed off, but most of them are "us" and we've delegated the oversight to a pension fund manager who has also changed job since their failure to oversee properly.
.......and I've been part of discussions on this very topic over the past six months...
If the people who have spent hundreds of Billions of Dollars on 'AI' can't begin to show positive usage trends, regardless of real-world usefulness of those said-same AI applications, people will start to lose their jobs and bankers / investors will start to call in their notes.....
Yup, I got fed up with buggy Ubuntu “snaps”, plus I’m never running Microsoft or Apple again: too much AI, zero actual innovation. Let’s put it this way, my Office 2007 “pro” license is still approximately 100% compatible with current day Office 365, at least for the functions I use. What have they done in 18 years, other than nothing? Yeah they put in advertising and spyware.
I guess I’m giving up on Ubuntu too, since they are now a corporate property and will soon become filled with zero innovation except for advertising and spyware.
Just about done with Google, too.
None of it is Artificial Intelligence. Please stop referring to it as such!
They are just very advanced and very sophisticated analysis programs or word/language analysers/processors.
They do not think.
They are not intelligent.
They are just computer programs.
Actual 'Artificial Intelligence' is still a long way off. We are still a long way from fully understanding how human intelligence and our brains actually work. It is unlikely that we can produce real AI until we fully understand how our own intelligence works.
The current use of the term 'AI' is nothing more than a marketing con.
Would Microsoft be prepared to defend its actions in law? When Copilot first appeared you were able to deny, disable and remove the app. Mysteriously it came back without permission - and that is a point of law: once you have declined Copilot, installing and enabling it on that computer without permission being granted (not assumed) becomes a breach of the Computer Misuse Act or similar legislation across the EU.
They want opt-out so they can steal your work.
If it was opt-in, they would have to ask, but with opt-out you have to know they even exist to stop them using your work.
There are proposals by the UK government to make copyrighted work on the internet freely usable by AI firms, with an opt-out for copyright owners - but what that will truly mean is the death of copyright in the UK.
Imagine that you've written a really great song and put it on bandcamp or magnatune. Some new start-up AI you've never heard of scrapes it & produces derivatives, then tries to sue you for plagiarism. How do you prove they've actually stolen it from you?
Anyone remember the tag: "Don't be evil?"
Y'know those really annoying and useless AI summaries at the top of Google searches? Y'know, the ones atop the paid-for results. The things that take up space and force real results off the huge screens we all have now...
Well, I thought I'd ask Google how to turn them off. After all, if we are being opted in, there must be an opt-out. Somewhere.
I had to ask Google because such an opt-out wasn't something I could find. We are all familiar with Big Players hiding options they would rather users didn't disable, but I can be tenacious in my quests. But not this time. I eventually had to ask.
Google cheerfully told me I cannot turn AI off in the searches, but went on to detail how I could turn AI off on Apple devices...
As it happens, I ditched Google for searching sometime in early 2024. Too many irrelevant results. I found myself having to trawl through results to find what I wanted. Then came along AI summaries. Last straw. A number of alternatives were tried. Searches were done in tandem with Google. DuckDuckGo (for me) came out as far superior. So I am a devoted Duck person these days.
Don't be evil? Google, try this: Don't be stupid...
In Apple’s case, at least, I suspect so few people turned on any AI features at all in iOS 18.1 and 18.2 that the corporate bosses are desperate the punters never will unless they’re forced to “experience” the new “functionality.”
Following through on another El Reg article today, I installed DeepSeek on my desktop and got totally reasonable answers to “What is air speed of an unencumbered swallow?” and “Latkes vs. Hamantaschen?” So what’s not to love? ;-)
For the record, those queries running locally emptied no reservoirs and caused no lights to flicker, nor even made the machine’s fans audible. No idea, of course, what they cost the mothership.
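For what it's worth, a desktop install like that is typically served by something like Ollama (an assumption about the setup; the helper names below are mine), which exposes a plain HTTP API on localhost - which is exactly why the prompts never reach the mothership. A minimal sketch:

```python
# Sketch: query a locally hosted model over Ollama's /api/generate endpoint.
# Assumes a local Ollama server on its default port with a DeepSeek model pulled.
import json
import urllib.request

def build_request(model, prompt, host="http://localhost:11434"):
    """Assemble the endpoint URL and JSON payload for a non-streaming query."""
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode("utf-8")
    return f"{host}/api/generate", payload

def ask(model, prompt):
    """Send the prompt to the local server and return the model's reply text."""
    url, payload = build_request(model, prompt)
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("deepseek-r1", "What is the air speed of an unencumbered swallow?"))
```

Everything stays on the loopback interface, so no reservoirs were harmed in the asking.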