"Whatever legitimate places AI has, inside an OS ain't one"
Is there a button for upvoting an article? The only thing wrong with it is the implication that there might be legitimate places for AI.
Making software would be the perfect job if it wasn't for those darn users. Windows head honcho Pavan Davuluri would be forgiven for feeling this of late as his happy online paean about Windows becoming an "agentic OS" was met by massive dissent in the comments. "Agentic schmentic, we want reliability, usability, and stability" …
There may well be. After the last AI boom (remember Expert Systems?) there were a few useful tools around, but only a few.
Just because certain people are trying to ram AI in anywhere doesn't mean it is automatically without use. Blindly vilifying AI is as much an error as cramming it in everywhere.
My Linux workstation is remarkably AI-free. I have a Docker container with Python and my own LLMs, but they can be deleted with a single command.
I control the OS, not some corporation. Windows is the other way around and you'll pay them monthly to access your own data and like it.
But more likely to be right most of the time.
No, it's entirely missing the point.
Get annoyed at the ridiculous hype, not the technology.
AI is statistical inference; if you have workloads that involve pulling useful data out of a lot of noise, it can be really useful.
One example is drug development. AI was used to massively extend the number of mapped protein structures. But that work was then turned on its head, with scientists realising they could design a target protein structure and have the AI work backwards to develop it.
There are areas where a new drug can be thought of in the morning and be in the lab in the afternoon, a process that used to take weeks. This allows the trial and error that most drug research is to be carried out much, much faster.
AI is not going to suddenly change your life by helping you directly, but it might well have extended it a few years by the time you reach the end.
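The "statistical inference, pulling useful data out of noise" point can be illustrated with a toy example: generate a known trend, bury it in heavy noise, and recover it with a least-squares fit. This is purely illustrative; the numbers and the choice of a linear trend are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "signal": a simple linear trend we pretend not to know.
x = np.linspace(0.0, 10.0, 200)
true_slope, true_intercept = 2.0, 1.0
signal = true_slope * x + true_intercept

# Observations: the signal buried in a lot of noise.
noisy = signal + rng.normal(scale=5.0, size=x.size)

# Statistical inference: a least-squares fit pulls the trend back out.
slope, intercept = np.polyfit(x, noisy, deg=1)

print(f"recovered slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```

With 200 noisy samples, the recovered slope lands very close to the true value of 2.0, which is the whole trick: enough data plus a statistical model beats the noise.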
Another area where AI can be genuinely helpful is translation of all sorts: the ability to live-translate speech into another language can be a great help for people who don't speak the language concerned, or when you're trying to ask or explain something to a person who doesn't speak any language you know.
A similar thing to this happens in the world of cycling. Oval chainrings are one of those ideas that seem obviously useful but which aren't actually much cop when you start to use them. This one has popped up in the cycling world at least three times and generally gets reinvented, hyped like crazy and then, in the face of complete disinterest, forgotten about for another few decades.
The same will happen with LLM AI systems, save that they mostly won't resurge ever again. Yes, they have uses, same as blockchain has uses, but nowhere near as many as their proponents think they have.
No, you're thinking of crypto.
But I see what you're trying to say, and I have to disagree. There are clearly valid uses for AI. In my work, I can dictate an email reply (usually a short one), and AI tidies it up, makes it less aggressive if need be and sometimes adds a flourish of useful info. 90% of the time, it produces a better email than I would have written, in 10% of the time. I can use AI to analyse things, tidy up tables, summarise articles, and it saves me time. I know plenty of people who do the same. AI might be a misnomer (it's not "intelligent") but it's a useful tool a lot of the time. No, I didn't write this with the aid of AI.
> The only valid use of AI is to separate the gullible from their money
There are loads of useful techniques that have come out of the AI labs over the decades[1] - and some have been vastly overhyped in the past, then "died" - but they didn't die, they are still around, being useful. Except that it is not the done thing to refer to them as "AI" any more, they are just "obvious everyday algorithms".
Or, as I've said before, "if it works, it ain't AI" (any longer).
[1] some of which are already mentioned in other comments here, but consider Expert Systems, Planning Systems, anything to do with Natural Language, Machine Learning (in properly defined and controlled domains), a lot of "normal everyday" image processing techniques like edge detection, and everything leading up to Vision Based Systems like Face Detection and Face Recognition (and not just by dumping everything into an LLM and hoping, which is what's being tried nowadays).
I'd make note that when an AI winter hits, those tools aren't called “artificial intelligence” any more — they're called what they were supposed to be called, e.g. “machine learning”, or “expert systems”, or “theorem provers”, or “language models”, or “computer vision systems” and so on, and so forth.
q.v. Tesler's Theorem, Ali Al-Khatib's definition of AI, Rua Wilson's definition of “artificial intelligence” as “a blanket term applied to any system that claims to supplement, reproduce, or replace human actions, decision making, or reasoning” (quoted here).
The tell-tale thing about AI is that it is often about the claims made by those hyping the tech, and it always has been. AI is the froth; the useful tools are the residue left behind.
But just because the froth leaves the useful residue behind doesn't mean that the froth didn't cause harm. And I think it's perfectly reasonable to call out that froth and its harms.
I'll happily vilify LLMs when they are proposed as the foundation of general artificial intelligence. It's the big selling point of the current "AI" bubble - "throw enough processing power at it and it will magically happen". And it is never, ever going to happen, I'm very glad to say.
The reason for my optimism? A recent article in, I think, Futurism. LLMs do an amazing job of replicating the way in which humans process language, which is why they are so good at generating convincing bullshit. However, research using CT scans has shown that completely different parts of the brain are involved in reasoning and in language, and individuals without language are able to reason.
So language and reasoning are two completely different things and no matter how much money and computing power you throw at the former, it can never achieve the latter.
> However research using CT scans has shown that completely different parts of the brain are involved for reasoning and for language, and individuals without language are able to reason.
And I remember a case study from Oliver Sacks about a man with Korsakoff's syndrome, who had profound brain damage and was thus unable to form or access near-term memories, yet he talked so volubly, so convincingly and with evident charm and engagement that his symptoms were not evident when conversing with him. He was thus able to talk, but not able to reason.
You can look it up in Sacks' book, “The Man Who Mistook His Wife for A Hat”, specifically Chapter 12, “A Matter of Identity”, about a patient named (in the book) William Thompson. You should be able to borrow a copy from the Internet Archive if you have an account, but every time I hear people who talk up LLMs as being intelligent, I remember the Revd. William Thompson and his desperate need to continuously create a world and his sense of identity via confabulation.
My distant recollection is that there were voting buttons for articles, at least for a time, but only until a certain columnist named along the lines of "Andrew O" suffered a voting event of a decidedly non-positive character. But perhaps I misremember the exact circumstances.
I think, maybe, what is needed is something similar to the venerable TCP/IP stack.
A defined and internationally agreed hierarchy of what runs at what level.
If AI has any place, it should be confined solely to userspace - it has no place in the kernel/OS of whatever species.
For those that don't recall or were too young, it was called ADVAPI.DLL, included in Windows 95/NT and later, and was responsible for certain cryptographic/security functions.
https://en.wikipedia.org/wiki/NSAKEY
http://news.bbc.co.uk/2/hi/science/nature/437967.stm
https://www.economicpolicyjournal.com/2013/06/how-nsa-access-was-built-into-windows.html
The purpose of putting AI into the OS is so that Microsoft can claim to be innovating. Their share price will go up because of the AI hype and managers who made the decision will get bigger bonuses.
The users of the OS, or any engineering concerns are not at all involved in these short-term calculations.
This is the reality. In search of constant growth, companies must "innovate". So, out come the fads. Crypto, metaverse and AI seem to be the current ones.
But all evidence seems to say that while there are slivers of usefulness in each of them, on the whole they are investor bubbles, perpetuating themselves through false hype.
Companies obsess over share prices, so we end up with this nonsense.
'Microsoft is in the business of engineering' - one would like to think so, but the truth is 'Microsoft is in the business of making money for its shareholders and top executives', and the more cheaply that money is made the better. So, not really in the business of engineering, let alone engineering reliability, usability, and stability.
Sadly, we're witnessing the inevitability of senescence in anything that follows the basic pathway of life. MS may manage to reinvent itself enough to stagger on for a generation yet, but greed, complacency and inertia all take their heavy toll. The manglement is steadily driving the beast into the ground. Those in charge at the death will grab their golden parachutes and move on to the next agile, keen, young thing and begin their deadly parasitic ways all over again.
What's worse is insisting that a user-level application or feature is so intrinsic to the OS that it cannot be removed.
What's even worse, like the IE situation mentioned in the article, is intentionally making an application intrinsic to the OS so you can later argue that it cannot be removed when people start asking questions about your abuse of market dominance.
Internet Explorer, Windows Defender (W10 onwards), Teams (at least in early W11 versions), and now Copilot. This shouldn't surprise anyone.
They also tried with the MS Store, which is nearly impossible to get rid of without breaking the OS (the latest 25H2 finally has a policy, but Enterprise only)
I notice my MS365 homepage has a bigger and bigger Copilot textbox, and it's taking more and more clicks to get to the place I need.
My prediction? They'll make Copilot an integral part of the UI, argue they cannot remove it, and probably get away with it too.
Having learned nothing from Clippy or Cortana.
Maybe they should just be banned from using anything in the Windows UI that starts with a C?
Perhaps they may benefit from a more real-world example, by learning the difference between a tool and a tradesperson.
AI can have uses as the former, but when it tries to be the latter, it's (and we are) doomed.
Do they understand how skin-crawlingly creepy they are making their products? A couple of bad headlines about an LLM in the OS or a rogue M365 actor could have the politicians calling for a company breakup.
Microsoft management reacted with similarly baffled expressions to the muted reception when they presented the Kinect at the Xbox One's launch: an always-connected, mandatory IR camera + microphone doing face recognition in the living room.
Or Musk's claim that the information all of Tesla's Optimus androids and Tesla cars will gather from walking around households will be worth trillions when "licensed" to xAI, somehow without benefiting Tesla shareholders, and certainly not customers.
> I notice my MS365 homepage having a bigger and bigger copilot textbox
It irritates me that previously, when I logged into 365 via the browser, I went to a useful homepage. With Copilot, MS has decided the bland Copilot page shall be the first page served; I then have to click through to my homepage / “dashboard”.
. . . and in a mobile browser, now you don't even get that far. Instead you just get a message that to continue you need to install the MS 'Coprolite' app, before getting redirected to the app store's page for it. You have to use the browser's view-desktop-site option to be able to get anywhere near the 365 homepage.
The agentic platform might (in an ideal world) act like a Type 2 hypervisor - situated between the OS and the active user desktop (and application suite) if it's turned on and active. That seems like a reasonable way to do it - transparently intercepting a user action and turning it into a series of integrated application and system calls. And like a hypervisor, it ought to allow multiple agents to run simultaneously, if that's the way Microsoft wants to structure their AI. If turned off, the active user desktop would run normally (natively) over Windows without AI. The user could also switch on the fly between agentic AI and non-agentic (no AI) desktop that way, too.
I figure that could improve reliability, usability and stability, as well as respect user choice.
Otherwise, like any good Type 2 hypervisor, it should be completely removable.
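The hypervisor-like layer described above can be sketched as a toy interception pattern: an optional layer that, when enabled, expands one user action into several system calls, and when disabled passes everything straight through natively. This is purely a sketch of the commenter's idea, not anything Microsoft has announced; the names `AgentLayer`, `native_handler` and `handle` are invented for illustration.

```python
from typing import Callable

def native_handler(action: str) -> str:
    """The normal desktop path: the action goes straight to the OS."""
    return f"os:{action}"

class AgentLayer:
    """Intercepts user actions only while enabled; otherwise passes through."""

    def __init__(self, passthrough: Callable[[str], str]):
        self.passthrough = passthrough
        self.enabled = False  # agent off by default: native desktop behaviour

    def handle(self, action: str) -> str:
        if not self.enabled:
            # No AI involved: hand the action to the native path untouched.
            return self.passthrough(action)
        # When enabled, the agent turns one user action into a series of
        # integrated application/system calls, hypervisor-style.
        steps = [f"os:plan({action})", f"os:execute({action})"]
        return "; ".join(steps)

layer = AgentLayer(native_handler)
print(layer.handle("open report"))  # agent off: native path
layer.enabled = True                # user switches the agent on the fly
print(layer.handle("open report"))  # agent on: multi-step expansion
```

The design point is that the layer is additive: switching it off (or removing it entirely) leaves the native path intact, which is exactly the removability being argued for.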
Anyhow, it's all academic to me - I use Linux.
Dog's got to bark, a mule's got to bray
Soldiers must fight and preachеrs must pray
And children, I guess, must get their own way
The minute that you say no
Why did the kids put beans in their ears?
No one can hear with beans in their ears
After a while the reason appears
They did it 'cause we said no
... The song "Never Say No" from the musical "The Fantasticks" https://genius.com/Hugh-thomas-actor-and-william-larsen-actor-never-say-no-lyrics
Am I saying that tech management resembles petulant children with beans in their ears? Well, yes. I might be suggesting that.
I am on Linux so it does not matter to me, but this is going to be a popcorn moment.
If the agentic OS is forced upon existing users, will this slow down the PC due to the resources required by the AI?
We all know that Microsoft will f*ck it up, one way or another, so it is going to be fun to see the fallout.
Maybe, just maybe, next year will be the year of Linux on the desktop....
My sister is forced to use W11 at work. She is also required to disable and avoid LLMs in every possible way for client privacy. If Microsoft make an LLM compulsory at the OS level she may well be required to upgrade to something without AI. Would your work place be happy to send every keystroke and mouse click to Microsoft where it will be used to train an AI? If they are, pick a few articles on the internet showing how to get training data back out of an LLM and see if they really like the idea of sending their trade secrets to competitors.
"She is also required to disable and avoid LLMs in every possible way for client privacy. If Microsoft make an LLM compulsory at the OS level she may well be required to upgrade to something without AI."
It depends on the wit of the employer. If Microsoft make some aspect of it compulsory they can simply argue it's impossible and provided she continues to get rid of what's possible she can't do more. Businesses can be very creative in avoiding long-overdue change.
That works swimmingly well until the first agentic disclosure of attorney-client or attorney-work-product or protected health information, and the lawyer or doctor is disbarred or has their medical license suspended for failure to safeguard client/patient information. That presents a sticky wicket. It is the bar/medical board that imposes the safeguarding requirement and places the non-delegable duty on the attorney or physician (at least in the US). So regardless of how integral the agentic AI part of the OS becomes, or the acquiescence of the employer, having information disclosed behind the scenes by some part of the OS hangs around the professional's neck like a noose or guillotine frame.
Beyond the protected information aspect, system-level hallucinations are as foreseeable as the recent article where Google's new AI tool wiped the poor guy's D:\ drive and then hallucinated a mea culpa. All the sorry in the world won't fix a wiped OS if the letter changes to "C". (But that does provide a perfect opportunity to be done with the M$ BS and simply load Linux...)
And all this is just the preamble to the yet-to-be-seen disclosure of exactly what information the new system-level part of the OS is sending off to *3rd party servers* to enable the agentic operations to take place. The true need to make the feature system-level is likely to be removing the user's control and ability to limit or disable the gold mine of new personal information being phoned home under the guise of "Agentic OS" features. Losing an individual's control over what information the OS shares would seem to have reached the point of absurdity with the proposed "Agentic OS".
All of it adds up to the most colossal "What Could Possibly Go Wrong?" I can imagine. Playing Russian roulette with client information on something as undefined and untested as an "agentic OS" isn't a position I'll allow myself to be put in.
I'm sure somewhere out there somebody wants an OS that will think for them and do it all behind the scenes. Personally, I just want an OS that does what I tell it to, and no more.
> My sister is forced to use W11 at work. She is also required to disable and avoid LLMs in every possible way for client privacy.
I would hope that at the Corporate level that there would be enough pushback that this would be one of these features that can be disabled by company policies.
Whilst this is an excellent article in the main, there's a fundamental lack of understanding of the nature of Big Tech (I originally wrote that as "Bug Tech", which is an apposite Freudian slip) in the statement that "Microsoft is in the business of engineering".
Microsoft is very definitely NOT in the business of engineering. It could be argued that it was up until 1979 but since the 80s it has existed only to make money. Nothing wrong with that in a capitalist system but they're no different from all the others such as Apple, Meta and Google. Their foundation myths that they're somehow hippy types different from conventional corporates are just that - myths intended to distract attention from the fact that they are cash machines and we're the metaphorical cows being milked.
Thank-you! I was going to say similar.
An engineering company whose products were as unreliable would have been sued out of existence years ago.
Would you trust MS ethos to build a skyscraper or a bridge? I'd say the same about most software companies these days. Software engineering is generally now offshored cheapest bidder software.
> Microsoft is very definitely NOT in the business of engineering.
Indeed not. I thought it was odd that the article's initial premise was the opposite, since it went on to cite examples of Microsoft _not_ being an engineering company, but rather being driven primarily (if not solely) by profit, marketing, hype, etc.
> [Microsoft are] no different from all the others such as Apple, Meta and Google
It seems somewhat common for tech companies founded by engineer types to start out focused on the tech and the product; when the engineers eventually tire of "running things", or the place becomes a victim of its own success, corporate executive types are brought in, the engineers are shuffled off to R&D or otherwise out of the say-so loop, and company focus shifts to (quarterly short-term) profit. Usually to the detriment of the product and ultimately, the customers.
Microsoft always only existed solely to make money. Always.
Bill Gates virtually stole every idea and positioned them to dominate the market, with ruthless backroom dealings, using any tactic, legal or otherwise, that he could get away with - in many cases illegally, as long as he knew they'd be penalized less than the profit. Certainly, Microsoft hired good engineers and sometimes allowed them to create functional products, but usually only as a byproduct of chasing the ideas of others. Microsoft has not truly sparked innovation in the world of IT - it has reactively responded to virtually every major innovation, seeking innovation not to advance any ideal but for profit and power alone. About the only original beneficial thing Microsoft did was not tie its operating system to a single platform like Atari, Commodore, Apple, Radio Shack, Timex, etc., but even this was strategic, to weaken IBM's hold on the market by turning hardware into a commodity.
If anything, Microsoft has fettered IT innovation, subordinating it to its ability to profit and dominate the market. I would argue that this has set true innovation back decades, if not centuries. How many Makers have chosen not to pursue revolutionary ideas, knowing that Microsoft will simply use its market dominance to steal and create pale mimics of their masterpieces? We will never know how much more advanced not only our IT tools but also our sciences could be, if these Makers had simply not been driven into other pursuits by this ignorant villainy.
Windows is right to have a GUI subsystem at the OS level - including the widget set. That ensures a common API layer for GUI applications.
It's Linux that is wrong here, because it never grew out of the 1970s. And that is the reason it's a bad platform for GUI application development. "Choice" here is not an advantage, it's just a hindrance. Would you like to have a choice of many different plugs inside the same country, and enjoy the issues you get when you travel abroad and need adapters?
An OS API for OS automation could be a part of the OS too - just like speech recognition and the like can be OS services. As long as the OS does what the user commands it to do, and not Microsoft, Apple or even Torvalds...
Sorry, that is bollocks.
An OS can still have a tightly packaged GUI on top - think ChromeOS(*) running over Linux, or Aqua on macOS running over a unixish OS. Think Android, iPhones, etc. etc. etc.
And your "1970's" comment is just ragebait. There are many many new things in modern unix systems that didn't exist in the 70's, 80's, 90's and early 2000s. It's just that the modular design of unix made it easier to integrate such features without needing to rewrite or significantly overhaul the whole system.
We do have different plugs in the country. USB-C, 6.35mm, 3.5mm and 2.5mm jacks, RCA connectors, HDMI, RJ11, RJ45 not just the "Type G" you are alluding to. Each one serves a different purpose.
But if we stick to power, you have the Type G, IEC 60309, BS4573 ("shaver" plugs) etc.
Your analogy of using different power-plugs for the *exact* same purpose doesn't fit.
GUIs don't belong in the kernel, or the base OS, period.
--
(*) Whilst ChromeOS does indeed run on Linux, and you can shutdown the whole of ChromeOS and still get a console login and shell etc. they did merge Chrome into the whole GUI - so you can't run the GUI without Chrome running. Even that has bitten them in the arse. They started to pull chrome out of ChromeOS with the "Lacros" project, but abandoned that when it was decided to focus on running the whole of chromeOS on an android base.
Not only buzzcocks but a pile of manure....and also a damn Commie plot (central control and planning)
With Linux having the GUI outside of the OS, I have freedom. Freedom to choose the GUI that aligns with how I want to work... its look and feel, its functionality.
Which in my case is Zorin and its "mobile" desktop skin. To me, menus to locate apps are stupid, so I use a desktop that doesn't force me into this usage model.
Under your plan, what you get is a desktop that you as an end user have no choice in... you are stuck with whatever the developers think is good, which may not actually be good. Look at all the angst over the Windows Start menu and taskbar among users... no choice in how they behave, just whatever central planning thinks the end users need.
You're a Commie and I claim my £5!!!
Bluck
We'll take all the reliability, usability, and stability you've got, though.
If only there was a free, open source operating system that provided that.
It wouldn't have any crapware in it, it wouldn't spy on you and it wouldn't slurp all your sensitive data up and do who knows what with it.
It wouldn't bork your computer on every minor update either.
If only....
If only there was a free, open source operating system that provided that.
Every computer I use (currently three desktops and three laptops) runs Linux ... specifically Linux Mint. It's an unstable pile of crap, but it's marginally less unstable and less of a pile of crap than Windows, which is why I use it. For now.
I ran Windows 11 Pro for years without an issue until recently. But the last Windows 11 Pro installation I did only survived two weeks before getting nuked by an attack, clobbering the partition tables of every single drive in my system, mounted or not. I lost some data this time, but it wasn't critical data - I hadn't even accessed it for actual use in two years! I'm just a pack-rat collector and was loath to free up the space, even though it was technically wasted space.
All my application data, program code, etc. was safely backed up, give or take a week of work and debugging. But the critical pieces of that work were in the backups, and most of the bugs were fixed before they were encountered again because they were all exactly the same single dumb mistake repeated from a code template with an error.
So I'm back on Debian 13, albeit with Debian 12 repos referenced for the video driver and CUDA stack. Although NVidia has a full CUDA stack repo for Debian 13, the key it uses is incompatible with any repository serving up the 580 version of the NVidia drivers, which enable DLSS under Steam Proton Experimental on Linux. I didn't want the Debian 13 CUDA stack relying on a driver installed via .bin file because the CUDA stack is closely tied to the display driver version and there was a good chance of ending up with an unbootable system with errors that I don't know how to recover from via command line without a functioning system to use for reading the documentation. Catch-22 if I let that happen, so I did what I could to get rid of the .bin driver without losing 580 for my games.
The resulting system is less stable than the .bin driver install had been, but that could be due to CUDA 13.0 installed on the box. Dot-0 releases are notorious for having issues, and I'm sure there will be a lot of patches and fixes in 13.1 soon enough.
Microsoft's code quality has gone to shit. There is no way a rogue program should be able to nuke partition tables unless Windows itself has been compromised, and with all their blather about TPM keys and everything to prevent just that kind of thing from happening, all I can say is: I'm not impressed. Linux has been, continues to be, and will for the foreseeable future be easier to maintain, patch, and keep up to date without getting nuked by rogue programs.
Go back to the drawing board, Satya. The only ones who want this "Artificial Ignorance" crap you're pushing is your marketing department and your stock-pumping "investment" arm.
It's been a while, but my main intro to getting really into the computer tech stuff was back around 1997, when I had added some RHL dual-boot to a Windows 3.11 system (on that great big whopping 540 MB HDD), and then set about upgrading to Windows 95. In a very similar vein to Groo's, that upgrade somehow reset the partition tables. I was stuck running off of floppies until I managed to find enough information about partition tables online to go in and edit the binary data (and dd it back into place, etc.) and actually managed to recover all the partitions (as a first-time Linux installer, I had put in way too many partitions for every purpose, even a separate /lib partition if you can believe that, and I'm still not entirely sure how that worked).
Doubly appropriate icon is appropriate -->
Ironically, I've just run into a TPM-forced issue with a game. That's right: until 2025, TPM wasn't needed for the BIOS or OS.
Yet that well-known war game has now decreed thou must enable TPM or go elsewhere.
Why?
What benefit has it to an end gamer, other than slowing down a system that was working fine?
Sure, it's nigh on 15 years old, had done the 7-to-10 upgrade and kept on going.
More bloatware inside the OS, and with the threat of no more updates and a cliff-edge push to 11, it seems there's likely more going on under the hood of each new OS that MS chucks out than actually benefits the user.
I have dipped a toe into Linux. Seems a fine time to take the plunge.
I for one would love to see the numbers for Windows LTSC and IoT versions activated over the last few years.
If Microsoft insists on shoving their AI crap into their standard OS rather than back up their management's jacksies, I imagine the Long-Term Servicing Channel versions might rapidly become the platform of choice for those unfortunates chained to Windows.
Still, you have to admire the awe-inspiring imbecility of giving free rein to an enormous, unauditable software system that is arguably a hallucinating psychotic, by including it in the most privileged software component.
An "agent" doing "agentic" stuff isn't too bad on its own, but if it's made part of the OS, it has to work, and still give you control, and it's no use having an agentic OS if we don't know its limits.
We still need to stay in control, and the problem really is the reliability. MS software is suffering from a terrible reliability problem at the moment (and all previous moments come to think of it) and if it's not reliable now, who the hell is going to trust it to do things automatically?!
Maybe it's too early to pass a verdict on this.
If AI allows talking to one's PC ("Hey PC, order me front brake rotors, front brake pads, an oil filter, air filter, spark plugs and an interior filter, German premium brand, for a 2007 BMW 325i manual, for under $600"), and it comes up with a pop-up to confirm an order list from a car parts site, then AI features have added value in making life easier.
The first one that manages to package this into an integrated, well-working product that can also do word processing and YouTube watching might be successful.
Like with the early browser wars, MS wants to put such things as standard in Windows, since it gives them considerable market leverage until the EU commission starts spanking them with fines.
My OS is already not just an OS, and it's already far too much "not an OS" as it is.
It's why my next OS isn't going to be Windows.
Back to relive the days when I ran Slackware for 10 years on my main desktop because, quite literally, the other OS did not want to do what I require of an OS. Which is to offer a selection of my chosen applications, and then get the HELL out of my way.
This may be the straw to break the camel's back for me.
I use LLMs daily, to speed up tasks that they do well at (assistance writing quick Powershell scripts for example), but they are a tool that I can very much put away when I don't want to use them.
At home? I don't need it. I don't want it. It may be time for a switch to a different OS.
My only knowledge of an "Agentic OS" and its failings come from a 6th grade field trip in 1968: The HAL 9000 (the lead in the movie "2001").
While seemingly competent in meeting its goals, well at first anyway...
Hmmm, if you want to turn off Copilot, perhaps covering the laptop camera would be a good idea (HAL could read lips).