But what...
... is an AI-PC? What defines it? A certain number of theoretically achievable TOPS? A certain architecture?
A significant cadre of computer users is waking up to the fact that Microsoft's first volley of Copilot+ machines – notebooks capable of local AI processing – simply aren't very good at a bog-standard use case. The Arm-powered devices throwing in the towel when it comes to the most popular video games wouldn't have been a …
It is a PC that guesses what it is you meant to do, and then gets it almost right.
Rather like the Nutrimatic drinks machine, which analyses your physiology, dietary requirements and current hydration level to determine exactly what drink you require, and then produces a cup of something almost, but not quite, completely unlike tea (you are a masochist on a diet, right?).
I remember that ad well, given that I was a lad living in West London at the time.
Your post got me nostalgic for two reasons though: one being youthful memories of my educational years, the other being actually listening to ads on the radio. What a blast from the past!
I never do that these days as I almost exclusively listen to Radio Paradise. The music is far better than what is played on commercial radio these days (IMHO).
Hmmm, I haven't watched UK TV since before 2000 - yeah, I am a computer geek. UK TV gets the British public to pay for a TV licence to watch biased UK Gov propaganda that the rest of the world can watch for free - work that logic out. I did, and opted out, never watching the BBC or any live TV since, and I feel much better for it. I still know the difference between a mob being called far-right racist thugs and simple people just fed up with the UK Gov giving foreign economic migrants all the perks that they can't get because there is no money, so pensioners will have to go cold this winter missing out on a measly £300 heating allowance while certain MPs are claiming £3,000 for second-home heating allowances, and 2-Tier Stalin is getting backhanders at the trough like so many MPs, while disabled persons are grilled when they have already been given a medical stating it's for life and now they think a lifelong disability can miraculously be cured!
But DNA did also bring us a vision of hope:
"Bring out your dishwashers! Bring out your digital watches with the special snooze alarms! Bring out your TV Chess games! Bring out your Auto-gardener’s, Technoteachers, Love-O-Matics! Bring out your friendly household robots! Shove ‘em on the cart!"
Although I have my doubts that we will ever reach the level of artistic achievement that allows the nullification of Gravity, it would save us so much effort, even if all the launch vehicles did have to look like Nutrimatic cups.
It is one with a Neural Processing Unit.
Just like a "Multimedia PC" was a computer with a sound card, and an "Internet PC" was one with a modem.
Most computers these days have sound cards, though if one doesn't, that can easily be handled with a £9 USB adapter.
Modems are obsolete now, but many computers have network cards, and if one doesn't, that can easily be handled with a £10 USB adapter.
> Modems are obsolete now
Alright, Miss Smug "I only use fibre"[1]. Some of us are still only on ADSL you know. That box in the corner may connect to the LAN and not just into one PC, but it is still a modem.
[1] although conversion from electrical to optical modulation still counts in my book; renaming it is marketing, not tech. Ooh, and there is radio modulation as well.
> or connected to the serial port or USB port
> you connect to via ethernet
Ah, so the *important* thing about the modem is whether you are connected via a direct 1:1 (serial) cable, by a 1:n network (USB) cable or by an n:n network (Ethernet) cable. Gotcha. And there I was thinking that the important thing was signal modulation.
I wonder if there is anything else that we can therefore declare "obsolete" because they are now (most often) on the n:n network - printers, perhaps?
> that's not what I'm talking about.
It is always good to be precise or you are just going to get people arguing with you.
It has to do a certain number of TOPS, have a certain amount of RAM, and probably a few other things. And most important, it has to come with the latest version of Windows 11, which includes the "AI" Copilot software.
Microsoft's real failing here is making Copilot branding take precedence over whether it runs x86 or ARM. They want to pretend they are equal (probably because their contract with Qualcomm requires them to not differentiate or make Qualcomm ARM PCs seem "less" in any way) but when customers get them they will find they are not equal in all ways.
There have already been a lot of returns from buyers of Qualcomm PCs, and that's only going to increase with the AI/Copilot branding being used to push sales, since 100% of Qualcomm PCs get that logo but only a minority of Intel and AMD PCs will, even after the new stuff comes out - because they have tons of old stock to sell, plus they are going to keep making older CPU SKUs for a while. Qualcomm didn't have any older stuff to keep on the price list at a discount like Intel and AMD do.
Back in the day we worried about FLOPS not TOPS because that is what computation is all about - FLoating Point Operations per Second.
Hallucinating AI wankery will only ever be good at consuming electricity and filling datacentres with gaming and CAD rigs, that are not used for either gaming or CAD.
It seems to me the one single job that 'AI' PCs would be good at is the one they will NEVER be allowed to take over.
Consider: 'AI' these days is simply taking a large corpus of buzzwords intermixed with filler words, and pasting them together to make statistically similar, but individually different combinations.
These aren't meant to replace workers who solve actual problems, they are the very definition of the corporate 'strategists' and 'visionaries'. The people with the power who, when they finally get it pounded into them what 'AI' is and who it really threatens, will ensure it vanishes with the wind.
To answer this I recently watched a Hardware Canucks video YouTube lobbed at me. The first half was just him telling me how bad the Lenovo laptop was from a regular laptop perspective (shitty keys, erratic touchpad, paint peeling after only a few weeks etc.), plus testing out the "all day battery" that got him to about two in the arvo. Then he started to cover the AI bits I was interested in - for all of thirty seconds, as it covered four things: Recall, which he can't review because it's been, well, recalled; an image generator which was just downright shitty compared to ChatGPT (and let's face it, they're downright shitty to start with - the output looked like an N64 graphic); something about the webcam that he kinda glossed over; and a fourth thing I've already forgotten, since I watched this twelve hours ago...
I turned off at that point, as it appears "what makes an AI PC" is the marketing they put on the box to justify a few extra hundred bucks on a mediocre machine...
It has a logo somewhere (it may even light up) that says something like:
AI
Powered by AI
That it is all complete twaddle is irrelevant; it is about marketing and sales. A few years ago everything had to be "Smart". They were not - just a hackers' & data collectors' wet dream.
AI is just "Smart" on steroids.
"the fastest, most intelligent Windows PCs ever built,"
It is like having the fastest hamster running in a wheel. The wheel may have frictionless bearings, be made of the highest quality unobtanium and be designed specifically for the hamster, but it is still going nowhere.
A recent 'Start the Week' on BBC Radio 4 covered AI and one of the presenters claimed that getting a general AI or LLM to perform tasks where there was a dedicated app used 30 times the energy. One of her friends used ChatGPT as a calculator, which it can emulate, but why? Yes, I know that Alan Turing wanted a 'Universal Computing Machine' where the whole thing would be coded into software and it could do 'anything', but bespoke hardware and optimised software for specific, well-understood tasks are so much more efficient and quicker IRL.
I cannot think of anything I do on a computer that I would prefer to be done by an AI, that I would have to review and check. I even dislike auto-correct, heck, if I'm gong to publish typos I would like them to be my types*, not some AI generated guess typo.
*Edit - an actual auto-correct typographical error. The prosecution rests.
To clarify, the BBC Radio 4 programme was "The Artificial Human" and the episode's title was "How green is my AI?"
It was thought-provoking as I drove to a meeting, and had me thinking that people are going to get lazy and try to do everything with AI when they shouldn't, just because it's the latest fashion.
The example of using ChatGPT as a calculator is shocking as it:
- takes longer than just using a calculator on the device you're already using
- is so wasteful by a factor of 30 according to the programme
My personal view is that a high proportion of generative AI use is at best wasteful currently - "create me an image of a vulture, eating a burger whilst reading The Register".....
I just asked for that to prove the point - and it was quite an amusing image!
The example of using ChatGPT as a calculator is shocking as it:
- takes longer than just using a calculator on the device you're already using
- is so wasteful by a factor of 30 according to the programme
You forgot:
- actually gives wrong answers (you can easily persuade an LLM that 2+2=5, for instance)
You don't need to persuade it to get it wrong; it gets it wrong by default. 2+2 does not need a statistical model to determine the probability of the likely answer, but that's what it seems to do.
From my experience (of using GPT-4o for work) there is no actual intelligence in AI.
Intelligence would be to see the question "what is 2+2?", to determine that it needs a basic numerical operation to carry out that task, and that any answer other than 4 is junk. Or, if it's asked to produce a picture of a cat and that picture shows an animal with 67 legs, it would know that it is junk. But it doesn't; it knows nothing.
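The kind of behaviour being described - spotting that a question needs arithmetic and handing it to a deterministic calculator rather than a statistical guess - is roughly what "tool calling" setups try to bolt on around an LLM. A minimal Python sketch of the idea; the routing logic and the ask_llm stub are purely illustrative assumptions, not any particular product's API:

import ast
import operator

# Deterministic evaluator for simple arithmetic: unlike an LLM,
# it either returns the right answer or raises an error.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str) -> float:
    """Safely evaluate e.g. '2 + 2' without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not simple arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def ask_llm(question: str) -> str:
    # Stand-in for whatever chatbot you like; illustrative only.
    return f"[statistical guess for: {question}]"

def answer(question: str) -> str:
    # Route anything that parses as arithmetic to the calculator;
    # only fall back to the statistical model for everything else.
    try:
        return str(calc(question))
    except (ValueError, SyntaxError):
        return ask_llm(question)

print(answer("2 + 2"))           # 4 - every time
print(answer("what is a cat?"))  # handed to the stub

The point being that the "4" comes from arithmetic, not from the probability of the token "4" following "2+2=".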
Yes. But the latest LLMs use RAG-style retrieval. So this means you can use the LLM to do many app-like things - journaling, auto shopping lists, auto to-dos, auto alerts management - without you having to program it like we do now.
LLMs are mature tech. The application of the tech is just beginning. I rarely use any apps now. I have no need. I am curator.
Open your tiny little minds
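For what it's worth, the "RAG-style" idea being gestured at above is roughly: keep your own notes and lists in an ordinary store, retrieve the relevant bits, and paste them into the prompt so the model answers from your data rather than its training set. A toy Python sketch, with retrieval done by naive keyword overlap and the model call stubbed out - none of this is any vendor's actual API:

# Toy retrieval-augmented prompting: the "journal / shopping list" use case.
notes = [
    "2024-08-01: bought milk, bread and coffee",
    "2024-08-03: dentist appointment moved to Friday",
    "2024-08-05: need to renew car insurance this month",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank notes by crude keyword overlap with the query (a real system
    would use embeddings, but the shape of the pipeline is the same)."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Stand-in for an actual model call; illustrative only.
    return f"[model answer based on a prompt of {len(prompt)} chars]"

def ask(question: str) -> str:
    context = "\n".join(retrieve(question, notes))
    prompt = f"Answer using only these notes:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(ask("when is the dentist appointment?"))

Whether that actually replaces apps, as claimed above, is another matter; the retrieval and the glue code still have to come from somewhere.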
I cannot think of anything I do on a computer that I would prefer to be done by an AI, that I would have to review and check.
Searching and summarizing large amounts of data are the examples I see the most. Internet searching, log scanning, summarizing business data.
Whether the results you get are reliable, that could be the question. And how much sensitive data will get leaked to 3rd party AI tools....
The whole point of a data search is that you need to be able to rely on the accuracy of the result. If AI cannot give you a >98% reliability it is worthless (OK, yes I did pick the number at random and it probably should be higher). This was an argument I had repeatedly with people that believed data quality was an optional extra as you could fix any errors in the final results, carefully ignoring the fact that you had no idea if the final result bore any relevance to the real world.
I was considered very 'old skool' because I would do random checks on the input data and then run searches a couple of different ways and then do random checks on 'hits' to ensure they should be hits. Took me longer but I could repeat my results
If AI cannot give you a >98% reliability it is worthless (OK, yes I did pick the number at random and it probably should be higher).
Having worked in business for decades, I can say with confidence that data summarized by humans would rarely pass your standard. Besides the GIGO, most summaries weigh strongly toward a confirmation bias.
While I question the reliability of the results from AI, I have little doubt that many will use them without any type of confirmation for accuracy. Some people don't care if there are six fingers. But even so they will probably be about as accurate as the human gathered results.
I have upvoted you because I once had someone working for me (about 1983) who would type 2 + 2 into a calculator and accept 5 as the correct answer.
Whatever happened to the "that just doesn't feel right" feeling that you get when looking at the result of a calculation? You know, the "whilst I don't know the exact answer I know it's not umpty zillion" feeling.
It's simply not common practice to do a mental approximation of the calculation first, and then say that number out loud (or write it down) before you hit the calculator.
Each step of the calculation is thrown through a calculator, and then the answers from each stage feed into other stages... accumulating errors and spurious precision at the same time.
The concept of "I should know what the answer should be before the calculator tells me, it's just taking the grunt work out of the precision elements" is apparently no longer taught :(
"The concept of "I should know what the answer should be before the calculator tells me, it's just taking the grunt work out of the precision elements" is apparently no longer taught :("
Estimation is still on the syllabus for Maths exams taken by school children at around age 16 in the UK. Link is to a revision site with a few questions and answers. The idea of checking mentally with rough numbers is introduced early on, like 8 or 9 years old, and then built on later.
The general lack of actual calculations or even measuring in other parts of the curriculum (e.g. science, craft/machine shop/cooking) and in life generally is probably the reason it doesn't stick much.
https://www.mathsgenie.co.uk/resources/3-estimationans.pdf
It is nice to be able to identify formulas that will magnify errors in the inputs from the terms in the formula (given some interval or bounds for each input e.g. x > 1 or P < 1 etc) but only confident maths students get that idea in my experience. The limitations of floating point representation of real numbers could be fun to teach as well, but that one flummoxes lots of people!
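Both habits - sanity-checking against a rough mental estimate, and knowing that floating point is only an approximation of the reals - are easy to demonstrate in a few lines of Python (the 50% tolerance below is an arbitrary illustration):

import math

# 1) Estimate first, then compute: flag results wildly off the
#    back-of-the-envelope figure.
def sanity_check(estimate: float, computed: float, tolerance: float = 0.5) -> None:
    if abs(computed - estimate) > tolerance * abs(estimate):
        print(f"Doesn't feel right: expected roughly {estimate}, got {computed}")

sanity_check(estimate=20 * 30, computed=19.7 * 31.2)  # fine, ~615
sanity_check(estimate=20 * 30, computed=19.7 * 312)   # flags the slipped decimal point

# 2) Floating point representation: 0.1 has no exact binary form,
#    so small errors accumulate exactly as described above.
total = sum([0.1] * 10)
print(total == 1.0)              # False
print(math.isclose(total, 1.0))  # True - compare with a tolerance instead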
Reminded me of a time, 1990s, we were setting up a system for a leisure centre where all the tasks had previously been done manually. There was one financial report they produced for the council (which owned the place) which we were struggling with. The council said the final figures should be calculated one way. The centre said they did it a different way. The samples they provided didn't match either, and the spreadsheet they used had no formulae in it. Eventually, after multiple discussions, they admitted that they just made the figures up until they "looked right".
We shipped the report using the council specification, since they were the ones that commissioned us.
I had a question on a job application today.... as a tester, how have you used AI in your work?
As a tester I wouldn't dare use anything as ephemeral as AI in my work. There's nothing I can do to verify that what it's doing is correct, and that it'll do the same thing next time. I'm not that bold.
But I suppose if you had to cope with user input, then an LLM might be handy for creating a large number of plausibly-human like responses (ie which might also be slightly mangled or confused), as a sort of weak fuzzing...
> One of her friends used ChatGPT as a calculator
Reminds me of a stat I heard somewhere recently that 90+% of Alexas are now just being used as radio alarm clocks. Amazon are still trying to come up with innovations for them which most users just don't care about any more.
90+% of Alexas are now just being used as radio alarm clocks
Yep, that's what I bought mine for (it was cheaper than a decent DAB radio alarm). It also lets me turn the bedside light on and off on voice command - saves stumbling from door to bed in the dark.
They're hardly critical use cases.
"They're hardly critical use cases."
Not for you, but for many people they are critical means of interacting with their environments - if you couldn't operate said bedside light switch, or couldn't cross a dark room, then those cases become important. Of course being able to operate a radio without having to read a screen or interact with tiny buttons is also a valid, and important, use case for a significant portion of the population...
The availability of accessibility aids to the general public means that they are at least an order of magnitude cheaper than they would be if supplied as accessibility aids.
Are we meant to use it for anything else? I suppose we use it as a radio (basically to play BBC Radio 6 or Smooth Chill), an alarm/reminder for my teenage son who would forget which month it is, and to control our Hue lights.
When it tries to do anything else (weather, amazon package deliveries) it's invariably useless or irritating.
One of the few genuinely really useful consumer machine-learning things I've seen is RTX Voice. It's noise-cancelling for your microphone (i.e. calls) that runs on your graphics card, and it's bloody good. Getting that working on more efficient, less general-purpose hardware would make it useful in laptops, where you're more likely to actually be doing such calls and where the probably rather high power consumption is less of an issue.
"AI", which isn't (machine learning is a much better term), really does have uses. Sadly the hype is absolutely awful and people (well, businesses) are going crazy about things no one wants or that aren't even specified - like the Copilot+ (why plus?) PCs.
"I cannot think of anything I do on a computer that I would prefer to be done by an AI, that I would have to review and check. I even dislike auto-correct, heck, if I'm gong to publish typos I would like them to be my types*, not some AI generated guess typo."
Bingo. Nailed it. Me too.
"Laptops are thermally compromised by design, the 'gaming laptop' is an expensive oxymoron sold to morons."
If you want a screen larger than 16" and an Ethernet connection then, for a Dell, you have to pay more than £1k, and the cheapest option is a gaming laptop.
I use the laptop as my general computing device, including the odd bit of programming, and they perform as needed, even when 10 years old.
My MSI laptops would disagree with you. They are spec'd as gaming laptops and have performed reasonably well. Although I wouldn't want to use one actually in my lap while gaming. The latest one I purchased to use in Unreal Engine for development.
Sure, I could build better in a desktop, but for my personal use the laptop had benefits. The biggest drawback is that the inability to upgrade components limits their lifespan.
The nicest thing about a laptop for this 'moron', I swapped out the memory, added a second SSD, plugged it in and turned it on and haven't had to fuck with it since. I've been building PCs for 35 years, sometimes it's nice to go the fast food route.
If your definition of 'shit' is a relative one, then yes, laptops are shit since they can never compete with equivalently priced desktops.
In absolute terms however, laptops with enough grunt to play fun games at acceptable frame rates do exist. This is not a 'shit' experience, it's a known trade-off between convenience and computing power, likely made by an individual who isn't a moron. Oh, and they can be quite handy as Mobile Workstations too (provided you dodge two common real problems with gaming laptops, low res screens and teenage boy aesthetics)
The moron is the one who can't enjoy a good game because he's too busy thinking of how much better it would look on a bigger machine.
Hell, the Thinkpad I bought used (refurbished) from Newegg a few years back does just fine at playing the handful of games I want. Those are undemanding — the most graphically intensive is probably the Spyro trilogy, which were PS1/PS2 games originally — but then that's rather the point, isn't it? "Gaming" means different things to different people.
I don't see the appeal, myself. But then I also don't see the appeal of HD, much less 4K and its successors.
And lord knows I'd love it if I could force all sound to mono, with proper mixing. Stupid Dolby 5.1 encoding is everywhere, and with just the set's built-in speakers dialog is often drowned out by the SFX and incidental music that directors insist on cramming onto the sound track.
Now you will struggle to buy a non-"smart" TV. Alas, the two I have had in the past are no longer smart. One is so out of date that it would be a risk to use; with the other, it seems the manufacturer used such a shit processor that many of the app developers started to pull their apps from it.
Resolution = £30 4K roku stick - other brands are available
Ah, Roku, the firm that patented injecting advertisements into the stream. Yeah, no.
I expect in not so many years I'll just give up watching television entirely. There's some good content to be found, but it's just not worth it anymore.
You already do have AI bits and bobs on your phone. Google had a Tensor chip a while back.
Looking beyond the hyperbolic 20th century-style marketing, having dedicated hardware helps a lot for local models.
Look beyond MS. A 'Linux of LLMs' was tried and fell flat on its arse. Not going to be possible. With neural engines, token rates increase a lot. And, guess what, there is a huge area of AI/SI that has nothing to do with LLMs. You lot think AI equals LLM.
It is pitiful to see such uninformed tabloid-red-top posts citing 3D TV or suchlike, as if that gives you sad never-beens a little respite from the fact that the IT world is dead and AI/SI rule.
At the very least understand that there are a thousand more interesting projects going on than anything the whores OpenAI and Google (not Denis) have sh1tted out on humanity.
I’m tempted to buy the reg and ip ban 90pc of the commenturds
,......"You lot think AI equals LLM"...."
Rather more that most of us do not see anything that we currently have as remotely approaching something that could be considered as 'Artificial Intelligence'.
It is pitiful that some people like you are deluded/un-knowledgeable enough to think that it does :(
@nobody who ever mattered
You are right I am un-knowledgeable(sic).
Understanding that intelligence can take many weird and wonderful forms that are often beyond the imagination and scope of mere mortals is a challenge.
Pity thyself, for I have seen and modelled levels of academic/social intelligence on here that are barely above Western average.
And in coda, ‘delusional’ and illusional and all the other “-als’. All of them. The complete spectrum. From that synthetic generation we are able to create recursive loops with LLMs that trigger paradox walls.
I wish I knew what it all meant! Can you help.
Transformer models are significantly different from SLP networks, in quite a few ways. Claiming they're the same thing is a vapid argument, frankly. Even deep convolutional stacks are very different from SLPs (or other single-layer networks, such as single-layer RNNs or CNNs or SAMs or what have you), and transformers are quite a bit different from deep convolutional stacks.
I am well on the record here for disliking LLMs and gen-AI in general, and for questioning the AI/GAI claims of its fans. But ignoring major technical details and dismissing the research does that side of the argument no favors; it just shows that argument from ignorance is always possible.
My, what impressively foolish and turgid prose you produce.
Do you actually know anything about transformers, ANNs, other ML models? It certainly doesn't show.
Of course it's true that the standard Reg commentariat line of "it's not intelligence" is vapid flag-waving — I've yet to see anyone making that claim illuminate it with a usable definition of "intelligence", and few show any familiarity with the research in transformer (or diffusion) models, or even much understanding of ANN stacks. But neither are "I use LLMs and they're great" nor "you're soaking in it" persuasive arguments; they're barely arguments at all.
And, in fact, most smartphones do not have a Google Tensor chip — that's a proprietary SoC that only appeared with the Pixel 6. Most Android phones are not made by Google, and a shocking 0% of iPhones are. And the TPU in the first couple of generations of the Tensor SoC was not impressive; Google didn't even start making "AI" claims about the Tensor SoC until G3 in the Pixel 8. (And their claim for the G3 was "run more than twice as many ... models", which is both underwhelming and amusingly vague.)
The Tensor SoC's TPU is suitable, and used, for running relatively small models for things like text-to-speech and speech-to-text, basic still image processing, background removal for live video, and so on. It's not doing inference on a billion-parameter transformer. There are technical arguments (though I've yet to see a terribly good one) for calling frontier LLMs "AI", but those do not apply to little embedded TPUs.
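A back-of-the-envelope comparison makes the point about why a phone TPU copes with speech-sized models but not frontier LLMs. All the figures below are illustrative assumptions (a made-up 30M-parameter speech model versus a 7B-parameter transformer), not measured numbers for any particular chip:

# Rough memory and per-token compute for a small on-device model versus
# a multi-billion-parameter transformer. All numbers are illustrative.
def weight_footprint_gib(params: float, bytes_per_param: int = 1) -> float:
    """Approximate weight storage in GiB, assuming int8 quantisation."""
    return params * bytes_per_param / 2**30

def ops_per_token(params: float) -> float:
    """Common rule of thumb: ~2 operations per parameter per generated token."""
    return 2 * params

models = {"on-device speech model (assumed 30M)": 30e6,
          "7B transformer (assumed)": 7e9}

for name, p in models.items():
    print(f"{name}: ~{weight_footprint_gib(p):.2f} GiB of weights, "
          f"~{ops_per_token(p) / 1e9:.1f} GOP per token")

Even before you get to memory bandwidth and batching, the difference in sheer weight storage is why the big models stay in the data centre, or arrive on-device only in heavily cut-down form.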
Debating that. I like Mint; I can get a couple of routines to give me OneDrive, but no Sky Sports.
ChromeOS (there are many videos on how to get ChromeOS and not Flex installed so you get Play) works, and so does Sky. Then you get a limitation on Android OneDrive - again, another app can resolve that.
ChromeOS file management is a bit naff.
I could do a mix. Mint on most with the basic apps, and ChromeOS where needed, but having a standard interface on all machines is appealing when SWMBO is using them.
I will test the water with a test laptop over the next little while and wait until W10 is dead.
Another happy recycled Dell laptop user here. They seem to work quite well as Linux machines, even if the current version of Windows is a bit much for them.
My travel laptop is a Latitude 7480 (2016), RAM maxed out and NVMe running Mint. Very zippy and quite light. Easily carried in a backpack with bag-o-cables and charger. Lo-res screen, but with my eyesight, it's probably for the best!
Currently playing with an $80 Latitude E5520, which I am using as a FreePBX VOIP server. Runs Debian 12 and BIOS update (from v04 to v14) was done last night, after figuring out how to create a DOS-bootable USB key (the answer was FreeDos and Balena Etcher)! To Dell's credit, the BIOS update for this 2011-vintage machine was easily downloaded from their website, and installed without a hitch. I boosted the RAM to 8GB ($11) and installed the top end processor (i7-2640M $20). Not bad for a 13 year old laptop from Goodwill.
I did exactly that about 18 months ago. I bought a refurbished Dell Latitude laptop from a seller with good ratings on Amazon. It's a decently spec'ed business class machine (16 GB RAM), that set me back around $200. I replaced the tired battery and installed a faster NVMe for another $50 or so.
It's running Linux Mint like a champ, and still gets periodic BIOS updates. I've got Steam installed with a few older titles like Serious Sam and Half Life, which run more than acceptably. (I'm not a big gamer, so I haven't tried anything more resource intensive.)
The oldest Linux Mint PC I have running is an AMD Athlon 64 machine with 2GB of RAM from 2007 (!). I don't use it on a daily basis, of course, but it's there when I need it.
The one I use daily is an Intel Core i7-2600 from 2010 or so with 4GB of RAM. I replaced the HDD with a Samsung SSD (1TB) and doing so has made it blazingly fast. The best way to speed up any PC or laptop is to replace the HDD with an SSD.
Once these tulips have withered and died, and AI PCs are offloaded to the recyclers, these embarrassments might be good value for the likes of computer algebra, theorem proving, signal processing, image analysis, or photorealistic animation etc.
Of course removing Windows and any other Microsoft contamination from the hardware might pose an insurmountable obstacle.
The problem is that most Windows software ever built is for x86-series processors, so running it through emulation cannot ever be anywhere near as fast as running it on bare metal. Especially when you're using RISC chips to emulate CISC chips - not a very intelligent or efficient architecture.
Mind you, that's an academic point when the emulation can't even run all programs correctly. There is a big chance that these things could be a total flop in the market, as they'll still be competing against native x86 machines.
They aren't a like for like replacement.
There are a lot of people who don't use their computer for gaming, who will have no issues at all. Those people tend to quite like the idea of a battery that is still good the following day. I believe that is the only selling point people actually care about, but it is compelling.
> RISC chips to emulate CISC chips
I thought that most modern CISC chips are RISC at the very core, with another processor sub-system that turns the CISC instructions into RISC ones - just very quickly, at a hardware level. They can even patch the microcode they run, hence all the microcode patches for the likes of Spectre and Meltdown when they were discovered.
Surely Microsoft are trying to sell stuff that 99.99% of the population don't need. They are trying to find a market for AI that just doesn't exist.
I think we need to rebrand AI as Artificial Ignorance; that will stop this AI nonsense. Every other company CEO is trying to get on the AI bandwagon.
A "new" version of an operating system noone really wants.
Supplied with an internet browser noone really wants.
Which will only run on PCs with a new spec that none really wants,
"featuring" AI capability that noone really wants.
Now exposed with game (not) playing capability noone really wants.
And quality tested in a way noone really wants.
Does anyone want it?
Microsoft persuaded hardware manufacturers (and customers) that Windows Vista would work just fine on machines with 512MB RAM. They lied.
This is the same sort of thing. Microsoft telling us that Windows on AI-ready platforms is the fastest ever is an outright lie. Yeah, it'll be fast for Microsoft products, and AI stuff, and their marketing machine will probably try and persuade us that the reason apps don't work well (it won't be just games) is that they're not using Microsoft software. I'm just wondering how well printer drivers will work actually. How slow will they be?
I'm not denigrating AI, that's not the point of what I'm writing. I'm simply saying that once again, they haven't thought it through from the point of view of the end user. I'm sure the AI stuff will be useful, but - as is common with Microsoft - not in the way we actually want. It'll be in the way that they expect everyone to work.
In a way it's sort of necessary. We need to shake off this reliance on the backward-looking processor architecture that is X86. Apple managed to do it. It's just going to be much harder with the sheer amount of software available for Windows.
"once again, they haven't thought it through from the point of view of the end user"
When have M$ EVER done that?!
They still think an operating system is what people drool over. FFS it's just an opsys, the definition of which is that it shouldn't get in the way of what the user really needs to do.
"FFS it's just an opsys..."
...and it shouldn't be *that* hard to get right. $DEITY knows, Redmond has had plenty of time to perfect their flagship OS.
Having used most of the versions of Windows, and Linux since the late 90s, it's my impression that Linux runs faster than Windows. Just the OS part...Linux seems to do the "OS stuff"...file management, launching apps, window manager...faster and with less lag, than Windows. I do not know if this is actually true, but I do know that my experience with Windows 10 (and to a lesser extent, 11), on the latest and greatest hardware that my employer can provide, is less than impressive, when compared to my home Mint 21 system, running on a 2016 motherboard.
(and the trend does not seem to be positive...but perhaps MS's priorities are not to produce the perfect OS, but to monetise the "default OS" to the best of their ability?)
Earlier this year, Forrester Research said the platform still lacks a "killer app" that would make any AI PC an essential business tool.
And that's a problem. Until you can persuade the beancounters that the extra money is worth it, they're not going to pony up for a shiny Copilot+ PC when they've already got a long-term lease deal supplying Dells, HPs or Lenovos that probably has an early cancellation penalty.
Windows Recall was a disaster, and apart from a built-in chatbot that might do some work for you – poorly – what is there that vanilla PCs don't already have?
And what's to stop someone installing a chatbot that's not from Microsoft on a PC that doesn't have Copilot? Oh wait, did we just sail into antitrust waters or am I just hallucinating like a generative AI?
Most of these crappy systems will be given away to foreign nations as educational aid or buried in a desert. MS will lose a stash of cash. All of this AI bollocks will become an opt-in extra on W12. Until then, buy older kit that works better.
The NPU is the MCA bus for the 21st century.
Clippy AI's days are numbered.
Daisy, Daisy...
I'm waiting to see who blinks first on this.
MS have said that AI PCs must have an NPU rated at 40+ TOPS. While Intel have fallen into line, AMD have shipped their latest and greatest with an NPU only on the laptop processors. They have stated that desktop machines will have a discrete GPU, which will shit all over any NPU when it comes to running AI tasks, so there's no point decorating their dies with a shitload of redundant transistors.
AMD are, of course, correct. The snag is that, as things stand, MS will only allow AI tasks to run on an NPU because ${bullshit_and_waffle}.
Popcorn please.
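For reference, the headline TOPS figure is just arithmetic: multiply-accumulate units x 2 operations per MAC x clock speed. A quick sketch with made-up unit counts (the NPU and GPU figures below are illustrative assumptions, not any vendor's real numbers) shows why a discrete GPU dwarfs a 40 TOPS NPU on paper:

# Peak TOPS = MAC units x 2 ops per MAC x clock (Hz) / 1e12.
# The unit counts and clocks below are invented for illustration only.
def peak_tops(mac_units: int, clock_ghz: float) -> float:
    return mac_units * 2 * clock_ghz * 1e9 / 1e12

npu = peak_tops(mac_units=16_384, clock_ghz=1.4)    # hypothetical laptop NPU
dgpu = peak_tops(mac_units=160_000, clock_ghz=2.2)  # hypothetical discrete GPU

print(f"NPU : ~{npu:.0f} TOPS")   # ~46
print(f"dGPU: ~{dgpu:.0f} TOPS")  # ~704

The NPU's selling point is doing that within a few watts, not beating a 300W graphics card.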
There's lots of other Windows stuff that doesn't work well on Arm.
Many devices don't have Arm drivers, and the chances of getting a driver for any non-new hardware are tiny.
Similarly, many anti-virus products and virtual machine hosts don't work.
I'd give it at least 2 years before switching to Windows on Arm.
Welcome to Microsoft's latest bullshit and hype cycle.
Rarely has Microsoft ever specced hardware properly - every version of Windows flat out sucks if you buy a minimum spec machine.
Despite the hype, these machines are the minimum to run their models. Shitty performance is to be expected!
I’ve used one. Treacle. Spent 4 hours ripping it to bare OS bones and still not much faster than stock. It isn’t being used properly as MS have half arsed it again.
It killed the battery life and seems to be running all the time no matter what hacks I did to kill it. It can’t be switched off.
hyperthymesia
And in over 100 posts of questionable quality at times, we find our diamond shining on.
Mr Dick was the Jesus of our kind. Just read about his intuitive intelligence and his 1977-ish "we are living in a computer" speech. Praise be his name, Philip K Dick.
Yes, synthetic intelligence offers the best bridge to that. There is no way a logic-based system can integrate with a monster like our brains (well, some of us anyway).
And it will be joyous , brothers and sisters, ladies and gentlemen, boys and girls, cause the big shop is open and it’s a wonderful world.
SI will work with our brains
Considering Microsoft now own Activision Blizzard plus a host of other games studios they acquired over the years, the blame for a lack of Windows on ARM compatible games can be laid somewhat at their own feet.
Even if non-Microsoft-owned studios aren't interested in making native ARM ports, there is no reason why MS couldn't have been telling all their own games devs to ensure ARM ports of their games in preparation for the launch of these Snapdragon X laptops.
Imagine if they did it properly and ported the lot. The first few would be shaky, but after 10 or so it would be running like clockwork.
I would consider one then. Just put some heart and love into ARM - or is it because they are British, kind of, and we beat down your boyfriend Intel with their might-as-well-have-valves-in-it sh1tbox old-school CPU? Ha ha ha. Ha ha. Ha hah hah ha. Rather sell it to SoftBank (who, if you have ever lived in Japan, you will know are fondly thought of as a business) than you jackal spawn.
Well... duh.
As much as I agree with AI PCs being a load of nonsense currently, anyone who bought an ARM-powered Windows machine for gaming simply did not do proper research. It's not like this is the first, or even the second time people got burned by Microsoft's ARM offerings.
A long time ago a computer was a woman (I think almost exclusively a woman, not a man) who was employed to do a lot of repetitive mathematics - typically for accounting and stock / order processing.
Then along came Lyons, who deployed an artificial computer to perform the same task, only with fewer errors in less time. Modern-day computing was born - we had entered the age of the Digital Computer.
These computers were large, consumed huge amounts of power but were precise, and gave repeatable, verifiable results.
Over time the huge mainframe digital computers shrank in size, increased in performance, and consumed far less power - so much so that they often didn't need the specialist CFC-based, refrigerated liquid cooling systems of their bigger mainframe counterparts, only requiring forced airflow, and occasionally just convection cooling. They shrank so far and became cheap enough that the Personal Computer came to be, replacing the mainframe with its time-shared resources with a machine per user. Desktop or even portable "laptop" computers were everywhere.
We networked them together, so now we could share information around the office. A few computers were given the specialist task of being available all the time so we could share documents or host databases; these servers were basically PCs designed to operate 24x7, usually more powerful than their desktop counterparts (or at least with faster storage and networking).
Next we joined these networks together and the internet was born. The dream of a paperless office might actually be realised - we could now send messages (and documents) from one organisation (or individual) to another via email. We could make our specialist computers' applications available outside just the office, and web servers / web apps came of age.
Fast forward a few years and all of a sudden we need huge data halls filled with "rack-scale" machines augmented with exotic GPUs and NPUs, again with refrigerated liquid cooling, all to do the same tasks that we were doing previously without the magical buzzword that has been named AI; because we all need another dot-com bubble or blockchain bandwagon to jump aboard. Our AI-enabled searches take slightly longer, consume orders of magnitude more power, and best of all the results we are given may or may not be correct....
Progress: less precise answers, taking longer, consuming more power, without any verification, and often giving a different result if you repeat your question - AND we still need a personal computing device to access this wondrous thing.
Remind me again why we are here?
(timelines and huge swathes of history simply ignored to make an attempted comic point - this is intended to make a point, not be a scholarly work)