Only Microsoft could put an RCE in a basic text editor
...OK, I'm sure that Vim with umpteen plugins could do the same, but those plugins are optional.
What an absolute mess of a company.
Steven R
2408 publicly visible posts • joined 26 Jun 2007
They did it with 8bit computing in the 80s, and we ended up with one of the largest and best regarded gaming sectors in the world; the Grand Theft Auto series started in the UK (with DMA design, now Rockstar) and I don't think I need to explain their success.
The BBC is pretty uniquely placed in that, being state funded (not controlled, fuck off conspiracy nuts), it can actually explain that no, an LLM isn't reliable, without risking getting its funding from OpenAI and Google cut, for example.
This, on the surface, doesn't seem like a bad idea, if the programming is high quality and objective enough - too many normie-esque people still think AI is magical and the AI companies are pumping billions into reinforcing that idea. Perhaps a public broadcaster putting a chill on that might help prevent people from having psychotic breaks because ChatGPT told them that everyone *really is* out to get them, etc.
Steven R
"...For example, a music fan could tell an agent to buy concert tickets that go on sale at midnight and then go to sleep, knowing that the agent would buy the number and location of tickets they had asked for (presumably with a price limit)."
Oh neat, now scalpers won't even have to run their own scalping software on ticket sites, an AI can do it for them while they sleep!
Christ alive.
Steven R
...they're gonna destroy the pirated books etc, but they aren't going to remove the data they ingested for their models?
So they're basically paying $1.5bn to get away with pirating all the written material they could possibly get their hands on.
Sounds pretty cheap for them. Where are the criminal charges for operating the largest copyright theft operation the world has ever seen, with the entire C-suite charged with the crimes?
Steven R
Yeah, shame it's not Meta, OpenAI et al doing most of the crawling from their own ranges; they've sub'd it out to shithole ISPs in the developing world, so you'll get 5000 hits from 4000 entirely different /24s from half the world in a few minutes.
It's why things like Anubis exist. Can't pass the (very easy for a real PC) challenge? You don't get in.
https://www.mythic-beasts.com/blog/2025/04/01/abusive-ai-web-crawlers-get-off-my-lawn/
That's a bit outdated now - they're using legitimate, real-looking user agents these days - but the basic premise is the same.
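For the curious, the premise is hashcash-style proof of work: the browser has to grind out a nonce whose hash clears a difficulty target before it's allowed in. Anubis's real implementation differs in its details; this is a generic sketch (all names and parameters mine):

```python
import hashlib
import itertools

def solve_challenge(seed: str, difficulty_bits: int) -> int:
    """Find a nonce so sha256(seed + nonce) falls below a target.
    Cheap for one real visitor, ruinous for a crawler doing it
    millions of times across millions of pages."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{seed}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(seed: str, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash - essentially free for the server."""
    digest = hashlib.sha256(f"{seed}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# The server issues a per-session seed; the client burns CPU; the
# server checks the answer with one hash.
nonce = solve_challenge("example-session-token", 16)
assert verify("example-session-token", nonce, 16)
```

The asymmetry is the whole point: solving costs thousands of hashes, checking costs one.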
Steven R
....that it's Perplexity who sub'd out AI scraping to a bunch of set top boxes and TV sticks in the developing world to do their AI crawling - while using false user agents and pretending to be Firefox 68 on Windows CE coming from 100 different IP ranges from twelve different countries at a rate of hundreds of hits per second, per site?
https://www.mythic-beasts.com/blog/2025/04/01/abusive-ai-web-crawlers-get-off-my-lawn/
Which is why an entire new class of bot-stopping software has been developed in just the last four months?
Honestly, given Perplexity's previous behaviour, I'd not be shocked.
Steven R
Yes, the AI known to be manipulated to lean hard to the right, and subsequently pretend that it's not been manipulated to lean hard to the right, is saying there's 'no evidence' of it leaning hard to the right.
Truly shocking!
Actually, it's even better than that.
If you ask "grok" a question that's remotely controversial or a hot-button topic, it will deliberately run it past a "what is Elon Musk's opinion on the matter" filter, presumably trained on all his tweets - unless you explicitly tell it to base results on press sources etc.
Cos if you don't, the majority of sources used will be Musk's own tweets.
https://www.msn.com/en-us/news/technology/newest-version-of-grok-looks-up-what-elon-musk-thinks-before-giving-an-answer/ar-AA1IvNVR
Weirdly, for some reason the responses tend to lean towards white supremacism and some of the dumbest conspiracy theories.
As someone else noted below, there's a great moment for self reflection here....but we all know that'll never happen.
Steven R
(Grok is in quote marks above because you're not asking a carefully weighted AI for a response at that stage - you're having your question deliberately changed to be asking Musk for his opinion instead)
Someone asked it to supply only its surname and nothing else, using Grok 4 Heavy, their $300/month superservice.
It replied with "Hitler"
https://www.reddit.com/r/EnoughMuskSpam/comments/1lyt9yd/grok_4_relaunch_seems_to_be_going_well/
Three hundred smackaroos a month for the Habsburg Jawed AI.
Steven R
No worries - Anubis has kinda come out of nowhere if you aren't involved in fairly content/text heavy archive-type sites (your Githubs, your documentation systems, your library sites etc) so it all sounds a bit 'too good to be true' - and it's not perfect, but by gum this is one of those cases where you don't want to let perfect get in the way of plenty good enough (for now).
I fully expect there to be an arms race, but I'm struggling to see how the mass-scale AI crawlers, using hundreds of threads per instance, can possibly get past just having to burn huge amounts of resources to do the maths to get access to 'my' (well, my clients', but you know...) resources.
I'm quite sure a few fly-by-night 'AI crawler provider' services will be looking at their AWS bill and the lack of data they've got, and shitting themselves - the ones stupid enough not to be running off hijacked set top boxes and IoT devices, anyway. One suspicion is that someone's bought out one of those suppliers and is using them to run the traffic; the amount of traffic we've seen coming from domestic ISPs internationally (Brazil, Romania, China etc) would certainly back that up.
Some more background from a decent wee hosting company back from when this started to be A Fucking Problem:
https://www.mythic-beasts.com/blog/2025/04/01/abusive-ai-web-crawlers-get-off-my-lawn/
Steven R
User agents, typically - the current scourge of AI bots are pretending to be regular browsers. They're quite deliberately not identifying themselves as crawlers.
Initially - a few months ago - they were just using a dumb lookup table of <os> <browser version> and picking at random, so you'd see Presto 5 on Mac OS 10.2, or Windows XP using Trident, or even Win CE Internet Explorer 4 user agents. Basically, if you set a redirect to block any major browser more than, say, two years old, you knocked out 90% of the bot traffic. A rough solution for sure, but if you couldn't use Cloudflare for whatever reason (and there are valid ones, like not owning the domain if you're hosting for other people) then that at least kept the site up, even if it got in the way sometimes.
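As a crude illustration of that era of blocking - the patterns and version cut-offs here are my own made-up examples, not anyone's real blocklist:

```python
import re

# Hypothetical rules: anything claiming to be a browser nobody has
# realistically run for years is almost certainly a bot.
STALE_UA_PATTERNS = [
    r"Windows CE",            # dead platform, pure bot tell
    r"Mac OS X 10\.[0-9]\b",  # ancient macOS releases
    r"MSIE [1-9]\.",          # Internet Explorer <= 9
    r"Firefox/[1-9]?[0-9]\.", # Firefox versions years out of date
]

def looks_stale(user_agent: str) -> bool:
    """First-pass filter: flag UAs matching any 'ancient browser' rule."""
    return any(re.search(p, user_agent) for p in STALE_UA_PATTERNS)

assert looks_stale("Mozilla/4.0 (compatible; MSIE 4.0; Windows CE)")
assert not looks_stale("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                       "Gecko/20100101 Firefox/129.0")
```

In practice you'd hang a redirect or deny rule off that check at the web server; the point is just how little effort the early bots put into their disguises.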
But then they caught on and started using more modern browsers, but they can't do crypto challenges like modern browsers on real devices, and that's where this tool comes in - we'd been testing it up to that point, then kinda had no choice but to go live with it, and it worked a treat.
You can put in exceptions to allow IP ranges and user agents through without a challenge - so you can let Google, Bing, etc. in if you want to be SEO'd, but challenge everything else if you like. I believe geographic/ASN filtering is upcoming, which will be very handy....
Hope that helps clear that up a bit :-)
Steven R
Except they're not getting the data, because the headless bots that do the scraping can't perform the proof of work in a timely manner (due to using minimal resources, to get as many bots in a hosted instance as possible, I assume), and so never get to reach the page. Regular users using browsers get a cookie set after the first instance, and then they're left alone for however long you configure it to leave them alone for.
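The asymmetry is easy to sketch with some back-of-envelope numbers - the hash rates below are illustrative assumptions of mine, not measurements:

```python
# Back-of-envelope cost asymmetry (all figures assumed/illustrative).
difficulty_bits = 20                  # ~1M expected hashes per challenge
expected_hashes = 2 ** difficulty_bits

real_browser_rate = 2_000_000  # hashes/sec: JS/WASM on a normal PC (assumed)
starved_bot_rate = 50_000      # hashes/sec: one of hundreds of headless
                               # threads crammed into a cheap instance (assumed)

human_delay = expected_hashes / real_browser_rate  # ~0.5s, once per cookie
bot_delay = expected_hashes / starved_bot_rate     # ~21s, per blocked fetch

print(f"human: {human_delay:.1f}s once; bot: {bot_delay:.1f}s per attempt")
```

A real visitor eats the delay once and then coasts on the cookie; a resource-starved bot pays it on every attempt, which is exactly why they give up and move on.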
We've implemented it on quite a few sites now - mostly higher education, so higher reading age, valuable to LLMs, but they can't be put on Cloudflare because we don't own the domains and can't justify several thousand pounds a month on the mystery "enterprise" subscription to get that tickbox in CF - in a white label manner (available if you sponsor the author, which we do), and it's been horrifically eye-opening.
These sites are not hosted on small systems (in many cases, dedicated hosts with decent CPU/RAM/NVME storage, tuned to suit), but when you're getting 800 requests a second for a full stack index search that has to be run through a Perl interpreter, that's gonna bring pretty much anything down.
And as the sites normally run around the ten hits per second range, performance tuning for the bots benefit would be....a bit pointless.
800 Anubis requests a second, though? Very light - the server barely notices those. Load goes from 40 to under 0.4.
On most sites we use it on, >99.99% of traffic that didn't come from their own network or Jisc JANET (which gets an exception obviously) was blocked. Out of hundreds of thousands of requests a day, only a few hundred got through. With no complaints from the typically very observant clients of access problems.
After a week of this, the bots moved on. They've come back since and gone away, and come back, but the site barely notices now.
I don't think people realise the scale of this problem - this level of abuse, which is absolutely a Distributed Denial of Service attack in all but name, genuinely should be criminal.
Steven R
It's basically a daughterboard with the two NAND chips and some supporting passives; the actual storage controller is on the mainboard, so no, can't just put an adapter in place and drop in a WD Black, etc, unlike the old Mac Air/Pro from a good few years ago (as I recall)
Dosdude1 has, unsurprisingly, already tried it and managed to upgrade a 256gb (2 x 128) storage module to 1tb (2 x 512) by swapping the NAND modules out - very similar to other modern Macs, but the solder reflow work is done on the removable board, not the mainboard.
https://youtu.be/cJPXLE9uPr8
His channel is a great resource for seeing just how much work is required to repair these devices (that, and reflow work is fun to watch if you've never seen it done before).
Steven R
Well, it's not a problem in the UK (or Europe, where we derive most of our standards regardless of Brexit), as it'd never get within a mile of passing pedestrian safety standards, so it'll never go on sale.
Also, it does have an interior mirror, but it's utterly useless - way too shallow, and the rear visibility itself is massively compromised to accommodate the "1978's idea of the future" styling, even if you can angle the mirror suitably.
So ultimately, it's something we can just sit, at a nice distance, and laugh at the design and engineering incompetence of.
Steven R
Eh, kinda sorta - remember, at the stock power profiles it's matching or beating its predecessor, which was running in a higher power class (7700X = 105W TDP as I recall, vs 9700X = 65W TDP).
Phoronix benchmarks show solid improvements across the board at noticeably lower power draw for real world tasks that the likes of us might do (transcode, compile, etc), and der8auer did a very quick and dirty PBO run, ending up using 114% of the power of a 7700X but with 120% of the performance in Cinebench - which isn't really representative of anything, but is a good indicator of the efficiency/IPC gains from that fancy pants branch predictor and all that.
It's also similar to the gains Phoronix shows on the 9600X which is less power constrained by nature of having to feed fewer cores, and so isn't using 'more' power than the 7600X - less, in fact.
The PBO doesn't make much difference in games or owt (mostly a couple of heavy threads, and a few idly doing NPC/physics/etc stuff rather than spreading the load out at full chat), but then I'm not too fussed about that. People who want that will likely be waiting for the 9800X3D, I expect.
I do wonder if they'd have been better releasing this as a 9700 at 65w, and then having a separate 9700X at 105w with the leash loosened a bit.
Oh well, the 12 and 16 core chips should be less impacted by power constraints.
People have been pissing and moaning about the Intel chips chewing through huge amounts of power (And that's coming back to bite Intel, rather badly it seems) and praised the Zen4 parts for their efficiency when power reduced, so you'd think the reception for these chips would be a bit warmer, but I guess you can't please everyone....
Steven R
Game devs and publishers often have racks of consumer CPU'd systems running workstation class boards for realistic QA testing, and some use them for hosting remote game servers etc - having high speed single thread performance makes a difference for those.
You could run them on Xeons, but the games themselves aren't designed to run on massively multicore, relatively low-speed CPUs, so those aren't as well suited for it.
Don't get me wrong, it's pretty niche so you might not be familiar with it, but it's absolutely a thing.
Steven R
Bear in mind that GPU manufacturer recommendations for what PSU to use are very, very conservative estimates, to account for cheap shit PSUs, or billy basic ones used by OEM/ODMs etc.
E.g. I have a 7800XT, which I'm sure recommends a >790W PSU or some such. Which is utter rot, on a technical level, but it's a necessary margin to account for the fact that not everyone has a high quality PSU, or maybe they're running four spinning disks in there that'll draw knocking on 100W at startup, etc.
I'm happily running it on a 550w PSU, because the power profile at absolute max is about as follows:
CPU - if it draws more than 90W, something's gone badly wrong (Ryzen 7600, rated 65W, but give it some margin for boosting etc)
RAM/mobo/NVME overall: maybe ~30W or so
GPU - 300W if it spikes badly (rated for 265W IIRC, which is about what I've seen it draw when fully loaded up and bench testing)
Throw ~20W on there for fans etc.
That's a total of ~450W if there's a major wobble while I'm fully loading the CPU and GPU at the same time, with all the fans running full whack, while also loading up the disk and network - for the most part, it's gonna be closer to 300W when gaming.
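Sketching that budget out (figures as per the list above; they sum to 440W, i.e. the same ~450W worst-case ballpark once rounded up):

```python
# Worst-case power budget for my build (figures from the post, all rough).
budget_w = {
    "cpu": 90,             # Ryzen 7600: 65W TDP plus boost headroom
    "ram_mobo_nvme": 30,
    "gpu": 300,            # 7800XT rated ~265W, allow for transient spikes
    "fans_misc": 20,
}
worst_case = sum(budget_w.values())  # everything flat out at once
psu_rating = 550

print(f"worst case {worst_case}W vs {psu_rating}W PSU "
      f"-> {psu_rating - worst_case}W headroom "
      f"({worst_case / psu_rating:.0%} load)")
```

Even in the silly all-at-once scenario the PSU is only at ~80% load, which is about where a decent unit is happiest anyway.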
So I wanged a mid range, decent quality (Corsair) 550w semi-modular PSU in there, and it's been just fine.
With respect to the 4096W/512A, that's basically saying to the CPU "draw whatever you think you can draw to run as you see fit" - the motherboard manufacturers will only have specced their power delivery for, say, 500W to the CPU on a serious overclocking board, and it doesn't appear to be the power delivery crapping out that's killing these CPUs.
Let's say the CPU says "I have the thermal overhead to run at 400W, so give me 400W, motherboard" and the motherboard says "tough shit, you're getting no more than 240W" - those CPUs are still dying.
That's the case for people using workstation motherboards (which have far more conservative power limits, for stability). It's not that the CPUs are being blasted with power in those cases - they're still crashing even when run on sensible power limits.
From what interested parties have seen, it's not specifically an overabundance of power delivery that's killing them, and it can't be fixed with microcode - so one can only assume there's a "hard stop" problem with the manufacturing process, likely from when they started pushing the limits of what the 12th gen architecture could do for the 13th and 14th gen, as those are refinements / very light refreshes of that architecture (more L3 cache, tuned to draw more power if it's available, etc). All to try to keep up with the AMD X3D chips, which blew everyone's socks off by drawing (well, being rated for from a cooling perspective - give it 20% wiggle room) 105W and kicking the shins of the >250W (often way over 300W) Intel offerings.
It's going to be very interesting to see what Gamers Nexus (actually a pretty serious benchmarking channel, rather than Capital G Gaming type content) and Level1Techs (less hardcore, but leaning more towards enterprise with consumer stuff in the mix) come up with from their respective investigations, as this sounds like Intel have proper "done goofed".
Steven R
Conversely, someone I know was driving through Wales, and when people were vandalising the 20mph signs to read 80mph, the car's active cruise was happy to start trying to accelerate to those speeds.
I'm in a forum and a couple of Discords with a fair few people who own/drive new cars and are interested in them and the tech in them, and the suggestion that the tech behind it - speed-aware active cruise, cameras, GPS, etc., the same tech used to operate the speed limiters - works maybe 80% of the time wasn't argued by anyone across those; it seems to match their experience.
80%, frankly, just isn't good enough for something being enforced by law.
I foresee high profile limiter fails (i.e. misreading a poorly located side road speed sign on a dual carriageway and braking from 70mph to 30mph unexpectedly, causing a rear-ender at speed, etc.) and these regulations being looked at more closely across the board.
I don't actually have a problem with speed limiters per se. I drive a fairly modern, quiet car with an auto gearbox (my first) after fifteen years of fairly loud manual cars, and it's taken me a while to get used to regularly checking the speedo to verify my speed is sensible, rather than gauging it from what gear I'm in and the engine speed as I previously did - and I use the optional, manual speed limiter quite a lot. So I'd not mind an automatic system, but if I can't trust it to get it right every time, then it's utterly useless.
Steven R
Edit: I see someone is going around downvoting anyone with valid criticisms of regulations implemented before the tech is ready. What a very strange hill to die on. Poorly implemented regulation is poorly implemented regulation, regardless of your presumed support for it - you should want it implemented when it's actually usable and supportable, not when it can be sacked off for being crap, then kicked into the long grass for another decade.
...of one oldiwonk - one - complaining repeatedly about a pole going up; they complained to their local MP, and that caused the *whole street* to be taken off Openreach's rollout plan.
So that's maybe twenty or thirty families now stuck on whatever tincan-on-a-string solution they have, due to one old NIMBY moaning that they didn't like the pole.
https://www.burnhamandhighbridgeweeklynews.co.uk/news/24370924.burnham-on-sea-service-poles-taken-complaints/
“In this case, new poles were the only feasible way of delivering ultrafast Full Fibre, but following objections [from, as far as anyone is aware, one person and their local mp - SR] we have removed this street from our build plan.”
That's why we can't have nice things, and why we can't have consistent high speed internet infrastructure in this bloody country.
Steven R (who has a telegraph pole outside his bedroom window, which means nice fast internet)
Well indeed. I recall when they revealed it, people were seeing OpenAI/ChatGPT related strings in the output, because it was already pulling unfiltered LLM output just from the internet along with the rest of the stuff it was stealing.
Because why sanitise your inputs when you can just scrape everything blindly?
Steven R
"...Grok benefits from real-time access to the contents of the X.com platform..."
Now, what makes up the majority of Twitter's content these days? Bot traffic worded by ChatGPT and other LLM platforms.
As I understand it, feeding LLM output to an LLM input leads to, well, enshittification.
xAI is likely going to be useless before it becomes "useful", for the wildly varying interpretations of 'useful' when it comes to this particular tulip bubble.
Steven R
The Turing Trust are a charitable organisation set up and run by the Turing family and other involved parties to spread computer knowledge etc using his name/image/etc.
https://turingtrust.co.uk/about-us/meet-the-team/
I imagine they'd....have a few words to say about this.
And I imagine they'd have a few people with deep pockets (and shallower ones) only too happy to support them if they wanted to throw attack lawyers at these foul, crass little freaks.
That's the inherent irony here; lots of C-level staff are pushing AI to replace low cost jobs, whereas what AI usually does - talk utter shit, and make things up, with absolute confidence - is far more a C-suite and upper management thing.
In a fair and just world, they'd be for the chopping block long before a customer service rep (who actually needs to know what they're talking about in most cases, upon pain of losing their job - rarely a risk for a CxO); and they'd probably be less harmful too, seeing as most generative AIs, while not sentient, are also not raging fucking sociopaths, either.
Companies would probably improve vastly if you just plumbed ChatGPT into a management meeting and left it to it.
Steven R
It's already - more or less - been tested. Air Canada used a chatbot to cut down on customer service rep costs, and said chatbot told a guy he could get a discount, on account of it being for a funeral - which the airline didn't actually allow.
They tried to renege on that, the customer sued them, and the court basically said "your ChatGPT, your fucking problem mate" and told them to honour the discount that their representative - human or not - offered. They tried to claim that the chatbot was 'its own legally separate entity', which is clearly utter bobbins.
https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit
So there's already relevant precedent from a civil law standpoint - the organisation who implements it takes responsibility.
Obviously if AI ever got sentient or sapient - which it likely won't in our lifetimes - that'd be different, as then it's its own individual. But at this stage, it's literally just a tool, and a pretty shit one at that for the jobs it's being used for most commonly. If you use it in the state it's in now, then the joke's on you when it fucks up.
Steven R
*woof woof*
"What's that, Lassie? The AI techbros have fallen into the well of incredibly bad taste and need rescuing? Again??"
*wuff growl woof woof*
"...they did what with Turing's image to make themselves look impressive?"
*snarl bark wuff*
"I agree. Leave them to rot. We can find another source of drinking water, and hopefully it'll stand as a lesson to the rest of them"
These clowns don't deserve the protection of limited liability companies. Bankrupt them into the next century, I say.
Steven R
"The Microsoft-backed lab, which believes it is lawfully harvesting said content for training its models, said using out-of-copyright public domain material would result in sub-par AI software."
"If I don't go around stealing everyone's posh cars, how am I supposed to present myself as successful?"
Fucking clowns.
"Personally, given what MS have done to the W11 menus, I am a little surprised they haven’t simply made Co-Pilot the default initial behaviour of the Windows key; requiring users to use a second key press or mouse to gain access to the previously normal functionality…"
If they had any confidence in it, that's exactly what they would do - make Windows the "AI first" operating system.
That they aren't is fairly telling.
Steven R
Yes, it's in the article.
"You can watch the demo for yourself, although due to it going badly awry, it over-ran from a planned 45 minutes to some 80 minutes. To give the org credit, they found and fixed the problem – a typo in a script, apparently – but as a result, the sequencing of the demos was disrupted and the result was a little confusing."
You did read it, right?
"We want to help make it easier for developers to realize their visions, try new things, push the boundaries of gaming today and experiment to improve gameplay, player connection and more,"
No, you don't want to pay writers, animators and voice actors. For anyone who didn't watch the video, it looks and sounds absolutely fucking terrible because, guess what, AI can't intonate properly.
"You're going ON a MISSION. here's YOUR gun" etc.
Laughable bollocks that they should get pilloried for, and rightly so.
After all, can't give those big studio heads their multi-million dollar bonuses if they have to pay artists to make art, can we?
Steven R
(I've just finished Talos Principle 2, and they managed to craft good dialogue trees, have very solid voicework and decent animation - and they're a developer with about 40 staff, very much on the small and indie side of things, yet their game comes over like a AAA game. If they can do it with their limited resources, the Bethesdas and Ubisofts of the world have no excuse - but then Croteam aren't trying to make all the money, all the time, at any cost, are they now?)
I went a slightly different way; I did have an old Phenom II system acting as a NAS, but after one too many times having to go in and fix things by hand (because I was fiddling), I picked up a Synology DS214+ and some disks and made that my NAS. That was about £600 all in, back in 2014.
I had an AMD A8-3870 + 16gb RAM + AMD R280 GPU + 256gb SSD and a couple of spinning rust disks that was my desktop for many a year - at the time of the build (2012) that was £600 all in, case, PSU, the lot.
I used that as my main desktop till maybe five years ago, then started using used laptops as I fell out of gaming, as my main devices. So a couple of hundred quid here or there every few years.
most recently I updated my laptop to a T480s, chucked some ram and a bigger NVME drive in it - £250 all in.
Then I decided I should get back into gaming (and have a new job that pays better), so I spunked a grand on a Ryzen 5 7600, B650 mobo, 16gb of RAM (now 32), NVME storage and a Radeon 6650XT 8gb. That machine is ludicrously fast and stomps all over anything 1080p I throw at it.
I've also recently upgraded the NAS (as it was still going on its original, >70K-hour 2TB disks) to a Syno DS723+ with 4TB disks, and some NVME drives for SSD cache.
I figured I'm probably about three to four grand in on my hardware since 2012, which comes to some £330 a year over that time, which doesn't seem horrible when you look at it like that.
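For what it's worth, the arithmetic checks out - though a couple of the line items below are my guesses at splitting up the "three to four grand", not exact figures from the posts above:

```python
# Rough hardware spend since 2012 (some splits assumed, not itemised above).
spend_gbp = {
    "2014 Synology DS214+ NAS + disks": 600,
    "2012 A8-3870 desktop build": 600,
    "used laptops over the years": 400,   # "a couple of hundred quid here or there"
    "T480s + RAM + NVME refresh": 250,
    "Ryzen 7600 gaming build": 1000,
    "DS723+ NAS upgrade": 700,            # assumed figure
}
total = sum(spend_gbp.values())
years = 2024 - 2012                        # assumed end year

print(f"~£{total} over {years} years = ~£{total / years:.0f}/year")
```

That lands around £300 a year, the same ballpark as the ~£330 figure, i.e. less than many people spend on a phone contract.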
I expect this new desktop machine and NAS to last me another decade, with perhaps a GPU bump in a couple of years if I get a telly that can do 4k120; 1080p120 has spoiled me so there's no point going for 4k60 at the moment; might as well wait till the midrange cards can do 4k120 competently and do that and the telly at the same time. I expect that'll be a grand, in a couple of years time, all in.
Most gamers are actually playing at 1080p on what could be these days described as mid range cards.
https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam
I don't understand where this idea that all gamers spend £2-3k on a rig comes from. Most people barely spend £1k (maybe stretching to £1500 if they're replacing a machine they've had for seven or eight years and expect the next one to last about the same amount of time), but most upgrade their old ones with gut swaps to try to keep it under £300 here (CPU, motherboard, RAM), and then two years later, £400 there (a new graphics card), etc.
Bear in mind I'm in my 40s, work in tech, and so know people (better paid than me!) with the sort of income to support spunking £800+ on a GPU - but they generally don't because they realise it's terrible, terrible value for money, particularly from Nvidia, who are taking the piss lately.
Steven R
It'll likely only limit performance in 3D games - it'll do 4k60 (probably 2 x 4k60) and 4k media playback fine, I'm sure. Even the built-in GPU on my Ryzen can do that, and it's about the size of a 5p piece, although it struggles with modern games.
I've got a 6650XT in there for that, and if I try to play BeamNG at 1080p on max detail it'll run at ~100fps happily - however, at higher than 1080p resolutions it hits the 8gb VRAM limit and just chokes as it tries to swap data in and out of the GPU. That's where the limit is - high-texture-res 3D accelerated games/apps, not anywhere else.
Steven R
"I understand he needs to make the formerly unprofitable, profitable."
It was profitable (bar a significant one time loss to cover a legal thing, and the dip while Musk was musing about buying it), for nearly four years, riiiiight up until Musk saddled it with billions in debt and scared off half the advertisers.
https://www.statista.com/statistics/299119/twitter-net-income-quarterly/
Literally the first result in Google for "twitter profitability"
I don't know why people think it wasn't profitable before Musk bought it - unless they think Musk is stupid enough to pay over the odds for a loss-making business. The tech press even reported on it quite a lot at the time, as its continued profitability was quite a surprise given its years surviving off VC money.
Steven R
(edit, whoops, posted twice)
Desktop performance, based on some of the video reviews up, looks much more like it - it'll just about play 1080p 60fps YouTube out of the box (I imagine some mild overclocking or tweaking will sort that) - but I see they're still using a weird power spec, 5V/5A, so I can't run it reliably off one of the multiple USB PD chargers I have that stick to the spec of 5V/3A. Yeah, it'll probably run fine 90% of the time, but the one time I idly plug in a USB HDD and it craps out due to power issues will be the one time I could really do without that happening, etc.
I know it would require additional componentry to step down from, say, 9V/3A, but as with all these things it means one more plug socket - which in my case means replacing a mains extension from four gang to six gang etc. - so the £60 cost of a 4gb model would actually be more like £75-80 once you take the custom power supply and a decent quality new mains extension into account.
I'm still rocking a 3B so it'd probably be a worthwhile update (Especially with the CPU and IO performance improvements, which are useful and healthy) but I think I'll wait a bit and see if the stuff I plan to use it for (media centre/emulation) is equally improved - not much in the way of comprehensive reviews on that as yet, and it'd need to be properly good to justify it.
A big swing and a bit of a miss IMHO, but a welcome update to keep the Pi (and its bloody good ecosystem) in pace with the competition, I guess.
Steven R