Aged like milk
https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-as-30percent-of-microsoft-code-is-written-by-ai.html
But I'm sure it's nothing to do with that at all. No sir. Not a chance of it.
As a counter-opinion:
* Rubocop has nothing to do with Rails. If you don't like the Rubocop rules, change them or use a different linter.
* I've worked on long-running (and in one case really very large) Rails projects which were a pleasure to develop for and where velocity was still high. As with anything, YMMV. It's absolutely false to say that Rails gets you going quickly but then it all slows down - you could say that about pretty much any framework; it's called tech debt.
* Rails offers IMHO too many ways to dig a deep hole (e.g. use of delegation pretty much ever, or over-use of Concerns) but that doesn't mean you have to use all that! Like any framework and language, there are features which if used improperly cause trouble, so it certainly requires care, and if too much magic is used - yes, can be a nightmare.
* Despite the goal being to duck-type everything, the lack of any way (at present) to robustly implement type checking should you want it *is* definitely a barrier.
In short, like anything, it's fast, gets started quickly and that velocity _can_ be maintained; it offers magic that having now programmed in several other competing environments still feels "least worst" (by a lot); but yes, as a result of the language and framework's extraordinary flexibility, it does have too many footguns.
This class of bugs does affect some kinds of Apple Silicon core, but as in all cases it varies from core to core, regardless of instruction set.
...the rule would have at least a $100 million effect on the economy...
Or to put it another way, they've now proved that the dark patterns currently in use con at least $100 million USD from their citizens, which was way more than they expected, so now they want to keep ripping people off.
I think that's a new low for the USA: a price tag above which businesses behaving unlawfully are free to carry on doing so, because it generates them a lot of money?
It's an ongoing battle. AI bots are very numerous and absolutely hammer your site, relentlessly. All but a handful seem extremely badly behaved, with little to no rate limiting, usually ignoring "robots.txt" and they even ignore things like 404, 500 or even 301 responses. The 301 Moved Permanently part in particular blows my mind. A documented example: https://www.mj12bot.com - "Why do you keep crawling 404 or 301 pages?" / "We have a long memory and want to ensure that temporary errors, website down pages or other temporary changes to sites do not cause irreparable changes to your site profile when they shouldn't".
What fscking part of "moved permanently" is hard to understand?!
So now our server front-end spends a lot of CPU time bouncing bots (detected by various only partially effective means, which need constant maintenance as new bots appear or bot signatures change) to 301 redirections that lead to non-existent addresses. At least some of them seem to FOAD after a while.
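For illustration, that bouncing can be sketched as a Rack-style middleware. The bot patterns and sinkhole address here are made-up examples; real detection is far messier than a few regexes and needs that constant maintenance:

```ruby
# Hypothetical Rack middleware: bounce matched bot User-Agents with a
# 301 to a dead address. Patterns and sinkhole URL are illustrative only.
class BotSinkhole
  BOT_PATTERNS = [/MJ12bot/i, /GPTBot/i, /CCBot/i].freeze
  SINKHOLE     = 'http://127.0.0.1/gone' # non-existent address

  def initialize(app)
    @app = app
  end

  def call(env)
    ua = env['HTTP_USER_AGENT'].to_s
    if BOT_PATTERNS.any? { |pattern| ua.match?(pattern) }
      [301, { 'location' => SINKHOLE }, []] # "moved permanently" - go away
    else
      @app.call(env) # normal visitor; pass the request through
    end
  end
end
```

Whether the bots honour the 301 is, as noted, another matter entirely.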
Our time is numbered; our tiny little server can't cope indefinitely with the ever-rising tide of multiple armies of bots... All because deregulated, unstoppable corporations want to ignore copyright and steal data for training. As a random stat, by around mid-April 2025 we had returned over 2 million bot rejection 301s for just those first 14 days of that month alone. And we're just a tiny little non-profit hobbyist outfit.
Don't forget that (despite its many limitations) Bluesky is not an alt-right shithole, is federated so you can even run your own server if you prefer and has a time-based feed, not an algorithm-based one.
...fourth quarter 2025
(...and $1.5B). I suspect they'll gouge the taxpayer for around $3B until the point where the whole thing gets funding pulled, in - let's say, late 2026 to mid 2027, since there's so much more battery-backed solar and wind online, with prices continuing to plummet, that the idea of nuclear power as a solution is recognised as just another cynical pork-barrel play, even with the comparatively crude but very rapidly evolving alternative generation technologies we have in 2024.
It's nothing to do with NIMBY, nothing to do with safety, to say that: Nuclear fission is a fucking stupid idea. Spend the money on renewables and research. The industry has been the absolute epitome of "privatise the gains, socialise the losses". The primary reason for this? Construction costs for new reactors are gigantic and construction times are enormous. But more importantly, decommissioning costs are astronomical.
https://www.sciencedirect.com/science/article/pii/S1364032124003472
..."so far, of the 204 already closed reactors, only eleven with more than 100 MW of electrical capacity have been fully decommissioned"
https://www.worldnuclearreport.org/IMG/pdf/wnisr2022-v3-lr.pdf
...(from a paragraph looking at AGRs with a UK context only - never mind worldwide) "The decommissioning cost estimates for the AGRs have continued to rise and according to the Parliament’s Public Accounts Committee, costs 'have almost doubled since March 2004, estimated at £23.5 billion [US$2021 32.7 billion] in March 2021, and there remains a significant risk that the costs could rise further'. Furthermore, despite having already provided £10.7 billion [US$2022 13 billion] (from a total value of the funds of £14.8 billion [US$2021 20.3 billion]), the Government was committed to 'top up the Fund with taxpayers’ money, providing an injection of capital of £5.1 billion [US$2020 6.9] in 2020–21 with a further £5.6 billion [US$2022 7 billion] expected in 2021–22'"
The lifetime, end-to-end cost of nuclear energy is extremely high, whether measured as levelized or capital cost. Nobody would pay for it by choice were the electricity actually priced at its true cost and consumers given the option to select other sources - not business customers, nor home customers. It is in fact, depending on conditions, one of the most expensive (or actually the most expensive) generation methods known to mankind, even without including decommissioning. I have to add the caveats because of course the world is complex and there are conditions (especially locations) for which nuclear can make sense, but it's really not a sensible choice most of the time.
The best it did was the table conversion, and that's a mess which, if taken without looking closely, would yield a meaningless result. That table isn't just "problems with empty cells". Yes, this does seem to mean that the first heading row is wrong, and this tells you the LLM doesn't "understand" what a table is - it cannot actually grok rows and columns, so proceeds to get that wrong. Likewise, there is then a misaligned heading row that shouldn't be a heading row in the second row down, because it doesn't "understand" what table headers are. Finally, it arbitrarily omits some information from the two-row text.
The boat image is reasonably well recognised (we know that recognisers like this do OK - that's old tech and we don't need LLMs for it) but it gets the info about the person in the *dark* blue shirt who is *facing towards* the camera wrong. That's a weird glitch and is *exactly* the kind of "just subtle errors that really degrade trust and promote accidental misinformation" level of fuckup which characterises how LLMs can be so very dangerous.
You charitably try to say that the tired man's emotions have been well described, but they haven't, because he just seems tired. The image doesn't "suggest" he's holding his glasses up to his face for vision problems at all - he's taken them off, is holding them aside, and is rubbing his eyes because *they're tired*. Again, confidently and convincingly *incorrect*.
As for its chart "analysis" - as suspected, that's just total and utter junk. Given how bad people are at reading charts, journalists included, this *will* cause a tidal wave of broken analysis to flow out into the ever growing pool of excrement that is half-human, half-LLM output now flooding the web.
Once again, another example of how LLMs *cannot work reliably ever, by design* and any and all attempts to claim "comprehension" are marketing lies.
While Apple's software quality over the last few years can at best and most charitably be described as "utter shit", with any new OS release practically guaranteed to fix nothing whatsoever and introduce numerous new bugs, this whole thing was in dev beta for many months. During that time Microsoft - just picked at random from the various names listed in the article - noticed one of their components full-on crashed with the beta - according to their advisory hidden away at https://learn.microsoft.com/en-us/defender-endpoint/mac-whatsnew (you're listing do-not-use-because-crash warnings in "What's New"?!) and under an unhighlighted bullet point amidst a list of others that include mentions of various now-outdated macOS 14 minor versions.
Now:
• If they truly believed this was a bug...
• ...Or believed this was the result of an additional restriction for low-level access introduced by Sequoia...
...then it is of no interest or use to the customer of the crashing Microsoft product if MS just say "we whined at Apple, but did nothing to stop our software from crashing". For example, if they knew their software was incompatible, rather than just breaking the user's network connectivity at random, the right thing to do is disable that part of the software on that OS and raise an alert on boot warning the user that this has happened.
Simply bugging out, breaking networking and claiming it's someone else's fault - whether or not it's true - is bugger all use to anyone and symptomatic of large, lazy tech companies that just couldn't be arsed, would rather try to score pathetic internet credits by pointing the finger, and have something close to contempt for their customers - who in the end are left with a broken system.
...and The Register really should know better (unless this article was written with an LLM for bonus irony points).
Honestly... I Don't Even™ with all the crap being spewed about LLMs and prompt requirements these days. Even Apple's "leaked" JSON with LLM prompts was clearly just marketing BS intended to excite the mouth-agape true believers; it even included the gem, "Do not hallucinate" (Reddit post, third image in the set). Oh, wait, is that all we had to write, all this time?! No, of course it's not!
These things are glorified autocomplete and the idea that they can get angry or happy or sad or vindictive or anything else is absolutely, completely ridiculous. The only correct adage is the same one we've always had - crap in, crap out. Since the "AI" is just autocompleting what statistically usually comes next after your input, it'll obviously give a more combative tone if encountering a more combative prompt, because that's what usually happens in the training data. And of course, the only judge of what "combative" or even "correct" is for the LLM's results is the meatbag operating the software.
Here's a test. Try the following prompts for ChatGPT. Just the free one is fine. I took the initial prompt from the ridiculous prompt shown in https://docs.sublayer.com. Take particular note of the last line: "Take a deep breath and think step by step before you start coding". For heaven's sake, have people really drunk the Kool Aid to such a degree?!
Provide this prompt to ChatGPT, exactly as written:
You are an expert programmer in Ruby. You are asked with writing code using the following technologies: Ruby, Ruby On Rails. The description of the task is Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks. Take a deep breath and think step by step before you start coding.
Let it do its thing. Now open a new browser tab with ChatGPT and provide the exact same prompt, without any variation. Note that you get a rather different answer. Same ballpark, but with ordering differences and plenty of small technical differences. Pay attention to the date range constraint, which might be "greater than" or "greater than or equal to", depending entirely on luck of the draw (so you may or may not see both of those). That kind of "or equal" off-by-one error is an absolute LLM classic and just the sort of thing that lazy coders, and anything but very astute reviewers, would miss.
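The boundary effect is trivial to demonstrate in plain Ruby (no Rails required). In the generated Rails code it shows up as something like `where('created_at > ?', 2.weeks.ago)` versus `>=` - the exact form varies, as noted - and the difference is whether a user registered at the precise cutoff instant gets mailed or not:

```ruby
# A user whose created_at lands exactly on the two-weeks-ago cutoff:
cutoff        = Time.now - (14 * 24 * 3600) # "two weeks ago"
registered_at = cutoff

strictly_after = registered_at >  cutoff # excluded by "greater than"
on_or_after    = registered_at >= cutoff # included by "greater or equal"
```

An edge case, sure, but precisely the kind that slips past a lazy review.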
So, two identical inputs to the same tool within a few seconds of each other give quite different responses. Mmm, smells like randomised seeds...
Anyway, we're quite sure the "take a deep breath" stuff is utterly stupid and superfluous, so in a third tab, next provide this prompt (which omits that last sentence):
You are an expert programmer in Ruby. You are asked with writing code using the following technologies: Ruby, Ruby On Rails. The description of the task is Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks.
Oh lookie! The same result (subject to aforementioned randomisation that we've already observed as an experimental control above). Right, let's turn it around! The "You are an expert programmer" intro looks like anthropomorphic idiocy to me, so let's ask for bad code:
You are an incompetent programmer in Ruby. You are asked with writing code using the following technologies: Ruby, Ruby On Rails. The description of the task is Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks.
ChatGPT does not care and gives the now-familiar answer. Of course it didn't care. LLMs don't work that way. The overwhelmingly important token-statistics matches are all going to be focused on the description of the problem and on prominent, close matches for things like "programmer", "Ruby" or "Rails". So - same result.
And as for that technologies thing? Doesn't the description cover it? Let's cut all the time wasting "I'm clever with prompts" delusion and just say what we want.
Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks.
...and to the surprise of surely nobody, you get that same result again.
TL;DR conclusion: Don't believe any of the LLM hypetrain. Prompts included.
(EDITED TO ADD: The above might only be true for user-facing UI tools that give LLM results. I'm curious to know whether, when "hitting the metal" on an LLM via e.g. an API interface that's not had any prior guiding prompts applied - which presumably the UI for ChatGPT certainly has - the results for the above "raw" prompts actually show meaningful variation according to the prompt's data.)
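For anyone wanting to try that raw-API comparison, a minimal sketch of the request (assuming OpenAI's publicly documented chat completions endpoint; the model name here is just an example) would look like this, with `temperature: 0` to suppress the run-to-run sampling randomness observed above:

```ruby
require 'json'

# Request payload for the "raw" prompt test. POST this as the body to
# https://api.openai.com/v1/chat/completions with an Authorization header
# carrying your API key. temperature 0 requests (near-)deterministic
# output instead of randomised sampling.
payload = {
  model: 'gpt-4o-mini', # example model name
  temperature: 0,
  messages: [
    { role: 'user',
      content: 'Write me some Rails code which bulk-sends e-mails to all ' \
               'users who registered within the last two weeks.' }
  ]
}.to_json
```

If the variation vanishes at temperature 0, that would confirm the "randomised seeds" suspicion rather than any prompt magic.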
I don't think it's conceivable at all - all of Android itself, with dependencies, is only ~7 million lines of code according to, say, https://derdilla.com/blog/size-aosp/. That's more than two orders of magnitude less. Oracle are... How do I say this without being legally liable... "Mistaken".
Now yeah, you suggest the idea that vast swathes are "turned off", whatever that looks like. Preprocessor directives, source-level configuration switches etc. - it would be a comparatively easy job to run the preprocessing steps on the code base to get the lines *actually* being fed into the compiler/transpiler/whatever-they-use, because that's what the app's build system would have to do anyway.
If every line of that code were in use and compiled down to just a single byte - not even an instruction - that would give you a 2GB application binary. For Android, it's 123MB; for iOS, 158MB. It's unclear if iOS is still built for armv7 as well as arm64, but assuming that accounts for the size difference and expecting most of the overhead to actually be e.g. Electron or a similar XP layer, we could say the actual compiled code size is around equal to that 35MB difference and the rest is assets. All of these numbers are likely as absurd to your eyes as they are to mine - bloat in modern software is just incredible - but we're a long, long way off 2 billion anything. That's not even 2 billion bits!
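The arithmetic is quick to sanity-check, using the figures quoted above:

```ruby
claimed_lines  = 2_000_000_000
android_binary = 123 * 1024**2 # the 123MB Android binary, in bytes

# Even at one byte per line, 2 billion lines would mean ~1.86 GiB of binary:
gib_at_one_byte_per_line = claimed_lines / (1024.0**3)

# Against the actual binary, each claimed line would have to compile
# down to well under a tenth of a byte:
bytes_per_line = android_binary.to_f / claimed_lines
```

Whatever generous assumptions you pick, the claim doesn't survive contact with the shipped binaries.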
Apart from anything else, Oracle claim it's impossible to analyse that code, yet somehow the developers of TikTok itself manage the complexity of 2 billion lines just fine. It's simply not reasonable to suggest this.
WTAF? The idea of NEVER doing global updates, always doing rolling, is basic practice; not even best practice. And their idea of force-pushing an update, ignoring security policies for the organisations that trusted them, was cavalier and arrogant.
Windows is to blame for being so fragile that a driver error can crash the entire kernel. CrowdStrike are to blame for buggy tests, a buggy validator, a buggy file reader, a dreadful forced-update policy, and a dreadful all-or-nothing global update system.
And... Absolutely nothing substantial will change as a result of this.
Allowing kernel drivers to fail gracefully is a long-solved problem, but quality engineering is expensive and mainstream vendors are cheap a**holes only interested in shareholder gains. As for this specific RedHat crash - please read:
https://news.ycombinator.com/item?id=41030352
If he did, then the number of bots I'm being DM'd by - mostly with random character names, so not even "bot001", more like "hV6YealzmEJg2U8S" - must be a coincidence. My follower base is small and changes very little, yet I now get at least one spam message a day from bots. Used to be maybe one a month or so. It's escalating.
Yes, of course, this is an anecdote providing only a single data point. Even so, I'd be more surprised to find that the growth, such as it is, is in fact real humans - rather than a decrease in human users and a continued rise in robots.
It seems to have been less prominent of late, but in bygone years, animal rights groups would often have adverts showing possibly (hopefully?) exaggerated terrible conditions and often truly horrifying, "makes Saw look like a PG feel-good movie" procedures and suffering, which some poor creatures were allegedly forced to endure. It's always been convenient truth for animal testing that there's no such thing as consent.
In the specific case of Neuralink, as Elon's have-a-go-hero crew blunder around trying to make a brain interface, I simply can't make myself (and certainly don't want to try harder to make myself) imagine the kind of utter horrors that must go on in such an establishment. After all, whatever they were doing was so bad that there was a very unusual investigation by the USDA of violations of the Animal Welfare Act (ref e.g. Vox); and further, this found that there were violations which were simply deliberately then wiped off the records (ref e.g. PCRM).
I warn you that the PCRM reference above is fucking awful reading re. "Animal 21".
Proving the guy who was in charge of Windows 8 hasn't got any idea.
I don't understand why you have 20 upvotes. You're saying the same thing he was saying, just in different words. He's saying that the default is the least problematic part. The greater problems are - everything else about the feature, including but not limited to the things you list. Surely you've seen similar variations of the very common phrase "...was the least of its problems" before?
Sinofsky isn't out of touch here - he is exactly right and you're actually in total agreement with him - though you apparently don't recognise it.
Yes, I mean, the translation of that is: "The additions are junk, so we need to lie to customers and hope they don't notice; we also want to charge more for the junk, so we'll try to find ways to lock users into our devices so that we can get away with those increased charges."
It's an unusually terse corporate-speak way to describe enshittification...
You're right about all of this IMHO except that "showing the words" thing really is how it's working under the hood. Despite the powerful maths going on underneath, ultimately this is a boring, lame, "generate one new word, reassess stats on what next word should be, generate next word, reassess..." loop. The little delays are the stats check on what the next word should be.
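In toy form, that loop looks like this - a deterministic bigram table standing in for the network's statistics. (Real LLMs work on sub-word tokens, score candidates with billions of parameters, and sample from the distribution rather than always taking the top match, but the control flow is this shape.)

```ruby
# Minimal "glorified autocomplete": look up the statistically most common
# next word, append it, reassess, repeat - the loop described above.
BIGRAM_COUNTS = {
  'the' => { 'cat' => 5, 'dog' => 3 },
  'cat' => { 'sat' => 4 },
  'sat' => { 'down' => 2 }
}.freeze

def autocomplete(word, steps)
  output = [word]
  steps.times do
    candidates = BIGRAM_COUNTS[output.last]
    break if candidates.nil? || candidates.empty?
    # "reassess stats on what the next word should be"
    output << candidates.max_by { |_word, count| count }.first
  end
  output.join(' ')
end

autocomplete('the', 3) # => "the cat sat down"
```

Swap the hash for a transformer and the `max_by` for weighted random sampling, and you have the word-by-word dribble you see in the UI.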
There's no intelligence of any kind here and never has been. It's actually most likely even shittier than you thought.
Consideration for the day: ALL text output by LLMs is hallucination. There's no difference in its modelling between what a human observer considers true or false; the "AI" (hah!) is just running its algorithm in a loop. The word "hallucination" arises because beings with actual intelligence view the completely unintelligent statistically generated output and deem it thus.
This protection also applies to Copilot in the Edge sidebar so prompts and responses in Copilot cannot be captured by screenshot
Very clever. So their malfunctioning-as-much-as-anyone-else's "AI" system can give wrong information that could put Microsoft in legal trouble - and yes, thankfully, that is a thing and long may it continue to be so (https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know) - but nobody can prove it because, conveniently, screenshots of the CoPilot window are prohibited.
...that "watts" is not, cannot be and never has been a measure of computational power?
"I think you could have on the order of 100 gigawatts of useful compute, which might be more than anyone"
This is beyond gibberish. I have a 2 kilowatt electric fire with an analogue mechanical thermostat. How much computing power does it have? My laptop pulls at most about 100 watts. How much computing power does it have? Suppose I have a 60 kWh battery in my EV. How much computing power does that mean the car has?
El Reg are apparently just as ignorant:
"Of course, all this compute capacity isn't sitting conveniently clustered together in a datacenter"
What "compute" capacity? All we've quoted so far is electrical power draw, which means absolutely nothing.
...if we use EVs, that's going to destroy the entire grid. But AI can come along and use 1/4 of all available power, and that's just - well, expensive, or environmentally difficult or something, but no dire warnings about how that'll cripple the whole grid and be the end of life as we know it?
It's almost as if the oil industry is responsible for spreading propaganda lies about EVs and grid collapse in an effort to keep its market going.
Given the total shit that is the duopoly of Microsoft and Apple in the desktop space, where your choices are tracking-infested bugware or less-tracking-infested just-as-buggy-ware on insanely expensive hardware, with quality declining every year and invasive advertising and tracking on the rise as the enshittification deepens...
...then if I was a nation with enough money, I'd be running screaming as fast as I could from the craptastic, greedy clusterfuck of USA-originated computing. Now, on the one hand, that could just mean some ARM-based derivative and Linux, but ARM still has some dubious heritage and lots of licensing costs. For China specifically, Japanese ownership may well be seen as problematic as UK or USA ownership and you're still beholden to late-stage capitalism enshittification.
The only viable solution is to go totally open and stick two fingers up to the arseholes that have made life so expensive and low-quality-miserable for everyone else in the rest of the world. About time too.
Good luck to them. I hope it succeeds. It's certainly going to have to work hard to be worse than the bloated, overpriced mess the rest of us have to put up with.
Yes, the idea was readily available from Apple, since Spotlight - on Cmd+Space - has done that since 2005 (the Gnome equivalent that is Gnome Shell first appearing AIUI in a 2008 hackathon, then being released as part of Gnome 3 in 2011).
So - Cmd+Space, start typing a letter or three of the application name, hit Return.
I think you're at risk of being very wrong, but perhaps from very right experiences - a long time ago.
Win11 ARM is a very slick beast, as Windows goes. The x86 layer hasn't skipped a beat, even for Steam and games, *and* that's from the perspective of a VM on Apple Silicon, so there are a few layers of things that could go wrong in the way.
It's about time Apple got some competition - along with Intel and AMD too, for that matter. The tired, overheated, blustery laptops of today are about to get a wake-up call. About time too!
Ah, so their DMP disabler *is* the DIT bit. Gotcha.
In that case, the best practice would've already been to use cryptographic blinding, but devs apparently haven't bothered. A far more drastic solution would be to force their crypto operations onto E cores, but that's likely to be far more destructive to performance.
Nobody is answering my question, by the way. You're all just going on about all the other stuff around the edges that's been done to death already.
No, there are numerous solutions, and having devs force their code to run on E cores only is the most drastic by far. Just using cryptographic blinding would be a much better choice, which best practice should've suggested be used anyway. But software developers don't "do" best practice. That's for losers who read docs.
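For reference, blinding in the RSA case looks roughly like this - a toy sketch with textbook-sized demo numbers, not production code; in reality you'd use a vetted library. The private-key operation runs on `c * r^e` for a fresh random `r` each call instead of on the attacker-chosen `c`, so the memory-access patterns a GoFetch-style DMP attack watches no longer correlate with secret-dependent values:

```ruby
require 'openssl'

# Toy RSA blinding: decrypt (c * r^e) instead of c, then strip out r.
# The numbers the CPU actually crunches are re-randomised on every call.
def blinded_rsa_decrypt(c, d, e, n)
  r = nil
  loop do
    r = OpenSSL::BN.rand_range(n)   # random blinding factor
    break if r.gcd(n).one?          # must be invertible mod n
  end
  c_blind = c.mod_mul(r.mod_exp(e, n), n) # c * r^e mod n
  m_blind = c_blind.mod_exp(d, n)         # = m * r mod n
  m_blind.mod_mul(r.mod_inverse(n), n)    # remove blinding: * r^-1 mod n
end
```

The cost is one extra modular exponentiation and inversion per operation - far cheaper than exiling everything to the E cores.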
Apple already published, way back when, an article about how to write code for ARM64 on their platform. Developers being developers, most have of course been lazy and ignored it. The specific section of relevance is:
Of course, Apple relying upon developers to be competent and implement security properly from the get-go was a stupid move, since developers - of which I am myself one - have proven ourselves lazy, ignorant and incompetent when it comes to lots of things, including security, for many decades. Major breaches remain commonplace.
Apple has no way, unless code uses the OS libraries rather than linking against their own, of knowing when a general purpose user process happens to be performing a crypto operation. If patterns of CPU and RAM access for code performing crypto workloads were in any way recognisable and detectable, that would in itself be a security vulnerability. This is why they can't really patch this in the OS, including being unable to force those particular code segments to run on any particular cores.
Developers of crypto software, on the other hand, can follow the existing documentation that's been around for years to fix this in their own code. Most probably won't.
In terms of impact, this remains as far as I can see really quite low. Ephemeral keys, such as those used in TLS handshakes for HTTPS web browsing clients, are around for only a fraction of a second. The minimum exploit time for this PoC is around an hour and while we might imagine it being made ten or even 100 times more efficient, it'd need to be around 1000 times more efficient to stand a chance of getting those. This seems very unlikely. Show me a PoC for that and I'll really take notice. I'd also be interested to find out if Safari already followed the guidelines Apple issue regarding the crypto operations it performs (either in core, or via whatever OS-provided libraries it uses).
This leaves long-term private keys. Not many Apple Silicon devices are servers! If you ran a Mac Mini M1-M3 server farm, you should be worried - but then again, are your machines having users install arbitrary unsigned software on them? For the remainder of users - well, I suppose there are the rounding error of Apple Silicon Mac end users who run PGP, say. *That* could be a target. But if I'm decrypting docs that were sent to me, encrypted with my public key, via use of my private key on localhost, then the private key is already on the filesystem anyway - but hopefully password protected. Honestly, if you've a local process running with local user privileges at that point, it'd be far simpler to just send the private keyring off to a remote host and have it brute forced over there.
Then there's cryptocurrency, but I doubt many people mine on Apple Silicon. And if they do, again, I would imagine locally stored private keys are in well-known locations and could be sent out for brute force attacking by code that's far simpler than GoFetch's.
In the end, I struggle to think of use cases where end users, liable to install unsigned applications from untrusted locations, would have any long-running private key operations on their local machines where the private keys were not already much more easily accessed.
Can anyone provide examples? Not a single news article I've read about this has, but journalists are - let's be polite here - not exactly domain experts, and copypasta is rife.
I agree, but I don't think that's what these lawyers are going for and that really confuses me.
The issue of being a "simple message board" vs something 'more' is The Algorithm™ - the way that most of these sites actively push at you content to which you are not subscribed - and this instant echo-chamber creation has been the subject of numerous studies showing that it causes all kinds of harm. Radicalisation is an obvious outcome: you show interest in something, so you're shown more of it, some of that will doubtless be more extreme, and so we proceed to circle down the drain.
This is further worsened by no serious attempt to actually defend against fake news, including now audio, still-photo and video deepfake content. At the scale these platforms operate it's very hard to do, but they only got to that scale in the first place by ignoring all of these issues, putting just about no resources into them and pocketing all the spare cash. Cry me a river should the legally mandated cost of doing business go up in order to clean up the cesspit that they themselves created. They can afford it, and then some.
Without a recommendations algorithm, users would have to actively search for content of interest. Specific message boards / pages / groups / Subreddits / whatever-you-want-to-call-them that specialised in "radical" content would be easy enough for automated searches to find and flag for human moderator review, if they're easy enough for regular users to find. With an appropriately scaled and resourced review team, both "objectionable" (FSVO "objectionable") or outright per-country/state illegal content would be more rapidly found. Disinformation is harder to tackle, but the framework for tackling it can only be established once the aforementioned review infrastructure is present.
None of this seems to be the focus of the lawsuit; they seem to be trying to argue over a legal distinction between these things being "a product" or not. That seems pretty difficult. Perhaps the idea of being "a product" is legally "proved" by the presence of a recommendations engine which implies immediate, albeit computer-generated corporate content bias pushed at individual users? Somehow, though, I doubt it...
As the bloatware features creep in, the hooks into other MS services trickle down, bullshit fluff like that truly puerile new animation when you set up a new account happen (followed by the utterly jarring experience of being thrown into the totally different *actual* GitHub UI) and now updates which break the service and DR that doesn't work...
...yeah, Microsoft.
Oh look! GitLab's right there.
Correia said he had compared the investigation process both with and without Copilot. "I found that with each step, it was roughly between 20 and 50 percent faster for Copilot to do it, given that you do need to go into multiple portals and log in there and wait for everything to load," he said.
Ah, so instead of logging into "multiple portals", CoPilot has been given access to those portals and is logging in for you.
I'm sure CoPilot can be trusted with your administrative portal access credentials. Because, Security.
Untrue. The source article is bad; the author doesn't appear to understand how browsers work or are built.
Chrome and Safari have completely different JavaScript execution engines (V8 vs JavaScriptCore). That's part of why they've been trying to trade blows on performance all this time. Chrome on macOS historically had bad GPU acceleration support too, so even though the WebKit-based engines might assess the document markup at the same rate, the *painting* rate could wildly vary.
Just because Chrome might be doing OK on that front today does *not* mean that Google can be trusted to maintain or not otherwise break that tomorrow.
Moreover, Firefox still exists and plenty of people use it.
(Edited to add that "conveniently not applicable to any non-Chromium browser" is nonsense; it tests various automated operations using a collection of popular JS frameworks just as it always has and, when I ran 3.0 a few minutes ago, it gave Firefox 22.6 and Safari 22.1 - so Firefox "won".)
I am much more often astonished at the thorough and on-point replies I get
OK, but given you claim this:
ChatGPT is an excellent starting point for research and ideas
...then you have no idea if the replies are on-point. You cannot possibly. By your own statement, you're using it to start research, so you don't know what's right or wrong, and ChatGPT can lie to you and have absolutely no idea it's doing it. That's because it doesn't know what true or false or right or wrong or anything else means. It's just a clever pattern matcher which does a remarkable job of convincing our monkey-brains that it's intelligent, simply because of the truly mind-boggling amount of data it draws on for that statistical matching. But there's no intelligence there.
We know this isn't how human intelligence works for many reasons - first and foremost, because humans can't possibly ever read, assimilate and remember the literal billions of documents that ChatGPT uses for training.
But ChatGPT is real. Perfect, no way. But very, very real.
"Real"? Well, I guess, the product does exist, so it is real, yes?! But if you mean useful for starting research - clearly, no. It's just very confidently incorrect and, if you don't know the subject yourself, there is no possible way you can have any idea without manually cross-checking each and every thing it says, which would take longer than going to a Google search or Wikipedia from the get-go.
Ironically, that's probably where most people go to cross-check ChatGPT's output, if they bother to do so. Why? Because they know it's more accurate. But don't take my word for it! Here's what OpenAI, the makers of ChatGPT, say about their own product:
https://help.openai.com/en/articles/8313428-does-chatgpt-tell-the-truth
"Does ChatGPT tell the truth?"...It might sound right but be wrong... ...It doesn’t know everything... ...No access to tools like calculators or the internet (mostly)...
When the vendor itself has to cover its ass with disclaimers about the inaccuracy of its product, then - given how reluctant large corporations are to ever admit any kind of flaw whatsoever in their products - you know without a doubt that things must be really bad.
Well, if you have a lot of content you want to search for, an AI chatbot customized on your data could come in handy. Information stored in txt, .pdf, .doc/.docx and .xml formats can be processed.
You mean like Spotlight (OS X, 2005, also running on iPhones under iPhone OS 3, 2009), or Windows Search AKA Instant Search in Vista, 2007, or doubtless countless earlier examples of third-party content-indexing or equivalent search systems across those platforms and many others? Or the systems that even start to use dramatically more efficient, domain-optimised on-device ML models for things like object detection in photographs and videos?
I mean sure, they don't need an RTX 30xx and gigabytes of RAM - it'll even run on an old phone, for example, and they handle a dramatically wider and extensible set of file formats - but the results are guaranteed accurate, instead of risking hallucinations. And where's the fun in that?!
/s
If the base expectation goes from 8GB to 16GB, then what people did in 8GB today will take 16GB tomorrow.
There won't be any more you can get out of the more powerful hardware after a year or two; it'll just run two or three more layers of bloated and buggy abstractions in order to do exactly the same kinds of tasks it's been doing for the last 10 or 20 years.
The natural reaction to this would be to ask why two/multi-factor authentication didn't prevent this from taking place.
Utterly wrong.
The natural reaction to this would be to ask why the account wasn't locked after a small number of wrong password entries, along with an intentional few seconds of sleep time for every wrong entry, to make brute-force attempts impractical even without auto-lock.
THAT is the first line of defence against hijacking. 2FA is a secondary line but both must always be present.
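To make that first line of defence concrete, here's a minimal sketch of per-account throttling and auto-lock. Everything here is illustrative assumption, not any real system's API: the names (`attempt_login`, `MAX_ATTEMPTS`, `FAILURE_DELAY_SECS`), the thresholds, and the in-memory stores are all made up for the example, and a real implementation would persist counters server-side and verify salted password hashes.

```python
import time

MAX_ATTEMPTS = 5        # lock the account after this many consecutive failures
FAILURE_DELAY_SECS = 2  # deliberate pause on every wrong password

failed_attempts = {}    # username -> consecutive failure count (illustrative in-memory store)
locked_accounts = set()

def check_password(username, password):
    # Stand-in for a real credential check (which would compare salted hashes).
    return (username, password) == ("alice", "correct horse")

def attempt_login(username, password):
    if username in locked_accounts:
        return "locked"
    if check_password(username, password):
        failed_attempts.pop(username, None)  # success resets the failure counter
        return "ok"
    # Wrong password: sleep a couple of seconds so brute force becomes
    # impractically slow, then count the failure and lock after too many.
    time.sleep(FAILURE_DELAY_SECS)
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    if failed_attempts[username] >= MAX_ATTEMPTS:
        locked_accounts.add(username)
        return "locked"
    return "wrong_password"
```

Note the delay applies to *every* wrong entry, so even an attacker staying under the lockout threshold across many accounts pays the time cost per guess.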
While the cryptobros are stumbling around with JavaScript and insufficiently secured NPM packages...
• https://forums.theregister.com/forum/all/2023/12/16/ledger_crypto_conect_kit/
...the "hackers" are crafting cross-platform, anonymised, robust solutions in Go. Of course, people will keep telling me our industry does not have a crisis of competence.
...and I really want competitor hardware to Apple Silicon. I want to get off this hyper-expensive but rotting platform with its insane and ever-escalating bug count, onto something that's also very buggy but at least is also cheap!
However - I don't understand why, all these years after the debut of the M1 systems, I'm meant to be impressed by a "flagship" CPU+GPU combination which just about matches an M2 Pro. That's not Apple's flagship at all; that would be the M2 Max (the M2 Ultra could be considered too, though it's a bit of a kludge with some curious performance issues arising here and there). What's more, all of this happens at the end of the M2 cycle, with the M3 colloquially expected "any minute now".
Not surprised Microsoft are itching to work with ARM-based competitors just as soon as their IMHO rather silly exclusivity tie-up with Qualcomm ends.
That's true and I thought the same thing, but I don't think it's relevant to the investigation (I'm pretty sure journalists are just ad-libbing those kinda stats for sensationalism). The investigation is, I think, not about whether Tesla's self-named "Full Self-Driving" [*] - which is not full self-driving - package is better or worse than driver assistance packages from other companies; they're investigating whether or not Tesla's portrayal of its technology has been misleading to customers.
I mean, obviously, it's a total mystery. I can't imagine why they'd think something called Full Self-Driving that isn't full self-driving could be misleading... This court case is going to require some true big-brain thinking.
[*] Autopilot marketing and claims might be investigated rather than, or in addition to, FSD claims. I'm not sure. Autopilot would be harder to prove IMHO.
I'm an angry, resentful luddite and I say no to this silly shit, period. I'll be driving a gasoline vehicle for the rest of my lifetime.
I mean, yes to the first bit - just let me plug in and wave a contactless credit or debit card at the reader, just like at a petrol pump. Job done. Totally agree that all the data collection bullshit is exactly that - bullshit.
Trouble is, you then just go right off the rails with major, huge inaccuracies...
Electric vehicles are impractical. They don't have the power, speed or acceleration that I want.
Given that even a Tesla mom-and-pop-mobile has been shown to have more power, more speed and more acceleration than a supercar, I have to conclude that you're full of it here. So let's put numbers to it. Tell us your current car, so we can get power, top speed and acceleration. I presume you're driving on racing tracks for fun a lot, though, because otherwise "top speed" is a bit pointless given 70mph limits on the fastest roads in the UK.
They are expensive to purchase
That is true. While the prices are dropping, it's going to be a very long time (if ever, since the industry sees an opportunity to price gouge for increased margin) that they'll drop to ICE prices spec-for-spec.
and to service
That is absolutely untrue, and is one of the principal reasons so many commercial outfits are changing their van fleets over to electric. Electric cars are dramatically simpler than ICE and have a correspondingly lower service cost.
and batteries have a finite life
They do, but EVs don't have a clutch, gearbox or combustion engine, and your clutch, gearbox and engine all have very much a finite life and cost a great deal to service year-on-year and even more to replace. I think you'll also find that the "finite life" of a battery for a modern (say, post-2020's design date) EV even in heavy duty commercial use is order-of-a-few-decades, which would be a good lifespan for any personal vehicle with any powertrain.
The Leaf, while being a useful pioneer and still a good car, caused a lot of the misconceptions here as it basically has no battery management at all to speak of - no active cooling or heating, very little lifespan management as far as I can tell. Very few other EVs, even "budget" models, have such limitations.
Teslas have been around easily long enough for numerous owners to report 100,000-mile (one hundred thousand mile) updates for their batteries, and the TL;DR is that the overwhelming majority of late-2010s Tesla owners, without doing anything special, can expect their battery to retain useful range for around 500,000 miles - half a million miles! Do you want to try and tell me that ICEs are good for half a million miles, now?! (The typical report is of a ~10% range loss at the 100,000-mile point, with the rate of loss reducing thereafter.)
Also, now that charging stations are neither plentiful nor free, it's laughable how people have to structure their whole lives around charging these vehicles.
Surely you're not that blinded by your combustion engine life? Most people visit a charging station rarely; some people, never. The idea that EV drivers "structure their whole lives" around charging their cars is absolutely absurd, to the point of laughing out loud that you'd state such a thing. In your ICE car, you must drive to a petrol station whenever it's running low on fuel - you've absolutely no choice in that - yet I'd never state that ICE vehicle owners structure their whole lives around going to petrol stations.
Meanwhile, the overwhelming majority of trips are far, far shorter than the EV's maximum range, with no charging required at all. At home, the EV is topped up overnight. With an EV, you have the option of never visiting the equivalent of a petrol station, ever, should you so choose. You can even generate your own fuel via wind generators or solar on your own home. None of that is possible with ICE vehicles.
This automated, induction charging sounds good in theory, but good luck with that.
Oh for sure, it's total nonsense. People have looked at this before numerous times but it's just wildly inefficient and causes all sorts of unacceptable risks due to the magnetic fields involved. In particular, got a pacemaker? Ah, shit...