* Posts by JacobZ

227 publicly visible posts • joined 26 Nov 2014

Page:

Bezos-backed Unconventional AI aims to make datacenter power problems go away

JacobZ

Re: Twaddle

One pedantic point: LLMs are not trying to emulate the analog processes of brains in order to reason like brains. They are merely trying to emulate the outputs of brains in the belief that if you scale that large enough, reasoning - or intelligence - will somehow magically appear. Or as linguists might say, if you imitate surface structure well enough, deep structure will magically emerge.

In other words, it's Cargo Cult Computing.

Or if you prefer a cruder analogy, it's like shoving sh*t up a cow's arse and expecting to get grass out of its mouth.

Cloudflare suffers second outage in as many months during routine maintenance

JacobZ
Headmaster

Re: I couldn't give a.....tinker's damn!

El Reg comment threads are antisocial media.

Campbell's CISO canned after lawsuit alleges hour-long rant against staff and customers

JacobZ

Bio-engineered meat insanity

Just for the record, bio-engineered meat costs around $60/kg. Real chicken costs less than $2/kg wholesale, probably even less in the industrial quantities Campbell's buys it in. It would be absolute insanity for them to use it in soup.

Anthropic is at the heart of the latest billion-dollar circular AI investment bonanza

JacobZ

Re: Hot air (and water)

The limiting characteristic of a datacenter, the one that is reasonably stable over time, is how much power it is supplied with and can distribute to its racks for compute and for cooling. The amount of compute, however you choose to measure it, is going to vary over the life of the datacenter, for example if you upgrade the GPUs (megaflops) or change the model (tokens).

So power capacity is both the only thing that is stable and the only thing that is reasonably predictable from the outset.

JacobZ

Stranded assets

The technical term is "stranded assets". And the people holding the bag are the private capital firms who accepted the GPUs in the bit barns and/or the Big Tech leases that "promised" to use those bit barns as collateral for their loans. And possibly the traditional investment firms who lent money to the private capital firms.

The whole thing is the absolute classic definition of a bubble. I suspect the people investing in it already know this, much like the 2008 housing bubble, and the game now is simply to not be the one holding the bag when the music stops.

Commodity memory prices set to double as fabs pivot to AI market

JacobZ

Just in time...

...for the AI bubble to pop, and the chipmakers to have to switch back again.

OpenAI’s viability called into question by reported inference spending with Microsoft

JacobZ
Facepalm

Microsoft knows best

Microsoft knows exactly what OpenAI's revenues are, because it gets a cut.

And it knows what OpenAI's losses are, because it has to report its share in its own quarterly reports.

And of course it knows how much (little) it is paying OpenAI for reselling its models on Azure (bear in mind that under 2% of Microsoft 365 users are licensing Copilot).

And knowing all that... Microsoft is partnering with Anthropic (https://www.cnbc.com/2025/09/24/microsoft-adds-anthropic-model-to-microsoft-365-copilot.html)

Big Tech's control freak era is breaking itself apart

JacobZ

Re: As irrational as it sounds...

"Experience is that thing you get right after you could have used it."

Agents of misfortune: The world isn't ready for autonomous software

JacobZ

Finally...

...a meaningful comparison between AI and Uber:

"Don't waste time asking for permission. Establish market dominance first, then ask forgiveness."

By the way, is it a requirement to be an AI leader that you throw out blatant lies to promote your product, or is it just a coincidence that OpenAI, Anthropic, and Perplexity all do that?

Famed software engineer DJB tries Fil-C… and likes what he sees

JacobZ

Re: Type checking and compatibility

FWIW, and that's probably not much, I prefer "bool flag := <bool expression>". It is a useful redundancy that ensures that the (potentially hard to read) expression is of the type that the author and later reader think it should be. While this is not a big deal for bool, it can be a big deal for numeric types or for references (how many C bugs occur because something is a pointer to a pointer, rather than just a pointer?)

Also FWIW my own language project [actually a pre-processor to Go since they won't add it to the language] goes even further with compile-time checking. It allows the programmer to define named types, much like any language, and then also "compound types", or as I call them, dimensions. For example, in C-ish pseudo-code it might look something like...

type Meters double;

type Seconds double;

type Velocity (Meters/Seconds);

And then if you have m, a variable of type Meters, s of type Seconds, and v of type Velocity, you can write:

v = m/s;

or

m = v*s;

but not

v = m*s; //wrong dimensions, failed by dimension checker even though underlying types are all double

The dimensions will be checked at compile time, leaving runtime code as efficient as if the dimension checker never existed.

Of course, this is a relatively trivial example. The value becomes more apparent the more complex an expression becomes, and when calculations are chained together.

JacobZ

Re: Not quite ...

Is the hammer responsible for bending nails?

Tony Hoare certainly thought so, and he knew a thing or two about writing languages: "The author of a language is responsible for errors commonly made by its programmers" (wording may not be exact, intention is.)

When I was learning Rust and I got to the sentence that began "A common mistake made by Rust programmers..." I threw the book against the wall. (Not my only issue with Rust).

Look at it this way: if you had two hammers, and in the hands of the same skilled carpenter, one of them drove straight every time, and the other one bent one nail in ten, would you say that the hammer was responsible? Because I sure would.

Microsoft just revealed that OpenAI lost more than $11.5B last quarter

JacobZ

10% of profits is not easily shaken off, even for MS

A loss of $3.1B out of a gross profit of $30.8B is 10%, and that is not "easily absorbed" even by Microsoft. If you had told investors mid-quarter "hey, our profits are going to be 10% less than they would have been without OpenAI", do you think they would have said "this is fine"?

These are seriously big numbers, and they are getting bigger, not smaller.

AI boffins teach office supplies to predict your next move

JacobZ

Ceiling cats

"The system combines ceiling-mounted cameras "

Make that ceiling-mounted cats and objects that throw themselves off the counter, and I'm in.

JacobZ

Re: Could be useful if

I feel like mine do that already. I own three hammers, and I'll be buggered if I can find one when I need one.

OpenAI's ChatGPT is so popular that almost no one will pay for it

JacobZ

Re: Nothing like statistics and creative maths to make the world go around !!!

I don't know why somebody downvoted you. That's *literally* how OpenAI etc. are calculating their Annual Recurring Revenue (ARR). They take their most profitable recent month and multiply by 12.

Actually, given the lack of transparency, it might not even be a month. It's possible they took their best day and multiplied by 365.

JacobZ

Re: One day...

Prescient.

In the classical enshittification model as practiced e.g. by Facebook and Amazon, phase 3 begins when having locked in the business users, they start screwing them over too.

For a preview of what that will be like in AI, look at Anthropic. Anysphere, the maker of Cursor AI, reportedly pays 100% of its revenue to Anthropic for use of its Claude models. Meanwhile, Anthropic is developing Claude Code, its own AI coding assistant. As soon as they judge it good enough, they will increase the prices they charge Anysphere, tipping them into unsustainable losses, and pick up their user base on the premise that it's easier to move to another Claude-based tool than to start over somewhere else entirely.*

*I have no idea whether it really is easier, but that's how it will be sold to the suits.

Some like it bot! ChatGPT promises AI-rotica is coming for verified adults

JacobZ

What does ChatGPT say?

I asked ChatGPT about this, and it adamantly and repeatedly denied that OpenAI would ever do such a thing:

"I haven’t seen any press release or public statement from Sam Altman, OpenAI’s CEO, where he explicitly says OpenAI will create porn chatbots. It’s possible that there’s been some confusion or misinterpretation, as Sam Altman and OpenAI have generally been focused on ethical AI development and ensuring that their models are used for positive societal impact.

OpenAI has been quite clear in its guidelines and efforts to restrict harmful, illegal, or unethical uses of its AI, including adult content. They’ve taken active steps to prevent the misuse of their technology in various harmful areas, including by implementing content filters and moderation features to ensure users don’t engage with inappropriate or damaging material."

It also said that "If such a claim were true, it would likely spark significant ethical debates, as creating or allowing AI to be used in adult or explicit contexts raises major concerns about consent, safety, and harm."

Then I pointed it at this story on El Reg, and it admitted that "Yes, OpenAI (via its CEO) has publicly confirmed an intention to allow erotic/mature content for verified adults starting December 2025, under new age-gated arrangements."

I now feel very dirty.

Not because of the pornbots, but because I used ChatGPT.

18 zettaFLOPS of new AI compute coming online from Oracle late next year

JacobZ

Water, water everywhere

Yep. And on an even more basic level, imagine if that much water were supplied to people who don't have clean water, or to agriculture for small farms. According to a study by the Lawrence Berkeley National Lab, in 2023 U.S. data centers consumed 17 billion gallons (64 billion liters) of water directly through cooling, and the study projects that by 2028 those figures could double, or even quadruple. The same report estimated that in 2023, U.S. data centers consumed an additional 211 billion gallons (800 billion liters) of water indirectly through the electricity that powers them.

Fast forward that number to 2025, and projections for 2026...

Qualcomm solders Arduino to its edge AI ambitions, debuts Raspberry Pi rival

JacobZ

Re: An obvious choice

And a lot more profitable to start with one that's hugely popular.

Nadella hands Microsoft money machine off to new commercial CEO so he can visioneer the future

JacobZ

Re: Don't Replace The Workers With AI... Flip The Script.

Now that is some out-of-the-box thinking. It's a whole new paradigm.

JetBrains backs open AI coding standard that could gnaw at VS Code dominance

JacobZ

Embrace, Extend, Extinguish

Sure, Microsoft will adopt ACP.

Then it will add some unique proprietary extensions.

And then it will drop or modify some key features so that your agents will only run with VS Code.

And anybody not paying sufficient attention will be locked into Microsoft again.

AI has had zero effect on jobs so far, says Yale study

JacobZ

Things that make me go "hmmm"

If AI is producing such tremendous efficiency gains for companies, why can't any of the AI services - OpenAI, Anthropic, Anysphere (Cursor) - charge enough to even cover the cost of a query?

If these claims were true, you'd think that would be something really valuable that companies would pay a lot of money for.

JacobZ

Re: Do you trust Yale or Stanford more?

Yale. Stanford is conflating correlation with causation. And with stupidity.

SHORT VERSION: Big Tech has been cutting jobs for almost three years now. This long predates any possible effect from AI. AI adoption is just the latest convenient excuse for cost reductions.

LONG VERSION.

What is actually going on with IT jobs is this:

- Up until a handful of years ago, Big Tech firms (you know who the usual suspects are) were wildly over-hiring in anticipation of growth needs, partly to "fill the bench" with trained, fully on-boarded people ready to go, and partly to deny that talent to their competitors. Basically, like Chelsea FC.

- Then a cold dose of reality sets in. They realize that they are paying lots of people lots of money to twiddle their thumbs, learn Rust, read El Reg, and other time-filling. But nobody wants to be the first to admit a mistake* and start firing people. (Or "rightsizing" or "balancing the workforce" or "reskilling" or whatever.)

- And then along comes Musk, who buys Twitter and starts slashing jobs in the name of efficiency -- not because he has spotted overstaffing and has the courage to address it, but because he has no idea what people at Twitter do, and in his mind, anything he doesn't understand can't be important. Of course, his cuts are entirely capricious and in many cases hit essential people, but they get the payroll down. (Remind anybody of anything?) This is right around Nov 2022. (By an odd coincidence, that is also when ChatGPT was first publicly released, but years before it would be credible as a scapegoat.)

- The other tech bros sit up and pay attention, and suddenly every Big Tech firm "realizes" that it too can make "efficiency savings", and they also start cutting jobs. Not only are they letting experienced and inexperienced people go, they are also rescinding job offers to grads and cutting way back on making new offers to the Class of '23 (as the Americans say). IT unemployment starts to rise. This is well-documented online if anybody wants to verify, both in news stories and in financial notes from the likes of JP Morgan.

- Fast forward a couple of years. The universities have still been pumping out IT grads (because people can't switch tracks on a dime), there are few to no entry level jobs to be had, and the newly minted grads are competing with the laid off people who have real experience on their resumes. No surprise, it's very hard for a lot of them to get hired.

- Obviously, Big Tech can't keep firing people in the name of "efficiency" forever. But then along comes AI, and the usual carnival barkers are shouting that AI will devastate employment *even though there is absolutely no evidence for this* and *lots of evidence that efficiency gains are marginal at best, especially in programming jobs* and *even if it does happen it is years away at least*. Never mind reality, here is another perfect excuse for companies to cut payroll. So now all these firms are continuing to cut but now they are attributing it to "AI automation" for which they have not a sliver of evidence.

- (In truth, there is also some amount of companies firing people and attempting to replace them with AI, but mainly because the managers have no idea what the workers do, and the remaining workers are taking up the slack, and when things start to fall apart they will have to walk it back. But that is secondary, especially among Big Tech.)

- And then along comes Stanford, probably with an outcome already in mind, one that will grab far more headlines than Yale's "null result"**. They note that (a) IT employment is down, and (b) lots of firms are *saying* that they cut jobs because of AI, and (c) the downward trend started right around Nov 2022, and they draw a conclusion that is simple, obvious, and wrong (to paraphrase H.L. Mencken), and point at ChatGPT. This is, of course, complete nonsense: although it was released at the same time as the Twitterpocalypse, it would be at least two years before job replacement could be remotely plausibly attributed to ChatGPT.

----

*Compare also: wildly overspending on AI infrastructure. Not wanting to be first to admit a mistake is a fixed feature of recent tech fads. There is a powerful herd mentality (I feel like there is a "nerd mentality" joke there somewhere?)

**Sadly this happens a lot with all kinds of studies, including scientific ones. Null results are dull, and go in the bottom drawer.

Oracle will have to borrow at least $25B a year to fund AI fantasy, says analyst

JacobZ

These numbers are fantasy

Oracle does not have the hundreds of billions required to build all those data centers.

OpenAI does not have $300B to give Oracle. It doesn't even have $30B. Unless two miracles happen, it won't even have the $20B SoftBank has promised it (and which SoftBank doesn't have either.)

And no, NVidia is not going to give OpenAI $100B to build datacenters. It's going to give it $10B at a time, each time it brings a datacenter online. And those datacenters will cost $30B to $50B each.

All of these numbers are completely imaginary. None of this money exists. None of these "commitments" can possibly be fulfilled.

Larry Ellison might as well be sitting on his next quarterly call with his pinkie finger to his mouth demanding "One trillion dollars!".

JacobZ

Oracle is cashflow negative

While this is true, the outflows for AI capital spending are more, leaving Oracle cash flow negative to the tune of about $6B last year. Two more quarters of that would completely deplete its cash reserves.

And it's going to get even worse as it starts building datacenters that OpenAI has not - and might not - pay for.

Vibe coding platform Replit's latest update is infuriating customers with surprise cost overruns

JacobZ

The next turn of the enshittification screw

On a related note, Anthropic increased what it charges Cursor, so expect that to go up in price too.

We're going to see a lot more of this as these GenAI companies try to increase their prices to something closer to break even. And we're going to see a lot of usage fall away as customers decide that the value is not there when they have to pay a realistic price.

This is the future of vibe coding: higher costs, worse results.

OpenAI reportedly on the hook for $300B Oracle Cloud bill

JacobZ

Re: Why Oracle?

The latter. MS has been "consciously decoupling" from OpenAI for a while now, ramping down its willingness to give them server time at cost, and canceling some DC contracts that were speculated to be for OpenAI's growth in demand.

Whether this means anything I don't know, but it is a good time to remind ourselves that MS has completely open access to all of OpenAI's technology as well as its books.

JacobZ

Re: But the empire he has no clothes

True, but unlike this one, some earlier bubbles at least left something behind: when the Dutch Tulip bubble burst (did anybody mention the South Sea Bubble, which I believe is where the term "bubble" was first used?), we still had some lovely tulips.

When this one bursts, we're going to have a bunch of empty datacenters and rapidly depreciating GPUs.

The Unix Epochalypse might be sooner than you think

JacobZ

Re: Attitude problem

If you remember the year zero, you weren't really there.

JacobZ

Re: It's not such a big problem out of museums, I suppose.

"Confidently wrong" is never a good look.

In the second half of the 1990s, I was a product manager on Oracle Forms, a 4GL supporting tens of thousands of end-user seats (maybe even six figures, Marketing closely guarded the number...) Our users were in just about every non-real time field imaginable*. Oracle's Applications suite at the time was also built on top of Oracle Forms, and included things like supply chain, Sales and Marketing**, and HR, which very much "dealt with a day and a year".

Before that I worked in real-time systems, including supporting stuff like the developers of a missile test monitoring system***. And one of the interesting things is that they are often composed of multiple parts from multiple suppliers, and even if the top layer of software is calculating everything correctly, there can be a component buried deep in the stack that is not - and sometimes doing it wrong in hardware or firmware.

Compare also the GPS week rollover bug, which in 2019 caused problems for Honeywell's flight management and navigation software as well as certain atmospheric telemetry instruments, resulting in launch postponements.

* There were even some that acted as non-real time GUI configurators for a real time system such as industrial/commercial HVAC

** I recall the first time I was summoned to a meeting about the new "S&M application". Imagine my disappointment.

*** "...and then I thought, what if it's a bug in the compiler?"

Crypto-crasher Do Kwon admits guilt over failed not-so-stablecoin that erased $41 billion

JacobZ

erased $41 billion?

Wait, where did the $41 billion come from?

Or is this a case of $41B in imaginary money being created out of thin air by launching a cryptocurrency, and then the same $41B vanishing again when it all went pear-shaped?

I have a suspicion that the amount of *actual* money lost was a tiny fraction of that.

Spy school dropout: GCHQ intern jailed for swiping classified data

JacobZ

Re: re: How

Having Class A child porn on your personal computer seems like a pretty substantial risk for blackmail, no?

OpenAI model modifies shutdown script in apparent sabotage effort

JacobZ

Re: Bollocks

Ironically, the LLM company most frequently guilty of anthropomorphizing its models is... Anthropic.

JacobZ

Desperate cry for attention

I'll take Things That Never Happened for $1000, Ken.

I suppose it's possible that with sufficient nudging and the right series of queries somebody managed to get an LLM to do something remotely resembling what is described here. But anybody with the slightest knowledge of what LLMs are and what they do knows this is complete bollocks, just like every similar claim has turned out to be.

Artist formerly known as Indian Business Machines pledges $150B for US ops, R&D

JacobZ

Shambling zombie corpse

So the most recent great innovation the PR department can name is Apollo?

That's not a good look.

How to stay on Windows 10 instead of installing Linux

JacobZ
Coat

Re: Same here :)

So they're putting A1 on their chips?

Blue Shield says it shared health info on up to 4.7M patients with Google Ads

JacobZ

Blue Shield should be sued into oblivion

This is a massive HIPAA violation. Blue Shield customers should sue it into oblivion.

And then move on to all the other US health care "providers".

I put "providers" in scare quotes, because the entire business model of health insurance is not to provide insurance, but to deny it.

White House confirms 245% tariff on some Chinese imports not a typo

JacobZ
Facepalm

Idiots

Even if it was a typo, this gang of idiots would double-down on it rather than admit a mistake.

Microsoft: Why not let our Copilot fly your computer?

JacobZ

Solving the wrong problem, badly

In addition to all the other flaws mentioned upthread, this is entirely the wrong way to do automation. A much better approach would be to provide APIs to the underlying functionality that can be directly invoked by an automation agent - preferably an agent that does not purport to be AI - and ideally, upon which the human-centric UI is also built.

Driving the UI directly is the software equivalent of creating a self-driving car by taking an ordinary car and replacing the driver with a humanoid robot.

Microsoft walking away from datacenter leases (probably) isn't a sign the AI bubble is bursting

JacobZ

Re: "gigawatts" capacity ?

That's because (a) power (together with cooling) is the limiting factor in a datacenter, and (b) how much compute will be available is not readily predicted.

The computing capacity is neither a fixed quantity nor even a readily forecastable one. If you start a datacenter today, you don't really know what chips will be available when it finally comes online. And even once it does, the CPUs, GPUs, and whatever other PUs are in there are going to constantly evolve. They will also change depending on what mix of regular compute, AI compute, storage, etc. the datacenter is running, something that will also evolve over time.

So for all those reasons "compute capacity" is not a stable enough number to be useful.

Judge orders Feds rehire workers falsely fired for lousy performance

JacobZ

Or as Nixon attempted to claim, "it can't be illegal if the president does it". History disagreed.

Microsoft goes native with Copilot. Again

JacobZ

Re: users can ask questions about their PC, such as :

Thread over.

Alphabet achieves first $100B annual profit ... and sees its shares sink

JacobZ

Wot, no AI?

"Alphabet attributed the growth to enthusiasm for Google Search and YouTube ads" must be one of the most marketing sentences ever written. If Mr. Claburn managed to write that while maintaining a straight face, he is a stronger man than I.

However, I am genuinely surprised that Alphabet didn't attribute the growth to enthusiasm for mediocre LLMs, aka slop machines.

Windows 11 24H2 can run – sort of – in 184MB

JacobZ

Feeling old

My first reaction was "wow, that seems like a lot of memory".

And then I remembered what century this is.

It's been 20 years since Oracle bought two software rivals, changing the market forever

JacobZ

Make it so

My recollection (possibly faulty, it was a long time ago and a lot of beer has flowed under the bridge since then) was that a couple of years before all this kicked off, Ellison confidently predicted that the ERP and CRM space was inevitably going to consolidate, and that when the dust settled, Oracle would be one of the big winners.

When the market steadfastly refused to go along with Ellison's opinion, he set about making it happen himself with a series of acquisitions of decidedly questionable business sense. Oracle's actions triggered SAP to follow suit, thus "proving" Ellison's foresight.

BTW, in my experience one of the unalterable laws of software is that any project, product, or portfolio named Fusion is a hodgepodge of incompatible technologies that get along like a sack of cats and that Marketing is trying to throw a blanket over.

The NPU: Neural processing unit or needless pricey upsell?

JacobZ
WTF?

Under a rock

Apparently I've been living under a rock for a year because I have never heard of these things. And I thought I was reasonably current with tech.

Or perhaps I did hear about them, promptly filed them under Next Overhyped Thing, and completely ignored them since?

IBM's mainframe bubble bursts and growth stalls

JacobZ
FAIL

Prime

The most recent price increase for Prime combined with the lack of good, findable content caused me to drop it, and I'm far from alone.

The fact that they keep larding the bundle with additional features that have no value to me but presumably are factored into the price is not helping.

If it were possible to buy just the delivery service at a reasonable price I might go for that because I still occasionally order stuff I can't find elsewhere.

Smart homes may be a bright idea, just not for the dim bulbs who live in 'em

JacobZ
FAIL

Call me old...

...but exactly what problem are "smart homes" solving?

In my <number> years on this planet, I cannot remember a single fixture or fitting in my house of which I have ever thought, "if only I could turn on/turn off/adjust $X without getting up from the sofa."

The only possible exception from that is the "oh, shit, did I close the garage door?" moment when leaving on a long trip, and even that only requires remote control of that one piece of equipment, not synchronized color-changing light bulbs.

IBM quietly axing thousands of jobs, source says

JacobZ

Weird...

...how criteria like being too expensive, not a "digital native", too old to be flexible and understand emerging technologies, etc. never apply to C level positions.

I'll bet there are tons of people in India who could do their jobs just as well for far less money. They probably bought their MBAs from the same American universities too.

Ellison declares Oracle all-in on AI mass surveillance, says it'll keep everyone in line

JacobZ
FAIL

In Ellison's defense...

...

...

...

Nope, I've got nothing.
