* Posts by EricM

481 publicly visible posts • joined 2 Sep 2016

Page:

Anthropic writes 23,000-word 'constitution' for Claude, suggests it may have feelings

EricM Silver badge

Not really ...

>AI models like Claude need to understand why [...]

> we need to explain this to them rather than [...]

> [...] help Claude understand its situation, [...]

> [...] a genuinely novel kind of entity in the world [...]

> [...] one heuristic Claude can use is to imagine how a thoughtful senior Anthropic employee[...]

This document implies in many places that Claude is some kind of being. While many humans working with or talking to AIs develop that feeling, objectively it is not one. An LLM is a (large) collection of numeric values - the weights - that determine the execution path and ultimately the output of software running on hardware.

An LLM does not "understand" text, nor can it "know" or "imagine" anything. An LLM generates text based on its model weights, a context and a prompt. If an LLM were sentient, or able to "understand" explanations, things like hallucinations, [indirect] prompt injections or jailbreak prompts would not be possible, and we would not be discussing guard rails, model bias or lack of auditability.
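To make the "just weights, context and prompt" point concrete, here is a deliberately minimal sketch of what autoregressive generation boils down to. Everything here (the toy vocabulary, the hand-picked scores) is invented for illustration and bears no relation to Anthropic's actual models:

```python
import math
import random

# Hypothetical toy "weights": scores for which token tends to follow
# which. A real model has billions of parameters; the principle is
# the same - numbers in, next token out.
WEIGHTS = {
    ("the", "cat"): 2.0, ("the", "dog"): 1.5,
    ("cat", "sat"): 2.5, ("dog", "sat"): 2.0,
    ("sat", "down"): 3.0,
}
VOCAB = ["the", "cat", "dog", "sat", "down"]

def next_token(prev: str) -> str:
    """Pick the next token from a softmax over the stored scores."""
    scores = [WEIGHTS.get((prev, t), 0.0) for t in VOCAB]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(VOCAB, weights=probs)[0]

def generate(prompt: str, steps: int = 3) -> str:
    """Autoregressive loop: append one sampled token at a time."""
    tokens = prompt.split()
    for _ in range(steps):
        tokens.append(next_token(tokens[-1]))
    return " ".join(tokens)
```

No knowledge, no understanding - just arithmetic over stored numbers, scaled up enormously.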

In the end, this "constitution" thing is just marketing.

Palantir CEO claims AI will mean western economies won't need immigration

EricM Silver badge

Re: Immigration

> Healthcare and Social Care - AI will make little to no difference

I think you need to take into account that the labor market will crash once AI is actually able to do what the AI salespeople have been selling for two years now.

If it actually starts working, AI will make sure that fewer people are able to afford healthcare or social care ... so, yes, there will probably be an indirect and unintended difference.

Same for hospitality. Most people need an actual job to be able to afford this.

IT/Tech/Engineering/Teaching:

These are - in the view of those CEOs - the jobs AI should and will take, so they can let go of their current, expensive workforce.

And I don't think those are crystal balls; this is simple wishful thinking, driven by short-sighted personal greed combined with a lack of imagination (or a lack of interest) regarding the social and long-term economic effects.

Meta retreats from metaverse after virtual reality check

EricM Silver badge

I think you misread the OP ...

Trump says Americans shouldn't 'pick up the tab' for AI datacenter grid upgrades

EricM Silver badge

Re: Consumers always pay

True, but in this case the AI customer that keeps using an image generator will pay more, instead of mom & pop running their fridge also paying to keep prices low for said AI customer.

So raising the cost of powering AI instead of raising electricity prices in general would in fact align the cost increase better to cause and effect.

However, I don't think that firewalling electricity cost increases caused by AI will just *somehow* happen, i.e. without enforced government intervention.

And I fully do NOT expect the current U.S admin to intervene in a way that could be painted as "anti-business"...

EricM Silver badge
Facepalm

"I never want Americans to pay higher Electricity bills because of Data Centers."

Free markets and the very basic concept of supply and demand be damned?

Once you introduce a new, very big consumer to the already strained electricity market, the prices will go up for all market participants.

Economy 101...

Yeah, I know, I shouldn't be surprised ...

Linus Torvalds tries vibe coding, world still intact somehow

EricM Silver badge

"as long as it isn't for something important"

Agree with that statement and I'd like to add "as long as you don't care too much how it looks and functions".

This kind of echoes the first post in this thread.

You approach what you intended 50-70% on the first try, reach maybe 80% after a few edits/refinements, but achieving 100% does not really work out.

Instead, images generated by AI tend to get uncanny with repeated generation and more complex prompts, and text/code becomes more unreadable and more confused with every edit, bugfix and change.

For content generation, the session context growing beyond the context window pretty fast is still a very basic, unsolved problem.
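For illustration only, here is a naive sliding-window sketch of that problem. The window size and the "tokens" are invented; real models tokenize differently and use smarter eviction, but the effect is the same: the earliest instructions silently drop out of a long session.

```python
CONTEXT_WINDOW = 8  # max "tokens" our hypothetical model can attend to

def build_context(history: list[str]) -> list[str]:
    """Naive sliding window: keep only the most recent tokens."""
    return history[-CONTEXT_WINDOW:]

# A long refinement session: the original instruction comes first ...
history = "fix the bug but keep the old API".split()
# ... then follow-up edits push the session past the window.
history += "also rename the helper and add tests".split()

context = build_context(history)
# "fix" - the original goal - is no longer visible to the model.
```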

So iteratively approaching a desired, detailed state of the final product (image, text, code) through multiple generations, refined and increasingly complex prompts, or a lengthy change-this/change-that session does not really work well with today's generative AI.

So if you have a very detailed idea on what your code should look and do, vibe coding is probably not the right tool ...

Cloudflare CEO threatens to make the Winter Olympics a political football after Italy slugs it with a fine

EricM Silver badge

Re: the ugly american face of SaaS

We need EU alternatives for everything. Yesterday.

Agree. And all American Big Tech be damned, etc.

But just one thought to that:

Should an EU alternative service happily implement the requested over-blocking (IP + DNS resolver), based solely on an accusation or suspicion of piracy, with a 30-minute compliance requirement but no 30-minute contest mechanism, at the request of a single sports league (football) - on a worldwide basis?

If yes: When they are done blocking all addresses/ranges, not much of the Internet might be left, with the notable exception of the sites of the big copyright holders...

Even though the form and tone of Cloudflare's complaint are extremely stupid:

Piracy is a problem, but overreaching power for private rights holders without meaningful legal oversight is also a problem.

Palo Alto Networks security-intel boss calls AI agents 2026's biggest insider threat

EricM Silver badge

Re: The pope is a catholic and other known facts !!!

A CxO might be better able to understand the choice between "pocketing the profits" and "living through security hell on so many levels" as laid out in the interview than - admittedly - from reading one of my risk assessments.

The number of potential, easy-to-understand problem descriptions in this interview is impressive, all while avoiding the standard AI-doom messages.

So I guess it might actually work.

EricM Silver badge

Re: The pope is a catholic and other known facts !!!

The article describes an IMHO sensible opinion by a knowledgeable person on the topic. Not sure why this should be a bad thing.

To us techies "in the trenches" these opinions might seem obvious. However, if people like us warn against AI, management often seems to assume that we are just raising false alarms out of fear for our jobs.

So her well-described arguments may be taken more seriously in relevant management levels.

When the AI bubble pops, Nvidia becomes the most important software company overnight

EricM Silver badge

Article: "...but you can also do other things with GPUs"

True, but the world does not need that many GPUs to meet that "other" demand, compared to the crazy spending level of 2024/25 on AI-related projects.

Remember: most of the fields in need of GPU power are more or less closely related to science, which means that in the U.S. of 2025 they are looking at much reduced funding.

So, not that many players will be pouring hundreds of billions into mechanical stress calculations, fluid dynamics, protein folding or weather prediction.

Still, new assets will find a use for sure, in science, in R&D.

At lower prices, in lower volumes. However, Nvidia will probably be fine (regarding their hardware sales, not necessarily regarding their circular investment deals with many AI companies ...).

On the other hand, used GPUs and AI accelerators, which are already burned through to an unknown extent in some existing AI datacenters, will probably rot on the shelves after the burst, unable to earn back their interest.

You don't need Linux to run free and open source software

EricM Silver badge

As a hypervisor, we like Oracle's VirtualBox. As we have explained before, the only licensed part is the Extension Pack, and the hypervisor works fine without it. Avoid that, and you're safe.

Agree on VirtualBox. However, as with everything else from Oracle, keep a close eye on any future license changes/updates/"clarifications"/..., as those might very subtly change what can be considered "safe" use of their stuff and what might put you in the crosshairs of one of their licensing enforcement teams.

Spy turned startup CEO: 'The WannaCry of AI will happen'

EricM Silver badge

Re: ’Tis Perfectly Normal and Just Otherworldly Progress. And Enjoy the Fact IT is not for Losers ‽

> And such virtual attacker actions [...] guarantee successful [...]takeover [...]

No, they don't.

You just should not use AI for defense, just for offense.

Defense is still best served with enforcing architectural patterns like separation, segmentation, reduction of attack surfaces, overall simplicity of solutions, etc.

Merry Christmas, indeed :)

EricM Silver badge

That's the point: no amount of "AI" will protect you from a zero-day in your network or app stack.

Attackers can tolerate a near-100% failure rate for any individual attack; their AI just needs to succeed a single time. Even an endless stream of failed attacks does not put the attacker at risk.

Defenders, on the other hand, not only need to succeed in defending against 100.0% of attacks, they also must not block legitimate traffic. Every failed defense (over- or under-blocking) is a risk for the defender.

So the uncomfortable truth is that, due to this asymmetry between attack and defense, the emergence of AI primarily helps the attackers.
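The asymmetry can be put into numbers. The per-attempt success rate and the attempt count below are invented for illustration, but they show why a near-total failure rate is perfectly fine for the attacker:

```python
def attacker_success_probability(p_single: float, attempts: int) -> float:
    """Probability of at least one success across independent attempts."""
    return 1.0 - (1.0 - p_single) ** attempts

# Assume each AI-generated attack succeeds only 0.1% of the time.
# After 10,000 cheap automated attempts, the attacker is almost
# certain to get in - while the defender had to win every round.
p = attacker_success_probability(0.001, 10_000)
```

With those numbers, p comes out above 99.99%, which is the whole point: automation makes the number of attempts essentially free.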

NASA tries savin' MAVEN as Mars probe loses contact with Earth

EricM Silver badge

Safe mode will change the orbit as it involves firing the thrusters

If that is true, MAVEN would be the first craft to employ that kind of safe mode.

AFAIK safe mode typically affects only the _attitude_ of an affected probe (potentially by firing the thrusters, if reaction wheels are unavailable) to point the high-gain antenna at Earth in preparation for expected commands sent by engineers to download debug logs and the like, but stops all other activity.

Would love to hear more details about potential orbit changes caused by safe mode.

The future of long-term data storage is clear and will last 14 billion years

EricM Silver badge

Not necessarily.

Open-source the spec, including laser properties, wavelengths, encoding/positioning dependencies, etc., refrain from designing in DRM enshittification, and you are good to rebuild devices as needed.

Similar to our current ability to recreate tape devices from the 70s or floppy disk drives from the 80s.

This is just not done that often, as most media from that time is already gone.

Trump wants to turn it on again with 'Genesis Mission' for AI in science

EricM Silver badge

Writing from Europe, it will be interesting to explore the effects...

of weakening traditional science institutions massively while pouring even more silly amounts of money into AI/Science slop.

At least we will not be directly affected by the result most probably being "nope, doesn't work as intended...".

Chinese spies told Claude to break into about 30 critical orgs. Some attacks succeeded

EricM Silver badge

Confusing ...

A company that lets crims use weapons from its arsenal to attack the public tells said public that the fact that its weapons do not work reliably is a good thing?

Because not every attempt to misuse the weapon scored a kill?

Fortytwo's decentralized AI has the answer to life, the universe, and everything

EricM Silver badge

So the answer to "life, the universe, and everything" regarding AI now is: Don't only scam investors, also scam end users into paying for your energy and hardware bills?

Saying that this scheme addresses "a practical issue: the shortage of centralized computing resources" is just another way of saying that running AI inference requires much more CPU/GPU power and energy than traditional IT - a very basic issue that makes AI impractical and too expensive for most tasks to be economically viable.

Shifting the burden of providing CPU/GPU and power to decentralized end users will not solve this underlying, expensive problem.

They are literally trying to cloak this problem behind an S.E.P. field (https://hitchhikers.fandom.com/wiki/Somebody_Else%27s_Problem_Field).

YouTube's AI moderator pulls Windows 11 workaround videos, calls them dangerous

EricM Silver badge

Re: I only use Windows for work

In the context of this news: we should not wait until AI moderation also declares running Linux dangerous and starts removing vids explaining how to do so.

Or until _someone_ pressures whichever video or web platform to configure their AI that way.

AI is making Google and Meta even stronger and richer

EricM Silver badge

I thought this was the actual reason for AI startups still getting "investments"..

The big cloud providers, including AWS (plus Nvidia) invest in AI shops, so they have the money to spend on AI Cloud Services and AI hardware.

Cory Doctorow and Ed Zitron have some very good write ups on that topic.

https://pluralistic.net/2025/10/29/worker-frightening-machines/#robots-stole-your-jerb-kinda

https://www.wheresyoured.at/costs/

The big question is: how big is the crater going to be once this growing circular dependency hits the bump in the road, namely customers no longer falling for the hype of AI wonders (like, for example, deterministic responses) coming "real soon now"?

AI browsers face a security flaw as inevitable as death and taxes

EricM Silver badge

In 1990 this was a joke poking fun at DOS "security"...

I remember getting mails at the time along the lines of:

"Hi, I am a destructive worm designed for Unix systems. Please send a copy of this mail to everyone in your address book and then delete all your files."

And 35 years later, computers commanded by the planet's most advanced IT systems actually start to fall for this kind of "attack" ...

Come on ...

Chatbots parrot Putin's propaganda about the illegal invasion of Ukraine

EricM Silver badge

Agree. Neural nets are trained on tokens (text, images, ...). There is no concept of "facts" or "truth" in AI training, just weights between tokens. All trained text is processed equally ...

If one trains (also) on disinformation created by the Russian government by scraping most of the Internet, the LLM will output that disinformation; it's just one more text that mentions a given topic. The disinformation might even be more optimized for AI consumption than standard content, to create more leverage.

And disinformation regarding Ukraine is just a very obvious example.

As is Russia as creator of disinformation optimized for AI.
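The "all trained text is processed equally" point can be shown with a deliberately oversimplified toy: a bigram counter, nothing like a real transformer, but the frequency effect is the same - whichever phrasing dominates the scraped corpus dominates the output. The corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: list[str]) -> dict:
    """Toy 'training': count which word follows which."""
    follows = defaultdict(Counter)
    for text in corpus:
        words = text.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def most_likely_next(model: dict, word: str) -> str:
    """Emit the most frequent continuation seen in training."""
    return model[word].most_common(1)[0][0]

# Hypothetical scraped corpus: coordinated disinformation simply
# outnumbers the accurate texts.
corpus = ["the invasion is illegal"] * 3 + ["the invasion is justified"] * 7
model = train_bigrams(corpus)
# The toy model now parrots the majority phrasing after "is".
```

No weighting for truth, only for frequency - exactly the property a disinformation campaign exploits.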

Sora makes slurfect deepfakes of celebs spewing racial epithets

EricM Silver badge

These are only foreshadows of things to come...

Celebs spewing racial slurs on video will not do any damage to the celeb, as everybody assumes it's fake - but only because it's too obvious to be real.

Imagine, on the other hand, a video of yourself (let's assume you are not a celeb) in which you seem to confess tax fraud, DUI or drug use/abuse to a third person. Bonus points if the video is made to appear as if it was filmed by a hidden cam (out-of-focus scene, rapid camera movements, bad sound, etc.).

Depending on your social and/or professional environment generated videos can become as damaging as the real thing. Kompromat on demand. Extremely cheap.

And as long as the judicial system in most countries continues to trust videos as evidence, it might even lead to convictions in RL.

Grounded jet engines take off again as datacenter generators

EricM Silver badge

I did, but neither the first nor this second post showed up that evening; "submit" resulted in timeouts.

I just was surprised to see these posts at all...

EricM Silver badge
FAIL

What is doing more damage to humanity?

Repurposed jet engines that heat the environment by releasing excessive CO2 to generate power to be fed into AI models?

Or the unreliable advice generated by that "AI" model, which is trained on all the things they could scrape off the Internet?

Well, together those two will probably get the job done - which obviously is the destruction of the environmental _and_ mental basis of humanity.

The $100B memory war: Inside the battle for AI's future

EricM Silver badge

Re: the real chokepoint isn't processing speed – it's memory bandwidth.

A much more relevant "chokepoint" with regard to AI, IMHO, is "meaning of data" and "interpretation of results".

People are so quick these days to "vectorize" data, feed it to an AI and ask for "reasoning" answers in free-form language that both the input and the output are highly in need of interpretation.

The data, its processing, unintended (and ultimately meaningless) coincidences in the data, over-interpretation of those coincidences, or the semantic variability of the natural-language query can all introduce deviations in results that are not easy to prevent. Interpreting answers in the context of the provided data (and the limits the data imposes) is another can of worms.

All these problems require well-educated manpower to analyze and solve.

Add to that the natural tendency of AI to make up things it has no information about, and this approach in general will at best show mixed results at high cost.

In my personal experience, contextual and semantic problems are much harder to solve than the bare-metal processing problems, as solving them scales much worse than the actual technology.

Larry Ellison's latest craze: Vectorizing all the customers

EricM Silver badge

Re: Bubble

It will burst soon, I guess.

> Big Red wanted to ask its "reasoning models" what products customers are likely to buy in the next six months.

Based on the Oracle comms I see, this works about as well as the "customers looking at this product also bought *this shiny new thingy*" one is used to seeing on Amazon and the like, or the ton of refrigerator ads you see for three months _after_ you bought a new refrigerator (and forgot to clear the cookies out of your browser).

So they basically achieve the same, often pointless, outcome by different means, i.e. by "vectorizing customer data" - with much higher investment and much higher energy cost for the AI to process their vectors. And with much higher environmental impact.

I must admit it's a somewhat cool approach and all that, but pretty pointless from an economic and outright dangerous from an environmental perspective.

AI startup Augment scraps 'unsustainable' pricing, users say new model is 10x worse

EricM Silver badge

Re: If you base your calculations for your "business"

Agree.

The companies probably did not intend to hurt their users, but this is the necessary effect of AI being _much_ more pricey to set up and run than their marketing tries to make you believe.

Every time you see a "$XXX billion investment in new AI infrastructure" headline, ask yourself what part of that infrastructure (plus the return on their "investment") you will be expected to pay for.

Either through taxes, because your government has been hooked on AI services; or through increased prices for products from companies that have fallen for AI; or, as in this case, directly as a consumer of AI coding tools.

Benioff retreats from idea of sending troops in to clean up San Francisco

EricM Silver badge

Re: Salesforce has to pay for “hundreds of off-duty law enforcement officers”

Yes, in other countries private security services also exist, but are typically not employing off-duty law enforcement personnel.

So police officers in the US seem to be paid so little that many of them need a second (or third) job.

Managers are throwing entry-level workers under the bus in race to adopt AI

EricM Silver badge

They will learn - just as we did, having been taught "structured programming" with PASCAL and K&R C on modern OSes like Minix (and DOS) in the late '80s.

In my first job I was expected to master FORTRAN, JCL and some pretty specific OSes like HP RTE on HP 3000 or CP-6 on DPS-8...

This also felt like stepping back 20 years through a time tunnel.

Microsoft lets bosses spot teams that are dodging Copilot

EricM Silver badge

Reading the comments here, on ArsTechnica and Heise seems to indicate...

... AI is dead in the water already.

The technical level is not only no longer impressed, but sees AI mainly as a risk.

Not for taking over the world, but for shooting them in the foot...

And I see the first signs that leaders are starting to listen and to notice the lack of real results from AI.

Microsoft giving boneheaded management a tool to "monitor usage" of AI by their minions is IMHO a sign of panic...

AI gets more 'meh' as you get to know it better, researchers discover

EricM Silver badge

Re: Chatgpt was useful to me

Yes, LLMs can be useful to experts and non-experts alike, just as a search engine occasionally turns up an interesting link discussing the problem at hand from a new angle or with new information.

However, that is not the idea that makes LLMs seem worth billions of dollars...

The billion-dollar use case, replacing white collar labor with AI, is currently about as viable as replacing those seats with Google Search licenses.

AI companion bots use emotional manipulation to boost usage

EricM Silver badge

Re: "emotional manipulation"

Yep, resorting to "AI companions" seems to fall into the same category as treating your own injuries Rambo-style with some recently used hunting knives and dirty bandages.

The more one is in need of emotional support, the less these tools seem to be advisable.

FCC kicks off 'Space Month' with vow to fast-track satellite licensing

EricM Silver badge

One lesson to be learned from the current U.S. administration:

If you can't win, just invent your own reality and talk about it, as if it were real.

As Trump shows every day, this works just well enough for many idiots to follow him along ...

Big money is nervous about AI hype, but not ready to call it a bubble

EricM Silver badge

Re: dot com crash

Funnily enough, this time around housing markets look like a pretty safe haven to weather out the AI "BOOM" ...

EricM Silver badge
Thumb Up

Re: It's like heating a pressure cooker beyond its design temperature

Thumbs up, mate.

Unfortunately, I have to earn my money in this industry for some more years - so I would clearly prefer it not to self-destruct, crushed by AI-related debt, before my retirement :)

EricM Silver badge

Re: It's like heating a pressure cooker beyond its design temperature

Not sure... never underestimate the greed of already insanely rich people...

Especially if they were early adopters of "vibe investing" ....

EricM Silver badge

Re: dot com crash

> We might see the same with AI subscriptions going much higher from the ones that manage to survive. ?

AI from the surviving operators will (need to) be much more expensive. At the current rates, AI operators lose money on every customer.

So AI will become expensive - maybe too expensive to replace all the jobs it currently promises (or threatens, depending on your wealth) to take over.

EricM Silver badge

It's like heating a pressure cooker beyond its design temperature

The longer it will take to "pop", the louder the bang will be...

The AI bubble differs in one respect from earlier economic bubbles: for the past several years, the path companies have taken to try to make AI issue at least somewhat "correct" answers seems simply to be "make it bigger".

This leads to the need for accelerated spending across the AI supply chain (accelerators and data centers, both of which age fast and require short amortization time frames, plus lots of energy), forcing the industry onto an exponential financial (and environmental) trajectory - toward the "pop".

Just notice how often you nowadays hear the word "trillion" from companies yet to make _any_ profit, trying to make the already sunk cost look reasonable.

The longer this sunk cost fallacy is maintained, the harder the crash will become.

I'd be surprised if there were not a major correction before Q2/2026.

Ladies and Gentlemen, fasten your seat belts, please.

AI: The ultimate slacker's dream come true

EricM Silver badge

Maybe it's just me ...

But cleaning up after AI slop on many levels and bringing overly optimistic expectations of AI prototypes in line with reality feels like it creates more work or makes existing work harder.

OK, only if you care about actual results instead of being happy to have new non-deterministic AI components built into a new business or technical context just for the sake of it...

AI chatbots that butter you up make you worse at conflict, study finds

EricM Silver badge

Agree. And let me add that not only the damage to the economy but also the number of ways in which AI damages society keeps growing.

Google goes straight to shell with AI command line coding tool

EricM Silver badge

I must admit I experience a lowering tolerance for "AI"-related news that tells me another area/interface of computing has been infected by these sometimes-functioning and not completely understood environmental liabilities. This one is even more disturbing than forcing "AI" on every Chrome user...

Could we please speed up that burst of the AI bubble a bit?

Pentagon decrees warfighters don't need 'frequent' cybersecurity training

EricM Silver badge

Re: Wars that the US has won ?

> The 1861-65, part 2 of that arguably first US civil war was their greatest victory

I fully expect the current regime of racist and misogynistic US "leaders" to understand themselves in the tradition of confederate slave holders.

As such they probably would say they actually lost that specific civil war, or - given their desire to play the victim - probably more like "Victory was stolen by Union forces".

EricM Silver badge

Yep, but this is obviously completely unrelated to the secretary of war (on logic, morals and honor) having missed (or not having understood) his own cybersecurity training...

But hey, that's what you get for voting for an idiot who fills his cabinet with boneheaded TV personalities, used-car salesmen and yes-men and -women.

Salesforce pickin' up good vibrations

EricM Silver badge
FAIL

I'd like to officially propose a name change to Dunning-Kruger-Coding

Vibe coding is the latest rehash of the low-code/no-code ideas of the past, which all failed miserably or were confined to areas where they cannot do that much damage (e.g. MS Office VBA).

The only difference introduced by vibe coding is that you now literally do not need to understand anything about the complexities of your environment, yet can have the machine generate code anyway. What the effects and consequences of that code are is often unclear to the end user, especially if the code is added to existing environments of which neither the user nor the AI has full context - but hey, they have created code that does something.

It can in fact be likened to the Dunning-Kruger effect: the less you actually know, the more power you assume you gain by "vibe coding", and the less able you are to estimate the risks your "results" pose.

Letting people "work" this way on a shop floor, in a chemical plant or in a mechanic's workshop would lead to injury and death.

In IT it will probably "only" lead to wasted hours, lost data, security leaks and downtimes. At least in most cases.

Cybercrims claim raid on 28,000 Red Hat repos, say they have sensitive customer files

EricM Silver badge

Re: What hope is there for average Joe?

Basically, in every instance you mentioned: complexity killed.

Therefore, to "do better cyber security":

Reduce complexity and put "keep it simple" back on the priority list.

Avoid complex and convenient all-in-one solutions that promise to integrate everything with everything "seamlessly" (example: Office365, Entra ID, Azure).

Avoid complex runtimes that could scale (usually) far beyond your needs but introduce their own set of problems (example: Kubernetes).

Avoid snake oil security products that promise to make you secure by just installing them (example: _every_ big Cybersecurity vendor).

Deploy only what is essentially needed, and keep the number of technology dependencies to a minimum. Manage your SBOM by starting to reduce it, with priority.

Deploy only solutions your team is able to fully understand.

Don't "manage" the remaining, now fully understood security problems, solve them.

A reasonable level of security is not "nigh impossible", but it can be damn inconvenient... You act more slowly, at higher cost, and deploy fewer new "solutions" that are less "integrated"...

An organization needs to accept that trade-off - instead of pushing for the deployment of the latest and shiniest tools and gadgets.

Google's dev registration plan 'will end the F-Droid project'

EricM Silver badge

Now Google starts _exactly_ the behavior that made me avoid Apple ...

@Google: no, you have no business telling me what I can and cannot download and run on my device. If I decide to download an APK from whatever source, I make that decision, not you.

That said, the examples where you cannot load/run state- or bank-mandated apps on non-certified Android clones like LineageOS are already numerous in the EU.

Will be interesting to see how the EU will react to Google enforcing a Gatekeeper position for itself between the EU member states and their citizens.

SIM city: Feds say 100,000-card farms could have killed cell towers in NYC

EricM Silver badge

Re: "a pair of European youths"

If they remotely operated a SIM booked into a US carrier, calls would appear as national calls.

That's actually what these SIM farms are for ...

NASA panel fears a Starship lunar touchdown is more fantasy than flight plan

EricM Silver badge

Technically, landing a man on the moon, then having his spacecraft tip over and shred him to bits in the ensuing explosion, will probably not be regarded as a fully successful "landing".

So getting only the very first bit - the touchdown - working might not be good enough in this case.

Page: