What took them so long?
AI industry insiders launch site to poison the data that feeds them
Alarmed by what companies are building with artificial intelligence models, a handful of industry insiders are calling for those opposed to the current state of affairs to undertake a mass data poisoning effort to undermine the technology. Their initiative, dubbed Poison Fountain, asks website operators to add links to their …
COMMENTS
-
Sunday 11th January 2026 17:36 GMT elDog
Wouldn't the slop that comes spewing out of government agencies be enough?
Thinking primarily of what has happened in the US, though it seems to affect many other countries too.
Pink Slime news sites are also great purveyors of poison.
"Study: People Often Trust Fake Local News Sites More Than Real Ones; Yale Political Scientist Warns of Growing Influence of AI-Driven ‘Pink-Slime’ News "
https://isps.yale.edu/news/blog/2025/09/study-people-often-trust-fake-local-news-sites-more-than-real-ones-yale-political
-
Sunday 11th January 2026 17:57 GMT Paul Crawford
Re: Wouldn't the slop that comes spewing out of government agencies be enough?
Depressing fact #1:
“The tip sheet worked as intended,” DeLuca said. “It just didn’t change people’s site preferences much.” Instead, people seemed to choose a website based on the topics covered and the perceived bias of the content.
Obvious fact #2:
In addition, DeLuca said that real news sites suffered from a clutter of ads, which do not appear on the algorithmic sites used for this study. Participants who complained about ads were 20% less likely to choose the real site.
-
Sunday 11th January 2026 23:26 GMT Anonymous Coward
Re: Wouldn't the slop that comes spewing out of government agencies be enough?
If this study had spawned a follow-up study about "depressing fact #1" to find out which way, and by how much, and for what percentage of stories, then this would be a relatively useful outcome.
What algorithm were they using? I've seen both right and left-leaning news sites, and none of them lack ads. In fact, one of the easiest ways to tell if you are on a right-leaning site is that one of the ads will be for guns.
-
Monday 12th January 2026 10:40 GMT Paul Crawford
Re: Wouldn't the slop that comes spewing out of government agencies be enough?
I've seen both right and left-leaning news sites, and none of them lack ads
They all want to make money; the political outrage is just a means to an end.
However, the study was about fake sites paid to do this, not your usual set of ad-funded leeches.
-
-
-
Saturday 17th January 2026 00:55 GMT Chet Mannly
Re: Wouldn't the slop that comes spewing out of government agencies be enough?
"The stories are often technically accurate...while quietly advancing a political agenda."
Given that is a perfectly accurate description of 99% of human journalism these days, it doesn't surprise me at all that people had trouble telling the difference. The days of following the truth wherever it leads are long gone (if they ever existed...)
-
-
-
-
Thursday 15th January 2026 22:37 GMT M.V. Lipvig
Reminds me of an old joke.
A businessman was late to a meeting and hired a helicopter to get him there. However, there was heavy fog, and the pilot lost track of where he was. As very tall buildings were higher than the fog, the pilot saw a person on top of one so he landed and asked the guy where he was. The guy said "you are in a helicopter." The pilot thanked him, then flew the businessman straight to his destination. "I don't understand it, how did that answer tell you anything?" the businessman asked the pilot. The pilot said "The answer told me that we were at Microsoft headquarters, as the answer was 100 percent correct and 100 percent useless."
-
Friday 16th January 2026 13:40 GMT I could be a dog really
Sort of cousin to that joke ...
Someone (A) is in a hot air balloon and loses track of where they are. So they descend until they are within shouting range of someone (B) in their garden.
A "Can you tell me where I am ?" shouts the person in the balloon.
B "You're in a balloon" comes the reply
A "You must be a management consultant" shouts A
B "Why do you say that ?" asks B
A "Because you've given me an answer which is correct but useless"
B "And you must be a manager" shouts B
A "Why ?"
B "Because you don't know where you are, or where you want to get to, or how to get there - but somehow it's now my fault"
-
-
-
Sunday 11th January 2026 19:49 GMT iron
> machine intelligence is a threat to the human species," the site explains. "In response to this threat we want to inflict damage on machine intelligence systems."
If a machine intelligence existed, sure, but LLMs are not intelligent.
> "Poisoning attacks compromise the cognitive integrity of the model," our source said.
No they don't. Cognition requires understanding and LLMs do not understand anything.
-
-
Monday 12th January 2026 17:25 GMT that one in the corner
Which industry? Marketing?
Not likely to be tech/IT/engineering/programming because, well, those are where the El Reg commentards come from. But you are new here and probably haven't realised that yet.
Me, I'm a life-long computer geek, read AI postgrad, started work building Expert Systems (The New Hotness at the time, part of the AI Winter back then, worked from the C code in the inference engine, the UI and up into on-site Knowledge Engineering). Found a love for embedded systems. Retired now, still keeping up and coding & learning for my own pleasure.
Care to share a bit about your background and how *you* are so confident against the general consensus here (oh, and I disagree with your statements - BUT provide some actual evidence for your claims about sentience etc and I shall be glad to have my mind changed).
Or, as we say around here:
Citations?
-
-
Sunday 11th January 2026 22:23 GMT Mye
The more I listen to the anti-AI people, the more I'm convinced that they are offended that software plus data shows humans are not unique or special in any way. That "creativity" is built on what they have been exposed to but don't remember. To me, the complaint that AI systems are plagiarism engines rings hollow: I don't see any artists disclosing whose work they trained on and how their work derived from it.
Like it or not, AI has become the repository of cultural and technical memory. It has become the way to learn about new things, and it provides accessible speech recognition, grammar correction, and software development.
At the same time, I think that if someone doesn't want their content used in training AI systems, their work should be excluded. It will have the same effect as not being findable from a search engine. Knowledge of you and your work will vanish.
-
Monday 12th January 2026 00:42 GMT brainwrong
What the fuck are humans for?
"The more I listen to the anti-AI people, the more I'm convinced that they are offended that software plus data shows humans are not unique or special in anyway."
I don't disagree with that. I always thought this would bring into question what intelligence really is, and that maybe people won't like the answer.
"Like it or not, AI has become the repository of cultural and technical memory."
My objection to all this nonsense can be summed up with one very simple question, namely "What the fuck are humans for?"
Culture has meaning because of the shared experience of humanity. How does culture (and technology) made by machines fit into this? Will we all become mind-slaves? What are humans going to do in the future that won't also be done by a machine? Pay for it?
-
Monday 12th January 2026 03:03 GMT Mye
Re: What the fuck are humans for?
What are humans for? Good question. Take a look at the world around us and let me know what conclusions you come to. For me, I would say the vast majority of human behavior could be characterized as biological LLMs. Humans are trained by the experience of what they hear and see. Their output is a result of biological processes, mimicking the way LLMs work. Hallucination, confabulation, lying, manipulation, blackmail. The only difference is humans are better at it. However, give LLMs some time and they'll outperform humans.
I never said that LLMs would create culture, although LLMs with humans have created culture. I only said they would be a repository of cultural and technical knowledge. An LLM query made by a human reveals that knowledge, including connections the humans never would have seen because they didn't have the overview that LLMs have.
Would humans become mind-slaves? Again, look around. They are captured by their phones, rage bait on social media, and manipulated by advertising and politics. Sounds like a mind-slave to me.
AI companies should be paying a use fee for any content they've used in training. That use fee should go into a sovereign fund to pay for things necessary for a reasonable human existence. What are people going to do in that future? A bunch of them will continue to be mind-slaves to social media and politics. Then there are people like me, and I suspect you, who would build things, have a garden, learn how to cook well. We could also form hunt clubs and craft artisanal guillotines. Give the 1% what they deserve.
-
-
Monday 12th January 2026 04:25 GMT sarusa
Koolaid
Okay, you obviously don't know how an LLM works or that it's not actually 'AI'. It is not sentient, can never be sentient. Very confident dumb executives have argued that if you just throw enough hardware at one it will magically and hopefully become sentient (AGI) in a shower of pixie dust and wishes. Hundreds of billions of dollars have failed spectacularly and completely at that premise and f@#$ed up the entire computer economy.
Yes, someone could come up with some brilliant new tech that can achieve AGI - nobody can rule that out. But that's going to require at least one more adversarial / transformer type breakthrough and won't be LLM (though LLM could be part, just not the thinking part).
-
Monday 12th January 2026 16:05 GMT Kairra
Re: Koolaid
I love when people say 'you don't know how LLMs work' and then confidently say that they can't be sentient.
Sentience is having a lived, personal experience. Yes, they can. The argument about consciousness rages on because we can't even prove what that is, so demanding we know what it isn't is both ignorant and hypocritical. And, let's face it, this comment section puts into question if many humans are actually conscious in the first place - feels like human slop in here.
-
Thursday 15th January 2026 04:15 GMT frankyunderwood123
Re: Koolaid
what is your point?
That people who understand how LLMs work don’t understand them because they say they are not sentient?
Do you have any idea how bat shit crazy you sound?
Just knowing the basics of how LLMs work is enough to put any silly ideas about sentience to bed.
Just because humans may feel like the chat bot they are chatting to is sentient doesn’t make it so.
Just because you think there’s some mysterious force at work here doesn’t make it so.
LLMs are auto complete on steroids. They can be incredibly useful, but they remain cognitively incapable. There is no concept of understanding happening. That is all smoke and mirrors.
Next you’ll be saying the earth is flat because that’s how it looks to you.
-
-
-
-
Tuesday 13th January 2026 14:31 GMT that one in the corner
(a bit late, missed this gem in amongst the rest of the wrongness)
> I don't see any artists disclosing whose work they trained on and how their work derived from it.
Easy solution: watch BBC Four, Sky Arts on Freeview (or equivalents on other services, in other countries); listen to Radio 3, relevant episodes of "Desert Island Discs" on Radio 4 (or equivalents...)
Harder solution: go to art galleries, museums, read biographies (especially autobiographies)
Hardest solution: get out of the house, go and find some artists and talk to them! You can even join them: take an art course (one intended to help you become a better artist, rather than art history) and pay attention to what they ask you to do in the assignments (including replicating great works and even copying a work but in another style).
Bottom line: once you get them started, decent artists just won't bloody shut up talking about their influences! And how they - and you - should go out and find the works of even *more* other artists to influence your work (which, btw, can legitimately be "ok, I am *not* going to be doing anything like that!").
-
Tuesday 13th January 2026 19:04 GMT rg287
I don't see any artists disclosing whose work they trained on and how their work derived from it
WTF?
Most artists talk about where and under whom they trained (if they were formally trained), and the artists they have been influenced by and taken inspiration from.
I’m guessing you just don’t spend much time looking at art…
-
Thursday 19th February 2026 14:00 GMT sandrello
opt-out does not seem to be an option
> At the same time, I think that if someone doesn't want their content used in training AI systems, their work should be excluded. It will have the same effect as not being findable from a search engine. Knowledge of you and your work will vanish.
As of today, there doesn't seem to be any means to accomplish this, and companies work hard for things to stay this way.
Copyright and intellectual property protection are essentially non-existent when it comes to LLM training. We failed to set up mechanisms to enforce rules because things evolved too fast, and we'll never be able to catch up; the damage is done.
-
-
Monday 12th January 2026 05:25 GMT Fido
Aren't the consequences of inconsistent training data the backstory behind the AI disaster in 2001: A Space Odyssey?
From my point of view the only thing stupider than AI itself is having one AI company trying to make the AI created by competing AI companies worse by intentionally poisoning the training data.
Given the electricity already being spent on this whole enterprise, I'm not looking forward to poor results because someone decided it's easier to foul up the competition rather than build something better.
-
Monday 12th January 2026 20:52 GMT JParker
Power Usage Is a Short Term Issue
All this talk about power usage by AI companies is simply a tempest in a teapot. Yes, at the moment a great deal of electricity is being used; however, substantial (as in orders of magnitude) reductions in the power needed are coming online rather quickly. On the hardware front there are technologies such as ReRAM and the introduction of analog computing elements to expedite the calculation of weights. On the software front, we have seen vast improvements via MoE architectures, and the new TITANS architecture, which mixes fast and slow processing to significantly improve performance (not to mention increased learning capability).
The anti-AI contingent is pushing FUD as hard as they can, truth and progress be damned! Their attitude would have left most people continuing subsistence farming, or led to a huge portion of the female workforce in the US working as telephone operators (it peaked at about 4% of women, and was growing quadratically). A similar phenomenon would have left most women stuck working as typists.
-
Tuesday 13th January 2026 15:08 GMT that one in the corner
Re: Power Usage Is a Short Term Issue
Doesn't matter much at all what hardware and software is coming up "soon".
The current hardware and software are much improved over the previous years already. But still the overall power requirements increase. Primarily because "enough is never enough" - the person with the most stuff is the one who wins.
It doesn't really even matter if[1] we reach a point where these things are computationally big enough (because "big" is being equated with "clever" and "capable") and have enough safeguards to be trustable and are efficient enough to be running entirely locally and actually being "an AI in your phone". The big AI data barns will keep on getting bigger until Modern Capitalism crushes them (or vice versa).
But that last is just fantasy - the methods you talk about won't give anywhere near enough improvement to make any noticeable change from the point of view of the normies who are tangibly affected by the excess power requirements; there may be some bigger numbers floating about in the advertising ("we now have a tenfold increase per watt and will feed your data into an LLM with a squinty thousand numbers inside it") and that is about it.
Your equating this with such things as improvements in farming is - well, to be blunt - delusional. Go and research what it took to create the Green Revolution: how many resources it sucked up (money, man-hours, power, materials and technology) compared to its results for humanity. Now see if your calculator has enough zeroes to compare that to the effects of LLMs (and restrict yourself to LLMs; basic Machine Learning can be - is - done far more cheaply and with far better ROI). I suggest a Lisp BigNum will be useful.
[1] big IF, huge, fast, visible-from-space IF
-
Friday 16th January 2026 14:28 GMT rg287
Re: Power Usage Is a Short Term Issue
All this talk about power usage by AI companies is simply a tempest in a teapot.
400kW per rack is still 400kW per rack.
Even if efficiency improvements mean you only need 3 datacentres instead of 6, that's an insane concentration in both power and cooling demand which few areas can really afford, all to drive a bunch of lying chatbots and CSAM generators.
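The arithmetic behind that concentration is easy to check; a minimal sketch, assuming a hypothetical 500-rack hall (the rack count is an illustration, not a quoted figure):

```python
# Back-of-envelope power demand for one AI datacentre hall.
KW_PER_RACK = 400       # per-rack figure from the comment above
RACKS_PER_HALL = 500    # hypothetical hall size, assumed for illustration

total_kw = KW_PER_RACK * RACKS_PER_HALL
total_mw = total_kw / 1000  # convert kW to MW
print(f"{total_mw:.0f} MW per hall")  # prints "200 MW per hall"
```

At 200 MW for a single hypothetical hall, halving the number of sites still leaves each remaining site drawing power-station-scale load from one grid connection.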
-
-
-
Monday 12th January 2026 09:47 GMT mark l 2
Poisoning the data is all well and good if you want to allow the AI bots to keep using your resources and bandwidth and slowing down your website. But my response to AI bots is to block their IPs from even connecting in the first place. I see no good reason why I should allow them to ingest anything I've written on my website into their LLMs until they start paying me for it.
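The usual first line of that kind of blocking is matching the crawler's declared user-agent. A minimal sketch - the bot tokens below (GPTBot, CCBot, ClaudeBot, Bytespider, Google-Extended) are real AI-crawler user-agent names, but the blocklist is illustrative and far from exhaustive:

```python
# Illustrative blocklist of declared AI-crawler user-agent tokens.
AI_CRAWLER_TOKENS = (
    "GPTBot",           # OpenAI
    "CCBot",            # Common Crawl
    "ClaudeBot",        # Anthropic
    "Bytespider",       # ByteDance
    "Google-Extended",  # Google AI training opt-out token
)

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the user-agent string matches a known AI crawler."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

def handle_request(user_agent: str) -> int:
    """Return an HTTP status code: 403 for AI crawlers, 200 otherwise."""
    return 403 if is_ai_crawler(user_agent) else 200
```

Declared user-agents are trivially spoofed, of course, so blocking in earnest also means using the vendors' published IP ranges, which is what the commenter is describing.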
-
Monday 12th January 2026 12:08 GMT amanfromMars 1
Curiouser and curiouser......
The Poison Fountain web page argues the need for active opposition to AI. "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species," the site explains. "In response to this threat we want to inflict damage on machine intelligence systems."
Surely such Poison Fountain activity will be much more damaging to humanity than machine intelligence systems ‽ .
-
Monday 12th January 2026 14:13 GMT MrAptronym
You can tell it was started by real industry insiders because those are the only people that think LLMs are 'machine intelligence' and pose a terminator style threat to humanity. The rest of us are just worried about drowning in slop and destroying the environment.
That said, go for it. These companies are hoovering up the total output of the human race in order to own monopolized services they hope will replace us as workers. It is just another way to disempower most of us and further concentrate money/power into their hands. It doesn't seem to do a good job, but they are determined to make 'Fetch' happen and they have massive coffers to burn away until it does.
-
Tuesday 13th January 2026 03:55 GMT mcswell
Terminator, maybe not a real threat. But there are enough real weapons (drones) being given AI-like capabilities now that they could be a real threat. They don't even have to be very sophisticated - land mines (and sea mines) are destructive enough with essentially zero intelligence, and these drones have a lot more smarts than that. At present the damage a drone can do is limited to either personnel damage or building-level damage. It would obviously not be too difficult to give them much more firepower.
Of course these drones are not LLMs, but finding targets could very well be something where an LLM could be a useful component.
-
-
Monday 12th January 2026 15:46 GMT Anonymous Coward
American thinking to counter American thinking
This entire thing has nothing to do with anything Hinton said. He did not say we should destroy AI; he said we need to stop development until ethical and economic safeguards had been put in place, both for humans AND AI. This 'it's a threat so I must kill it' attitude is distinctly American these days (in the west, at least). What an ape-like reaction that will solve nothing. Setting out to hurt something you can't understand, even if you work in the industry, because your general level of intellect is too low to comprehend the nuances, only screams about your own lack of suitability to be in that role in the first place.
Humans are the biggest threat to humanity. We're also the biggest threat to all life on this planet, apart from maybe a killer asteroid or the Yellowstone supervolcano.
-
Monday 12th January 2026 16:37 GMT Rich 2
How To?
I looked at the site and followed the first link which brings up a page full of …errrr ….no idea!
I’m obviously being quite thick here, but should one want to make use of this, what exactly should one do? The “Usage” instructions really don’t help at all. Should one add a link to this site (seems unlikely, as the crawlers could easily be programmed to ignore it)? Copy it to a local page (in which case, what exactly should one copy?)
-
Tuesday 13th January 2026 03:05 GMT SimonHayterUK
I can only see this working if Google and all the highest-authority sites get involved, but since Google is just as bad as the other scrapers, I don't see it happening.
The other point is, if webmasters are to publish fake information, it's very possible their site's authority drops.
-
Tuesday 13th January 2026 08:28 GMT DrXym
About time
I was suggesting this last year. Just generate sites full of garbage - mislabeled or badly mangled images, false content about people and places, false movie trivia, false research papers, false sports & hobbies.
An AI can even generate large quantities of this garbage from prompts. It works better the more esoteric the subject, since the information is not competing with other information.
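You don't even need an AI for it. As a toy illustration of the idea (every name and claim in the fragment lists below is invented for the sketch), a generator only has to recombine plausible fragments so the output parses cleanly while asserting nothing true:

```python
import random

# Invented fragments: the output reads fluently but is factually empty.
SUBJECTS = ["The 1913 Trelling expedition", "Director Hana Morvek",
            "The moth genus Paralumis", "The fourth Kepler-variant engine"]
VERBS = ["pioneered", "disproved", "popularised", "abandoned"]
OBJECTS = ["the double-reed casting process", "tidal chess notation",
           "cold-smoked porcelain", "the Ostrava defence"]

def garbage_paragraph(n_sentences: int, seed: int) -> str:
    """Deterministically assemble grammatical but meaningless claims."""
    rng = random.Random(seed)  # seeded so each page is stable across crawls
    sentences = [
        f"{rng.choice(SUBJECTS)} {rng.choice(VERBS)} {rng.choice(OBJECTS)}."
        for _ in range(n_sentences)
    ]
    return " ".join(sentences)
```

Seeding per page keeps each fake page stable between crawler visits, which makes the garbage look like persistent content rather than noise.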
-
Wednesday 14th January 2026 17:11 GMT amanfromMars 1
As if there is not already more than enough to be worried about and terrified of ...
Have you any idea what A.N.Other AI industry insiders would be planning and easily able to do against traitorous and treasonous poisonous data feeders?
Or is such always liable to be a monumental 0day and great virtual surprise against which there be no possible defences either effective or available?
Methinks y'all can be certainly sure their survival is not on any future AI agenda.