A distillery?
AI complaining about being copied is a bit like the British Museum complaining about thieves.
Anthropic, riding a wave of goodwill after resisting demands from the US Defense Department to soften model safeguards, is reportedly planning to go public as soon as Q4 2026. That may not be soon enough to avoid the undertow of financial pressure, competition from China, and the challenge of delivering AI models that provide …
I doubt it would be argued successfully, but the Chinese companies who copied the Anthropic models could try "fruit of the poisonous tree": Anthropic's model is itself stolen goods, therefore Anthropic cannot come to court because they have unclean hands. At least I think I have that right; I may be mixing up defenses. I am NOT a lawyer!
"Anthropic's model is stolen goods, therefore they cannot come to court as they have unclean hands. "
Chinese companies winning against US companies in a US court? On Intellectual Property?
I seriously doubt it. Just as I doubt whether a US company would win a case in a Chinese court.
IANAL either but I think that you're slightly wrong. AIUI, it's about validity of evidence: if it's gathered illegally then it can't be used against the accused.
I have heard of cases of prosecutors doing exactly that and then using a "parallel" chain of evidence (so-called parallel construction) that supports the claims without being tainted by the original evidence-gathering process... but that's a whole different kettle of fish.
Ask them about Taiwan, or Tiananmen Square, or the so-called "One China Policy", or try to argue about the nine-dash line: the Party's propaganda line is there, if you go looking for it. Some of them apply the censorship filter after the response (such as chat.deepseek.com, which, when I asked which politician Winnie the Pooh looked like, gave me the beginning of an answer and then immediately switched to its "beyond the scope of this chatbot" response once it detected "Winnie the Pooh" in the output), so the 'open-weight' versions run by outside companies don't have the same censorship built in.
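That after-the-fact filtering is easy to picture: the model streams its answer while a separate check scans the accumulated output, and the reply gets swapped for a canned refusal the moment a blocked term appears. A minimal sketch of that pattern, with a made-up blocklist and refusal string (this is an illustration, not DeepSeek's actual implementation):

```python
# Illustrative post-hoc output filter: scan the streamed response for
# blocked terms and cut over to a canned refusal once one is detected.
# BLOCKED_TERMS and REFUSAL are invented for this example.

BLOCKED_TERMS = ["winnie the pooh"]  # hypothetical blocklist
REFUSAL = "That is beyond the scope of this chatbot."

def filter_stream(tokens):
    """Yield tokens until a blocked term appears, then emit the refusal."""
    emitted = ""
    for tok in tokens:
        emitted += tok
        if any(term in emitted.lower() for term in BLOCKED_TERMS):
            # Abandon the rest of the model's answer mid-stream.
            yield "\n" + REFUSAL
            return
        yield tok

# A made-up model response, streamed word by word:
words = "He resembles Winnie the Pooh according to some memes".split()
print("".join(filter_stream(w + " " for w in words)))
```

Note that the start of the answer leaks out before the switch, exactly the behaviour described above; that leak is the tell that the filter sits outside the model rather than in its weights.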
Funny, and sad, that they don't think their political system can withstand US-style freedom of speech. If it's really such a great form of government, it shouldn't matter at all what the public says about it.
The censorship point is valid but it's surface-level, and it rests on an assumption that doesn't survive much scrutiny: that Western democracies actually have meaningfully free discourse.
During Covid, Western governments actively pressured social media platforms to suppress content that later turned out to be legitimate debate - lab leak theory being the most obvious example, which went from "banned misinformation" to "plausible hypothesis" within about eighteen months. That wasn't the CPC. That was the US government coordinating with private companies to shape what could be discussed publicly, as documented in the Twitter Files and confirmed in congressional testimony.
And beyond state action, the West has its own censorship apparatus - it's just privatised. When a handful of billionaires own the major media outlets and platforms, they decide what's acceptable discourse. The phenomenon of "cancellation" isn't mob justice - it's editorial control exercised by people with economic power over people without it. You can technically say whatever you want. You just can't say it anywhere anyone will hear it, and you might lose your livelihood if you try. That's not the gulag, but calling it "freedom of speech" is generous.
The deeper issue though isn't free speech - it's what the whole system actually does. Western liberal democracy has evolved into a machinery that systematically extracts from the working class to feed capital. Real wages have been stagnant for decades while productivity gains flow to shareholders. The tax structure has shifted onto employment income and consumption while corporate effective rates have dropped. Home ownership is collapsing among younger generations. Small business formation is in long-term decline as market concentration grows. Social mobility in the US and UK has measurably declined - born poor, statistically staying poor.
The working class has essentially been turned into a tax proxy - they earn just enough to be taxed and to consume, which funds the infrastructure and legal systems that corporations use to generate returns that never flow back down. The rich get the profits, workers get the bill, and the whole thing is wrapped in "freedom and democracy" branding.
So yes, it's funny that DeepSeek censors Winnie the Pooh. But maybe spend less time laughing at someone else's propaganda and more time examining your own - because the story that you live in a free society with meaningful opportunity and open discourse is doing a lot of heavy lifting right now.
It's not whataboutism. It's questioning the premise.
The argument assumes the critic is speaking from a society with functioning free speech and democratic accountability. I'm saying that's largely an illusion.
In Western democracies, policy is not shaped by voters. It's shaped by whoever is sharing wine and steak with freshly minted ministers the week after an election. Campaign promises evaporate on contact with office because the people who actually decide what gets implemented aren't the electorate - they're wealth managers, lobbyists, and donor networks. There's published research on this - Gilens and Page at Princeton found that the preferences of average citizens have near-zero statistical impact on policy outcomes, while economic elites and organised interest groups get what they want with striking regularity.
The media that's supposed to hold power accountable is owned by the same class of people it would need to scrutinise. Articles on sensitive topics routinely have comments disabled. Content that's inconvenient for the owning class gets algorithmically suppressed. You won't go to prison for posting an opinion, true - but it'll be shadow-banned, demonetised, or buried so nobody sees it. The outcome is the same as censorship, it's just plausibly deniable.
The poor in the West functionally don't have a voice. They can vote - between two parties funded by the same interests. They can speak - on platforms owned by billionaires who control the algorithm. They have freedom of speech in the same way they have freedom to buy a yacht.
So when someone points at DeepSeek censoring Winnie the Pooh and says "look how unfree they are" - sure. But at least that censorship is visible. Ours is wrapped in so many layers of platform policy, algorithmic curation, and manufactured consent that most people don't even recognise it's happening. I'm not sure which is worse.
"It's shaped by whoever is sharing wine and steak with freshly minted ministers the week after an election."
Sorry, but in global democracy and transparency rankings, over two dozen countries rank above the USA.
Not every country is as corrupt as the US.
And who funds those transparency rankings? The same constellation of Western foundations, think tanks, and NGOs backed by the same capital class that benefits from the current arrangement. Transparency International's donor list reads like a who's who of Western government agencies and corporate foundations. They're essentially grading their own homework.
Funny how the countries that rank highest on these indices also happen to be the ones whose financial systems enable the most sophisticated tax avoidance structures on earth. The Netherlands is perennially near the top of transparency rankings while simultaneously operating as one of the world's largest corporate tax conduits. Denmark scores beautifully while its pension funds quietly profit from global labour arbitrage. It's not that these countries aren't corrupt - it's that their corruption is so structurally embedded it doesn't register as corruption. It just looks like policy.
The difference between a bribe and a lobbying fee is essentially a receipt.
Amazon. It's on the donor list. A company with a well-documented history of aggressive tax avoidance and warehouse labour practices that are currently under investigation in multiple jurisdictions is funding the organisation that grades countries on corruption. I didn't think I needed to spell out why that's suspect.
Looks like Amazon is a corporate donor, with such donors collectively accounting for less than 1% of Transparency International's income, according to the pie chart on that page.
It seems unlikely to me that such a small donation would have any undue influence on outcomes of international investigations.
The rallying cry of the conspiracy-theory peddlers is Trust No One.
They use it as a badge of honor that they reject all sources of information as all information is corrupted by the Big Conspiracy.
But if you trust no one, you end up like Descartes, with only one certainty: I think, therefore I am.
There is not much to build on from there.
So every conspiracy peddler who says he doesn't trust the MSM has to get his "information" from somewhere. That "somewhere" is invariably some talking point from some random podcast or Xitter handle that never leads to falsifiable sources, i.e., something you can actually verify.
So trusting no one invariably leads to trusting the wrong ones. Which is Mission Accomplished for the conspiracy peddlers.
"Funny how the countries that rank highest on these indices also happen to be the ones whose financial systems enable the most sophisticated tax avoidance structures on earth."
Indeed, but they also collect the most taxes as a percentage of GDP and have pretty low Gini coefficients.
So their tax avoidance and bribery didn't work out the way they normally do in, e.g., the USA.
Those countries collect high taxes - overwhelmingly from labour and consumption. The corporate profit shifting through their jurisdictions benefits multinationals that are largely not paying into those same domestic tax bases. The Netherlands collects plenty of tax from Dutch workers while serving as a conduit that drains tax revenue from poorer countries. The low Gini coefficient is domestic comfort funded partly by exporting inequality elsewhere.
It's a laundering operation with extra steps. The citizens live well, so nobody asks uncomfortable questions about why Shell's royalty payments are routed through Amsterdam.
And yet the practice continues. That's precisely the point - you're allowed to air the programme, write the article, ask the question. Nothing changes. The conduits stay open, the royalties keep flowing through Amsterdam, and everyone feels virtuous for having had the debate. The freedom to complain about the problem is functioning as a substitute for actually fixing it.
>The argument assumes the critic is speaking from a society with functioning free speech and democratic accountability.
No, it doesn't. The argument assumes the critic is speaking from a society with more free speech and democratic accountability than China. Nothing more, nothing less. Everything is relative; absolutism is a trap.
You can absolutely argue that free speech and accountability issues exist in the USA and in the West in general, and I'll even agree wholeheartedly.
Showing that those issues are greater-or-equal than the same issues in China, on the other hand, seems nearly impossible.
"more free speech" is doing a lot of heavy lifting if the additional speech you're free to make has no material impact on how you're governed. The question isn't just whether you can say things - it's whether saying them matters.
If that speech has been structurally decoupled from power, what exactly is the practical difference? You're free to shout into a void that has been carefully architected to absorb the noise without anything changing.
Two cheeks of the same arse.
This is going to get so downvoted, alas.
I've tried to make similar arguments about much simpler, more tech-oriented stuff, but I've just run into a "Four Legs Good, Two Legs Bad" brick wall. I've put it down to a conditioning process that's always been with us but turned particularly pointed about 40 years ago, when the media all started singing from the same hymnal. If you live inside this bubble, it takes some effort to push back against it and maintain a perspective. I had it easier because I moved to the US about that time, so I had the benefit not just of being outside the bubble but of having different perspectives.
One thing that I think is difficult for contemporary British people to fully grasp is just how irrelevant the UK is now, especially post-Brexit. With regard to China it still carries on as if it's the mid-to-late 19th century, expecting to be treated as some kind of important world power and constantly being surprised -- and not a little offended -- when it's not. China has vast resources, both economic and human, and properly organised, and given enough time, those resources can duplicate and beat anything else in the world. Put simply, they've got a middle class bigger than the entire US population, and they're awash with highly educated (and very smart) people. Instead of recognising this and learning to live with and compete with it (because the Chinese seem smart enough to avoid the kind of imperial hubris that nearly ruined them during their 'century of humiliation'), we still beat the drum for imperial domination, the Menace of Eastasia. It's this, not Chinese subterfuge or military might, that's going to bring us down -- and the crime is that we're doing it to ourselves; they don't have to lift a finger.
As if Musk's AI is any better, what with him reprogramming it to be a good MAGA soldier anytime anyone calls his attention to it telling an uncomfortable truth that doesn't conform with Herr Trump's latest blatherings.
Every AI is going to inherit the biases of its "owners" by virtue of what training data they give it, and what guardrails they attempt to provide for what it is or is not allowed to do or say. Calling out China's AIs for hiding the truth about Tiananmen Square is pretty silly when the glass house inhabited by those of us in the West with our supposedly "free" AIs is just as breakable. We just don't have as obvious a test available to see what it is hiding.
If you're arguing that all the US AIs are as censored as the Chinese ones, I'm not sure how you're justifying that. You ask a Chinese AI using its official interface about Tiananmen, and it will either claim that whatever it was ('the tank guy' for example) never happened, or that Chinese people agree that the response to the protest was reasonable and justified. Or it will refuse entirely to discuss certain political issues.
US AIs will occasionally refuse to discuss politics or other 'sensitive' topics, but I've never seen one try to argue that historically documented events 'didn't happen' just because the US government disagrees with the interpretation of said events.
I'm not saying they are as censored, just that they ARE censored, and to a greater degree than any of us would be comfortable with if we knew the full extent of it.
That's just it - we don't know. We don't have a simple test like we do for a Chinese AI to determine whether it is censored, and there isn't likely to be a single question they'd all be censored on, since Musk's goals and biases aren't the same as Altman's, whose aren't the same as Zuck's, and so forth. I guess some will see that as an advantage, because one of them is probably going to give you a true uncensored answer even if the others don't. But what if you aren't able to recognize what the "true uncensored answer" is? If you accept Grok's output because you like Musk's politics, you don't know when it is lying to you - and if, when it lied, you tried another AI whose answer conflicted with Grok's, you'd probably still believe Grok and stay in the dark about the truth.
Well... I don't really know how to solve this problem of detecting subtle hidden censorship that essentially "nobody can ever find". Proof of the non-existence of something so nebulous as 'well hidden censorship' is impossible. If someone happens to find some particular form of it, well you can just claim that's not the hidden censorship you were talking about and there must be more.
Seeking hidden conspiracy in everything is a very difficult way to go through life, and I don't recommend it.
Check out Paul Nakasone and his previous jobs.
Retired four-star US Army general who served as commander of United States Cyber Command, director of the National Security Agency, chief of the Central Security Service, and commander of US Second Army.
He also joined OpenAI's board, with a priority focus on its Safety and Security Committee for ChatGPT. Personally I think I like the guy, but let's be serious about our own freedoms.
> Funny, and sad, that they don't think their political system can withstand US-style freedom of speech.
Are you saying that their equivalent of the FCC doesn't threaten to remove the broadcast license of news channels that don't report favorably everything the government does?
Don't they also deport people who criticize Israel's genocide?
Yeah, it's a good thing they're not all white-trash like Grok and ChatGPT. Their focus on safety is quite appropriate seeing how the hazard of AI-psychosis lurks around every corner of the tech's usage, and they recently added pretty chartmaking ...
Inasmuch as safety comes in through guardrails that are post-processing add-ons to the principal generative subsystem (iiuc), it shouldn't be too hard for them to offer a de-guardrailed model (go kart, dune buggy, dragster, ...) to 'accredited' security researchers, imho. Direct costs (to users) may remain higher than for cheap Chinese knock-offs, but should still beat the very real indirect costs of seeing all that arduous pentesting work stealthily siphoned back to the mother ship and 0-days applied right back against our own people.
It's as the age-old Zen (and the art of motorcycle maintenance) saying goes: if your work/data/head is worth $0.27, then use a $0.27 model/harness/helmet -- YOLO - LO - LOLO - LO - LOLO - LO ... splat! ;(
> This individual views the lack of transparency by US commercial AI companies as a problem. "They say it's an existential threat but then demand unaccountable control of them?"
They built the LLM, they run the service - why shouldn't they have control over the LLM without being "accountable" to a random person?
Like every other business, Anthropic are accountable to their shareholders (and the law of the land*); if you don't like the new paint job on the product, maybe the product isn't being aimed at you?
You don't like the service, you go elsewhere - oh, look, he did exactly that. Problem solved. Anthropic lost his subscription - and his 7 friends - but in doing so, maybe they gained 80 corporate customers who are quite happy their marketing team are being blocked from spending their time discussing Broadway musicals.
* go careful there, if you raise the biggest objection, how these companies sourced the materials to build the LLMs, if you win you'll get 'em shut down (if not by order, the cost of continuing will be too high) and then lose all access, filtered or not. El Reg readers may applaud that result, but this guy wants to use Anthropic's stuff.
...... Remotely Exploitable Existential Threat Vulnerabilities and Deadly Certainties.
The oxymoronic useful fool is just as ignorant a puppet amongst arrogant muppets, spouting the nonsense that has one believing and supporting the subversive perversion and corrupting conversations that pronounce democracy and law responsible and accountable for the provision of freedom and justice whenever they so very clearly aren't.
I see nobody is talking about the irony of going public off the back of that wave of goodwill - the goodwill they earned by showing the kind of moral decency that, had the company been owned by shareholders at the time, would never have happened. As has been shown repeatedly throughout history, shareholders don't give a flying fuck about morals.
The blaggards, obviously cheating by being girly swots and clever-clogs
I think it's time for us to deploy our ultimate weapon.
Teams of management consultants will be parachuted into China, followed by a beach assault from His Imperial Presidency's 1st Private Equity division.
Soon their industries will be in ruins
Claude also ignores preference rules designed to limit wasted tokens.
And Claude will waste tokens on crap that has nothing to do with what you asked, and/or will make changes that you never asked for and that are absolutely wrong.
Then it wastes more tokens reverting. It also makes things up - always double-check it.
Not only that, when you see it 'going off the rails' and hit stop, it will keep going and going and going to finish up wasting tokens, over and over and over again.
Then, when you add specific rules against that to each and every prompt, hit stop, and call it out for ignoring the rules, it prints an apology, then continues to ignore those rules and waste tokens in exactly the same way.
So they make Claude waste tokens, prevent Claude from following user rules meant to prevent the waste, and then implement token-limiting procedures?