Professors are trying to steal knowledge from freshman students? Yeah, right.
How AI could eat itself: Competitors can probe models to steal their secrets and clone them
Two of the world's biggest AI companies, Google and OpenAI, both warned this week that competitors including China's DeepSeek are probing their models to steal the underlying reasoning, and then copy these capabilities in their own AI systems. "This is coming from threat actors throughout the globe," Google Threat Intelligence …
COMMENTS
-
Tuesday 17th February 2026 12:47 GMT Elongated Muskrat
You joke, but it's not exactly uncommon practice for university professors to claim the credit for work done by their postgraduates and postdoctorates, especially when it comes to publishing academic papers. This counts double if the person who did the actual work was female. It's a genuine problem in academia.
-
Saturday 14th February 2026 13:44 GMT thames
Re: Pot, kettle, all black
The entire Internet is having to implement measures to try to stop the American AI companies from stealing all their stuff and grinding their servers into the dust in the process, but that's OK.
But if someone does the same to them, then that is bad, really, really bad and it must be stopped.
I have zero sympathy for them.
-
Saturday 14th February 2026 21:36 GMT Rikki Tikki
Re: Pot, kettle, all black
Upvoted, but a reminder that IP theft used to be the "American Way".
The 18th and 19th centuries saw large scale use of European industrial technologies to develop the US economy. Now, they don't like it when other countries do the same to them.
So, totally agree with the "pot, kettle" title.
-
Sunday 15th February 2026 00:11 GMT Long John Silver
Re: Pot, kettle, all black
Yes, but one cannot 'steal ideas' in the Biblical sense of nicking oxen, asses, and wives. An idea and its potential for application, enlightenment, or amusement, is not diminished by wide usage; putting it as William James might have said - "The 'cash-value' of an idea bears no relationship to scarcity." Conversely, the ubiquity of an idea is no guarantor of its validity.
What can cause damage is wrongfully claiming to be the originator of an idea. It's not just a matter of kudos. It bears upon recognition and reputation, each of which aid gaining patronage to support more originality.
-
Sunday 15th February 2026 04:22 GMT Anonymous Coward
Re: Pot, kettle, all black
Agreed ... with the consideration that bad ideas (eg. false news) can spread faster than good ones (eg. truth), and that the genAI babble mills promoted these days can only worsen that state of affairs by producing a continuous industrial stream of such loony-frosted fruitcake notions.
Wide propagation and usage of good ideas is a net positive. Damage can come from both wrongful attribution claims and wanton spreading of complete nonsense (outside of artistry), imho.
-
Sunday 15th February 2026 09:13 GMT Bebu sa Ware
"Pirates stole my pirate ship"
I recall Captain Pugwash suffered this indignity on several occasions, but I don't recall the British Navy's assistance ever being requested. In the few episodes where the British West Indies Administration and Navy appeared, they were more bumbling than Pugwash himself (and his crew).
-
Saturday 14th February 2026 12:53 GMT druck
They are stealing the stuff we stole
I thought the term frontier model was nonsense, but it's quite apt: just like the frontier of the Wild West, everything the first settlers gained was stolen from the Native Americans, and then stolen again by the next wave of gunslingers.
Hopefully if all those billions invested in training models can be ripped off for peanuts, the AI bubble will burst even faster.
-
Saturday 14th February 2026 19:35 GMT paluster
Being serious for a moment
Tempting though it is to laugh when the AI children start crying about how mean people are being... a serious question about the fearmongering from the bloke at Google.
If companies, especially in the financial sector, are training models on "internal, sensitive data", why would they deploy them where a hostile distillation attack was possible? Are US companies really that stupid?
-
Saturday 14th February 2026 20:32 GMT Like a badger
Re: Being serious for a moment
Are US companies really that stupid?
Based on the behaviour we observe there can only be one response. In their haste to seize the moment, all possible and potential risks have been overlooked. Sometimes it's because the board (and the people it actually listens to) are happily ignorant of the various possible risks, sometimes they have simply rationalised away the risks because of FOMO. I suspect most of the time it's both.
Imagine being a competent CIO of a US company these days. You'd be AI-sceptic, willing to take certain known risks experimentally. But all of the board want the company to leap headlong into the AI future, becoming an early adopter, seizing the vast, vast rewards. If you won't do it, they'll find somebody who will. Assuming you do what's right (at the time) for your career....then if you're around when it all goes horribly, horribly wrong, whose arse will be hung out on the line?
-
Sunday 15th February 2026 03:42 GMT Anonymous Coward
Re: Being serious for a moment
It's like they're drunk, high, experiencing shared hallucinatory states of altered consciousness, that fogs whatever modicum of judgment their minds might have formerly managed to exercise, if any at all. They're engaged in a collective departure from rationality fueled by cognitive biases, emotional extremes, and social dynamics, that feeds on greed, overconfidence, herd behavior, and a huge dose of FOMO performance anxiety imho.
Nuttin' but bog standard ISO 45003 certifiable batshit craziness, however, entirely treatable by a strict regimen of approved straitjackets, associated leather restraints, anxiolytics, tranquilizers, sedatives, muscle relaxants, cyclohexanone-derived general anesthetics with analgesic and hallucinogenic properties, semi-synthetic opiate analgesics (piperidine or not), and what have you, including the likes of first-class composable delimited continuations ... plenty of choices for the 'temporarily' indisposed!
On the other hand, there's too few enforceable patents on booze so forget that, and until we can offer a subscription model for ElectroConvulsive Therapy as-a-Service (ECTaaS) that too (and lobotomy) remains out of favor. Like it or not, this Lite-Brite AI (so-called) future (even more so-called) can't be played without a full set of approved proprietary pegs (whatever that means!)! </wut?!> ;););)
-
Sunday 15th February 2026 09:48 GMT Bebu sa Ware
"subscription model for ElectroConvulsive Therapy as-a-Service (ECTaaS)"
Zuckerberg could reengineer his wildly successful Metaverse artificial reality headwear for ECTaaS.
Should be a big hit with the majority of the users of his social media platform, who are clearly in need of a factory reset in the head department, although a prefrontal leucotomy might be preferable. The required robotic tech for that might be more in line with Space Karen's portfolio and social media users - a merger of xAI, Neuralink and SpaceX.†
† Uncomfortably close to John Lumic's Cybus Industries' surgical human "upgrade" robots.
-
Saturday 14th February 2026 21:24 GMT TaabuTheCat
Parasites
When you consider the end goal of all these parasites - Google, Microsoft, Anthropic, etc. - is to put all knowledge behind a paywall, then the "theft" being reported needs to continue full steam ahead.
Call me cynical, but I can easily see a future when there are no search engines, no open web, only "Ticketmaster/Venue" exclusive deals where anyone creating new knowledge cuts a deal with their AI partner to sell them what would have been openly available on their website. And if you want that information, you'll have to pay one of these fucking parasite middlemen to get it.
-
Saturday 14th February 2026 21:28 GMT vogon00
So what?
Don't know how anyone can act surprised about this. If stuff is published openly on the web, you can expect it to be used....which has been the situation ever since Berners-Lee's brainchild became commercialised (the same is true for the other ways info transits the internet, but you get the idea).
What this info gets used for is the interesting bit, and depends on your POV. Use can be for good or for bad, but who decides on the goodness or badness of any use case? Back in the day, for example, Napster were exploiters to some, heroes to others and disruptors to all.
Anything as 'hot' as AI will always have people looking to use it to get an 'edge' somehow, either for sales or to improve your own product or service somehow. I don't see how the AI companies can be surprised, as their own scraping activities have shown that any info published - irrespective of copyright etc - is fair game.
Still, you have to either admire their brass neck or wonder how stupid they are!
-
Sunday 15th February 2026 06:10 GMT T. F. M. Reader
Baffled
If I understand the article correctly, the "threat" of "distillation" is that I can ask Gemini or ChatGPT or Claude something - maybe many somethings - and infer enough information from the answers about the model's secret sauce[*] that can then be used in my own LLM to make it competitive with the original. This does not tell me what the hell it is all about, really. Can someone, El Reg included, provide an illustrative example or two of what such "secret sauce" may be and how useful details can be uncovered through normal use? Alternatively, I'd appreciate an explanation of why my understanding is incorrect.
[*] "Distilling the secret sauce" does make some sense when one thinks of a distillery, I admit.
-
Sunday 15th February 2026 08:38 GMT sarusa
Re: Baffled
There's no specific secret sauce, just money and the model weights it produced.
Basically, the Big Evil American Corporations have spent hundreds of billions of dollars stealing zettabytes of data and training their models with it. This is a very laborious process which takes hundreds of millions of dollars, shiteloads of GPUs, shiteloads of power, running for months for a single run. They map out how all the tokens relate to each other in a space with thousands of dimensions, like 'car' appears pretty much in the same location in this space as 'automobile'. 'Kiwi' is a long way from those but is close to both 'fruit' and 'New Zealand'. But then 'Ford' is close to both 'car' and 'unreliable piece of shite' (I know, I know, that's not actually a single token). Then they compress this a lot, like saying okay, you can treat 'automobile' and 'car' as the same thing, because you can't distribute or efficiently run something that needs trillions of numbers. What comes out at the end, after months of processing and hundreds of millions of dollars, is their model, like ChatGPT 4.1.
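To make that "tokens in a high-dimensional space" idea concrete, here's a toy sketch (invented 3-dimensional vectors, not anyone's real weights - real models learn embeddings with thousands of dimensions during training) showing how closeness in that space is measured with cosine similarity:

```python
import math

# Toy 3-dimensional embeddings. The numbers are made up for illustration;
# a real model learns these vectors during training.
embeddings = {
    "car":        [0.90, 0.10, 0.00],
    "automobile": [0.88, 0.12, 0.02],
    "kiwi":       [0.05, 0.90, 0.40],
}

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way (same "location" in the
    # space); values near 0 mean the tokens are unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# 'car' and 'automobile' sit close together; 'kiwi' is far from both.
print(cosine_similarity(embeddings["car"], embeddings["automobile"]))
print(cosine_similarity(embeddings["car"], embeddings["kiwi"]))
```

The "compress this a lot" step amounts to exploiting exactly this closeness: near-identical vectors can be merged or quantised without changing the model's behaviour much.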
Well, then DeepSeek comes along and just queries the shite out of ChatGPT from tens of thousands of separate PCs, pretending to be different actual people, but the goal is basically 'what are your final weights?'. And they were able to do this for $300K-6M (they have been deliberately cagey, because they pretended they weren't just stealing from other models, but the $6M is probably inflated to make 'we weren't stealing!' seem less likely).
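The distillation loop itself is conceptually dead simple. A bare-bones sketch (the `teacher_model` here is a stand-in function, not any real API): query the expensive model, record its answers, and use the (prompt, answer) pairs as labelled training data for a cheap student model.

```python
def teacher_model(prompt):
    # Stand-in for an expensive frontier model behind an API.
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know")

def harvest_training_data(prompts):
    # The "attack" is just normal-looking usage at scale: every
    # (prompt, answer) pair becomes a labelled training example.
    return [(p, teacher_model(p)) for p in prompts]

dataset = harvest_training_data(["capital of France?", "2 + 2?"])
print(dataset)
```

The student is then trained on `dataset` instead of on raw scraped text, so it inherits the teacher's behaviour for a fraction of the teacher's training cost - which is why the providers can't easily distinguish it from legitimate heavy use.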
Honestly, I'm fine with this. 'Oh noes, you stole all the stuff I stole from the peasants, you [racial slur] miscreants!' Well Boo Effing Hoo. You wouldn't have DeepSeek if the US companies hadn't spent billions of dollars stealing that stuff first. But then it's absolutely fine for someone else to liberate that and distribute it.
-
Sunday 15th February 2026 09:01 GMT Bebu sa Ware
"American-led, democratic AI."
"Him funny." Vintage Altman I suspect. "Loose lips sink mips."
If it is cheaper to interrogate competitors' models than to train your own, then perhaps those competitors aren't charging an economic fee for the service and are effectively "dumping" (in the WTO/GATT sense) their AI into the market to discourage new entrants. (Anti-competitive measure.)
Democratic is an odd word to apply to AI — presumably meant popular or accessible.
Anyway it would be "republican AI" surely ? And American .45 lead ?
-
Sunday 15th February 2026 21:14 GMT DS999
So AI people think stealing is fine
So long as it is stealing copyrighted data in Library of Congress sized units to train their models. But if anyone tries to pick and poke at their models to figure out how they work, that's criminal behavior and Something Needs To Be Done, like immediately!
-
Monday 16th February 2026 10:02 GMT mark l 2
It's going to take an "ecosystem security" approach to protect against distillation, and this will require some US government assistance, OpenAI says. "It is not enough for any one lab to harden its protection because adversaries will simply default to the least protected provider," according to the memo.
I'm not sure exactly what OpenAI and Google think the government can do to stop other companies probing their models if they can't stop them at a technical level. Or are they seeking more threats of tariffs from Trump against China if the Chinese government doesn't stop DeepSeek and others from stealing the content that American companies worked so hard to steal for themselves?