DeepSeek or DeepFake? Our vultures circle China's hottest AI

There's really only one topic for the Kettle this week. DeepSeek. What began as a Chinese hedge-fund venture has blown away nearly a trillion dollars in stock market value from Nvidia, Microsoft, and Meta. But are DeepSeek's freely available V3 and just-released R1 LLMs all they are cracked up to be? We have our doubts, as we …

  1. Inventor of the Marmite Laser Silver badge

    Transcript please. I really can't be bothered with podcasts etc.

    1. Blue Shirt Guy

      This. I'm reading El Reg because I either can't (no headphones) or don't want to listen to audio. If it's posted as a text story then it needs to be summarised in text to avoid wasting our time. Otherwise just put it on a YouTube or similar channel, or at the very least keep it away from the written stories, in a completely separate part of the site.

      1. Anonymous Coward
        Anonymous Coward

        I'm fully behind this slightly grumpy request for a transcript. Often you can get the transcript text directly from YouTube but not in this case.

        https://www.youtube.com/watch?v=pbPSVjhmtGU

        I tried three free online transcription services and all three did a decent job of creating a transcript.

        1. This one I preferred because there were no timestamps, just the text, and no requirement to log in to the website. It also created the transcript very quickly.

        https://youtubetotranscript.com/transcript?v=pbPSVjhmtGU&current_language_code=en

        2. With this one you can read it on the website, inside an accordion control with timestamps. Not that great. You have to log in to download the text, but you can then download it without timestamps. Fortunately bugmenot.com has a working account for this website, so you can download the transcript without giving up too much privacy:

        https://notegpt.io/youtube-transcript-generator?id=6f941d2f

        3. This one is the worst to read, for me, as the timestamps are in the way. It does allow you to click sections of text in the transcript to jump to that time in the video, which I can see would be a useful feature for some people, or some videos.

        https://tactiq.io/tools/run/youtube_transcript?yt=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DpbPSVjhmtGU

        1. Snake Silver badge
          Thumb Up

          transcript

          Wonderful! Thank you!

          Just like DPReview, posting a video without a transcript is simply a lazy cop-out.

        2. Anonymous Coward
          Anonymous Coward

          That's brilliant.

          Now we just need text-to-speech on transcript output so I don't have to read it.

      2. Anonymous Coward
        Anonymous Coward

        yt-dlp

        can extract transcripts #justsayin

        Getting a transcript from a youtube video is really quite trivial, and often what I do when I can't watch a video.
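
        For anyone who wants to do it themselves, "yt-dlp --write-auto-subs --skip-download URL" drops the auto-generated captions as a .vtt file. The same thing via yt-dlp's Python API looks roughly like this (my own quick sketch, untested, option names as per its docs):

        ```python
        # Quick sketch: grab YouTube's auto-generated captions with yt-dlp's
        # Python API (option names from the yt-dlp docs).
        from yt_dlp import YoutubeDL

        opts = {
            "skip_download": True,      # subtitles only, no video
            "writesubtitles": True,     # uploader-provided subs, if any
            "writeautomaticsub": True,  # YouTube's auto-generated captions
            "subtitleslangs": ["en"],
        }

        with YoutubeDL(opts) as ydl:
            ydl.download(["https://www.youtube.com/watch?v=pbPSVjhmtGU"])
        # Leaves an .en.vtt caption file in the current directory.
        ```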

    2. The man with a spanner

      Isn't the point of this AI malarkey that you can make your own transcripts on the fly?

      Incidentally multiplying the effort, as everyone repeats the same task.

    3. Anonymous Coward
      Anonymous Coward

      easy solution

      Just point an AI tool at the video to transcribe for you. Uh oh, pointing AI to a story about AI is probably like googling google, and may destroy AI as we know it.

      Wait... I thought there was a potential downside here, but I seem to have lost the thread.

      1. Anonymous Coward
        Anonymous Coward

        Re: easy solution

        Wait! What? An actual use for AI?

  2. herberts ghost

    Anthropomorphizing AI

    This video gives us an important observation. We are attributing human characteristics to AI. This is probably not wise. There is some amount of Dunning-Kruger cognitive bias in all of us. We have to attempt to avoid anthropomorphizing AI and really be on the lookout for where and WHY it fails.

    I wonder what Edsger Dijkstra would say about AI.

    1. Anonymous Coward
      Anonymous Coward

      Re: Anthropomorphizing AI

      “The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better.”

      - Edsger Dijkstra

    2. nobody who matters Silver badge

      Re: Anthropomorphizing AI

      <......."We are attributing human characteristics to AI. This is probably not wise"........>

      Probably because it <isn't> AI ;)

    3. Anonymous Coward
      Anonymous Coward

      Re: Anthropomorphizing AI

      It really is simple.

      "If it can't lie;

      It ain't AI"

      Even the first accepted hurdle - the Turing test (which, if it's dismissed, it's because people don't grasp it) - *requires* the "AI" to lie. Not just lie, but invent a completely coherent and credible backstory to regale the tester with.

      Every time there is some sort of supposedly high-level debate around "AI", making that point tends to bring things to a halt, and people have to talk about something else. Vaguely reminiscent of the "what colour should the wheel be?" moment from HHGTTG.

    4. HuBo Silver badge
      Gimp

      Re: Anthropomorphizing AI

      Yeah, Liam (@ ∞-wisdom) linked to Dijkstra's (1975) "How do we tell truths that might hurt?" in his recent TFA on BASIC. Two of his pronouncements seized my ADD straight up:

      "The use of anthropomorphic terminology when dealing with computing systems is a symptom of professional immaturity."

      "Projects promoting programming in "natural language" are intrinsically doomed to fail."

      Oh Edsger, you spanked like the best of 'em!

    5. Nifty

      Re: Anthropomorphizing AI

      "We are attributing human characteristics to AI. This is probably not wise"

      To which you could say that nature in its efficient way has made the human brain a machine-like thing anyway. So the parallels made between 'AI' and human behaviour may be fair points after all. Are humans prone to overrate themselves?

    6. Anonymous Coward
      Anonymous Coward

      Re: Anthropomorphizing AI

      "We have to attempt to avoid anthropomorphizing AI"

      Whilst I totally agree, unfortunately we can't, because many boneheads benchmark their interpretation of an AI being smart against human intelligence, because human intelligence is all they understand...because they are human and experience it every day...they see the creation of AI as something that has to be "human like" first before it can be deemed intelligent, you see this at play even on this so-called forum..."yeah but it's still not better than a human"...but that's what intelligence is to them, humans. It's not their fault though, because intelligence is a very difficult thing to quantify for a lot of people. Yes, humans are intelligent but we aren't really intelligence in its greatest or purest form...nor are we the only examples of intelligence.

      Dijkstra is an interesting example, because as smart as he was, his algorithms are shit compared to naturally occurring intelligence that does the same thing. There are forms of intelligence that exist that are far beyond human intelligence (and indeed human understanding to a certain degree) but an average person won't deem it to be a form of intelligence. For example, the ability of bees to find the shortest route around a field of flowers on the fly (as it were), bees make Dijkstra's algorithm look crap..."yeah, but they can't build rockets and get to the moon or produce artistic masterpieces"...well yeah, but even though we can do that stuff, we still haven't come up with an algorithm that finds the shortest route somewhere that doesn't take ages and tons of compute to figure out. That's just one example, but there are plenty of animals out there that can do things better (and crucially faster) than we can with their natural intelligence. Rats, corvids, dolphins etc etc...one thing that AI is really good at that humans are quite slow at is pattern recognition and trend prediction...but those are really fucking boring and if you present them to the man on the street, they won't be blown away by it...we've had solid AI technology for this sort of thing for a long time and it never really interested anyone...enter ChatGPT that can have human-like conversations and everyone loses their minds!
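
      (For anyone who hasn't met it, this is roughly all the celebrated shortest-path algorithm does - a rough Python sketch of my own, the flower graph and costs are made up:)

      ```python
      import heapq

      def dijkstra(graph, start):
          """Shortest distance from start to every reachable node.
          graph: {node: [(neighbour, edge_cost), ...]}"""
          dist = {start: 0}
          queue = [(0, start)]
          while queue:
              d, node = heapq.heappop(queue)
              if d > dist.get(node, float("inf")):
                  continue                      # stale entry, a shorter route was already found
              for neighbour, cost in graph.get(node, []):
                  nd = d + cost
                  if nd < dist.get(neighbour, float("inf")):
                      dist[neighbour] = nd
                      heapq.heappush(queue, (nd, neighbour))
          return dist

      # e.g. dijkstra({"hive": [("rose", 2), ("daisy", 5)], "rose": [("daisy", 1)]}, "hive")
      # -> {"hive": 0, "rose": 2, "daisy": 3}
      ```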

      I personally think that a solid benchmark for AI is when it starts to solve mathematical problems that we just can't, and starts filling in some gaps...without brute-force maths...there are a lot of mathematical formulas we use all the time that are "just fine" but they aren't very precise (in a mathematical sense). When we start to see improvements in precision in areas of mathematics, that's when we start to see AI dragging us forward at breakneck speed...when you have mathematical precision, you save bucketloads of time testing and perfecting, and therefore tons of money and resources as well...while we focus on LLMs and other "simulated human-like" intelligence, progress in that direction will continue to be relatively slow...but we will get there.

      The scary part of this, which is what a lot of the experts are trying to get at, is that once we've built it, there is a good chance we won't understand it...if we can't figure out bees and better them at route finding, then we stand little chance of understanding an AI that can produce algorithms with insane levels of precision beyond our capabilities...it's not that AI might kill us, it's that we probably won't comprehend or understand it and it may not acknowledge us at all...killing us unintentionally...when was the last time you walked across your lawn and thought about the bugs in the grass you might be treading on?

      There's a good chance that once we crack AI, we become totally irrelevant to it, and we can't stop it because we can't understand how or why it thinks the things it does, and we don't understand what it is doing.

      I think this is unlikely, but the only way it becomes unlikely is if we start thinking about intelligence rationally and for what it is, rather than comparing one kind of intelligence to another...there may be things that LLMs can do that we simply can't understand because the only methods we have to test them are comparing them to ourselves...making them solve human-like problems...we just don't know...or at least, I don't...and I've spent quite a few years developing and training various different kinds of AI.

    7. Anonymous Coward
      Anonymous Coward

      Re: Anthropomorphizing AI

      The problem is that certain areas of the hype bubble are attempting to push the envelope even further:

      https://www.theguardian.com/technology/2025/feb/03/ai-systems-could-be-caused-to-suffer-if-consciousness-achieved-says-research

      As pretty much every engineer here surely can see, the proposition that we will accidentally create consciousness is utterly insane, and leads us down dangerous alleys.

      In order to make a machine that does more X, you need to be able to measure how much X your current iteration has, and measure the amount of X in your subsequent refinements. Natural selection is not selecting for any particular property per se, and so can happen upon consciousness, particularly over billions of years, if it confers some advantage.

      Current approaches to measuring for consciousness used to define rights for animals can easily be applied to machines, since they're entirely behavioural. However, the reason these inferences are valid in the animal case is simply that, as animals that are conscious, we can reasonably imagine that other animals possess this feature too.

      The fallacy of "If I make it more complex it magically starts to become self aware and able to experience pain" is terrifying to be hearing from fields that really, _really_ ought to know better.

      When I gun down a crowd of NPCs in a video game and they start screaming and running away, at what point do I get charged for murder? Has everyone gone insane?

      1. Anonymous Coward
        Anonymous Coward

        Re: Anthropomorphizing AI

        It's a two way street though, linear algebra and calculus can certainly cause people to experience pain in the form of headaches and intense confusion...

    8. TheBruce

      Re: Anthropomorphizing AI

      I would claim that 90% of all replies on The Reg are good indications of D-K cognitive bias (including this one).

      1. Anonymous Coward
        Anonymous Coward

        Re: Anthropomorphizing AI

        I would tend to agree...but only in so much as people posting stuff with their name on it are at the start of the curve and the ACs are at the end of it.

  3. ecofeco Silver badge

    Aren't they all deepfakes?

    Or rather, they all fall short in some way.

    Deepseek's biggest draw right now is price. And open source. Which means whatever shortcomings it has, everyone is free to try and fix them and improve them.

    1. teknopaul

      Re: Aren't they all deepfakes?

      Deepseek is not just an LLM. This is the point. It makes some attempt at reasoning. US offerings are just LLMs that sound well spoken but happily recommend one cig per day.

      I suspect that some people in the US like the fact that LLMs repeat the most common lie.

      A Chinese option arriving that at least attempts to reason threatens their hegemony on the sort of "truth" that requires doublethink to believe.

      <conspiracy />

      1. Anonymous Coward
        Anonymous Coward

        Re: Aren't they all deepfakes?

        Yeah, the US has really fucked itself with its "moats", "returns" and property protection. What China has done is release something in what was supposed to be the spirit of OpenAI...it doesn't matter that it might not be as good, because really no LLMs are perfect right now...what matters is that any nation or organisation with a relatively small budget can now take what Deepseek has released and run with it, improve it, reduce the costs etc etc...which is great for us as consumers, because competition is always a good thing...but it sucks for the US because they hate competition...on the flipside though, the US lawyers will love it, because it's potentially decades of long drawn-out lawsuits.

        I've put £50 on OpenAI only existing as a patent troll inside another patent troll organisation within 10-15 years.

  4. John Smith 19 Gold badge
    Thumb Up

    Impressive *multi-level* optimisation effort.

    It's said that premature optimisation is the root of all evil (Knuth?) but LLMs have been around for some time.

    Dedicating a small number of cores to handle inter-chip IO. Very neat.

    The question of course is how much of their source data has been pre-processed by other modules and systems not discussed.

    Because if you were a hedge fund and you knew you could chop >10% off a stock's price, and that it would regain that price when the work was discredited, you have set the stage for a little manoeuvre that I like to think of as the "upside-down pump-and-dump."

    Of course, if they really have done most of the heavy lifting on just this hardware, then the AI "Industry" is in for the sort of shakeup that the makers of Zantac experienced when it turned out most ulcers are caused by the body reacting to H. pylori infection, and that they could be cured with a simple (and cheap) cocktail of ingredients, making their product basically redundant.

    Either way well played DeepSeek.

    1. Anonymous Coward
      Anonymous Coward

      Re: Impressive *multi-level* optimisation effort.

      >Either way well played DeepSeek.

      We are truly in the golden age of Schadenfreude.

  5. Anonymous Coward
    Anonymous Coward

    You put the best crap in

    You pull the best crap out

    You con a lot of punters

    And you turn about (and run)

    1. Alumoi Silver badge

      Re: You put the best crap in

      Ode to Windows?

  6. andy the pessimist

    Thank you for a decent explanation of the deepseek improvements.

    Using clever processor partitioning to improve performance by a ratio of 3-4 is good.

    Can the rest of the improvements be replicated?

    It keeps OpenAI on their toes and prevents a monolithic AI process.

    The conversational explanation makes it better.

    1. HuBo Silver badge
      Windows

      The TNP piece (linked under "DeepSeek here") goes through quite a bit of detail of the tech involved, and also links to the 53-page tech report that further explains it all (under "the architectural details of its DeepSeek-V3 foundation model"). The DualPipe algorithm sounds interesting but also the fact that they're training the hefty model in mixed precision, and particularly FP8 (as Tim explains).

      Looking at Figure 6 of the "architecture details" paper (p. 15) one sees that they run the MMAs (Matrix Multiply-Accumulate, represented by circles with an X in them iiuc) at FP8, which surely speeds things up nicely compared to FP32*FP32 that may be more common in training to avoid stalls, crashes, and corruption, especially with the more rotund customers. The savory secret sauce seems to be to keep weight gradients and master weights at FP32 on the menu, as they are updated through epochs of training, and downconvert them to FP8 only where feeding the MMAs.

      This is why they recommend (per TNP) that it would be great if the busboys and poets in tensor units could downconvert and feed MMAs straight from the HBM kitchen, in one shot, rather than have those be consecutive ops, like: gather, downconvert, scatter, re-gather-&-feed, which involves excessive moving about! (imho)
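
      The recipe, as I read it, boils down to something like this toy numpy sketch (my own, not from the paper; float16 stands in for FP8, since numpy has no FP8 type, and the loss is a made-up placeholder):

      ```python
      import numpy as np

      # Toy sketch of the mixed-precision pattern: master weights and the weight
      # update stay in FP32; only the big matrix multiply runs in low precision
      # (float16 here, standing in for FP8, which numpy doesn't have).

      rng = np.random.default_rng(0)
      master_W = rng.standard_normal((256, 128)).astype(np.float32)  # FP32 master weights
      x = rng.standard_normal((32, 256)).astype(np.float32)          # FP32 activations
      lr = 1e-3

      for step in range(10):
          # Downconvert just before the multiply-accumulate...
          W_lo = master_W.astype(np.float16)
          x_lo = x.astype(np.float16)
          y = (x_lo @ W_lo).astype(np.float32)   # low-precision multiply, result cast back to FP32

          # Made-up loss purely for illustration: loss = mean(y**2) / 2
          grad_y = y / y.size                    # dLoss/dy
          grad_W = x.T @ grad_y                  # weight gradient, computed and kept in FP32

          master_W -= lr * grad_W                # ...while the master-weight update stays FP32
      ```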

  7. Dave559

    Meh

    "blown away nearly a trillion dollars in stock market value from Nvidia, Microsoft, and Meta."

    …and nothing of value was lost.

    (And add me to the votes for: "If it wasn't worth putting in writing, then it certainly isn't worth us wasting the far longer time it takes to listen to someone waffle on about it in audio".)

    1. ComputerSays_noAbsolutelyNo Silver badge

      Re: Meh

      The stock market reaction goes to show the irrationality of investors and "The Market".

      Maybe we should move all the economics textbooks, which are based on the theory of Homo oeconomicus, the rational human actor in the markets, from economics to science fiction.

      1. Anonymous Coward
        Anonymous Coward

        Re: Meh

        @ComputerSays_noAbsolutelyNo

        You mention "irrationality"..........................

        And you wonder about "irrationality of investors"..............

        Have you noticed various politicians? Say....George Osborne, Boris, Liz Truss, The Donald......................

        Maybe your definition of "irrationality" is a bit different from mine...................

      2. Anonymous Coward
        Anonymous Coward

        Re: Meh

        It shows me how much people want to invest in the tech sector, but just how boring it has become in the West.

    2. Shuki26

      Re: Meh

      Definitely. 'Losing billions in the stock market' is just too dramatic, alarmist and shallow. Better to say something like, 'went back to Nov 2024 price'.

  8. I Am Spartacus
    Flame

    "The biggest short in history"

    Bet the SEC are looking at who shorted nVidia last Monday morning.

    1. streaky
      Black Helicopters

      Re: "The biggest short in history"

      Pretending the SEC ever looks at the tape to figure out who did what.

      Not on the cards.

      If they do, they won't get to it until 2045 - and that's not a hyperbolic guesstimate. And when they do, it'll be a speeding fine that's just a fraction of what they made.

      The good news is hedge funds and pension funds lost a lot of money last week and retail investors bought the dip and made a lot of money when the market realised it was being dumb.

      There are actual reasons - Nvidia is a half-trillion-dollar company tops, accounting for potential future growth, not the >3 trillion company the market thinks it is.

  9. Anonymous Coward
    Anonymous Coward

    Doing Something???

    @Iain_Thomson

    So now there is "new legislation" to "prevent" AI from being used for "images of child abuse"...........

    Ha........ignorant elected folk in Westminster "doing something"...........................

    (1) Is there any budget for enforcement?

    (2) Is there ANY possibility at all of enforcement?

    ....Yup...."No" on both counts.............................

    But, of course, the elected folk in SW1 are "doing something"...............................

    ......and, of course, that makes us all really happy.........someone, somewhere is "doing something"............

    1. seldom

      Re: Doing Something???

      It simply means that if you have a device capable of creating "images of child abuse" (all computers, phones, tablets) and the government doesn't like you, you will have to prove your innocence.

      Ask the postmasters how well that works.

      1. JulieM Silver badge

        Re: Doing Something???

        According to UK law (passed since the home media release of the film), the introductory sequence to The Simpsons Movie constitutes "images of child abuse" -- even though it is animated in a non-photo-realistic style, and there is neither any reason to suppose any real child was ever abused in the making of it, nor any chance of it being mistaken for a true and accurate representation of reality.

        I'm not linking to it, for obvious reasons; but the unexpurgated version is available online.

  10. Grunchy Silver badge

    Ehhh it’s ok

    I made an LXC on my PVE (Linux container on Proxmox) and fired up a 70GB DeepSeek; I guess it's OK. I still never got my cheap nVidia P4s virtualized, so I ran it purely on CPUs.

    Computerphile made a good video about DeepSeek which inspired me to get it running.

    https://youtu.be/gY4Z-9QlZ64

    What good is it? It’s an interesting spectacle (actually, it’s damn neat!)

    The output is definitely recognizable as pure AI, so it’s not all that compelling. It’s the technology that’s fascinating; the product itself is disposable.

  11. This post has been deleted by its author

  12. pink_unicorn

    H800 illegal?

    H800s are definitely not illegal in China.
