You'll need a drone...
... To even catch a glimpse of the horse which bolted a long time ago.
The only Chinese still wanting to buy USian GPUs are hard-core gamers.
Chinese web giant Tencent doesn’t mind if Washington doesn’t let it buy more American GPUs, because it already has all the chips it needs. During the company’s Q2 earnings call, a financial analyst asked whether the USA’s recent decision to allow Nvidia and AMD to resume GPU sales to China will impact Tencent. Company president …
Hardware-wise, the Chinese are doing well. The bottleneck is the software.
If people knew exactly what they needed to run, they would use FPGAs or ASICs, as happened with mining. GPUs are used not because they are the fastest tool but because they are easy to program: CUDA came after BrookGPU and research environments full of cheap students, and it stuck because it was well documented and easy. Now there is a lot of legacy code that nobody understands, but it works on GPUs with CUDA.
Huawei and AMD would sell GPUs like candy if they wrote better documentation. The GPU programming space is outrageously fragmented because, after so many years, there are too many political interests in boycotting standards.
ZLUDA is proof that the bottleneck is not technical.
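To make the "easy to program" point concrete, here is a minimal sketch of the canonical CUDA vector-add kernel (standard CUDA C++; the file name and values are just illustrative). Compare the handful of lines below with the HDL, synthesis, and timing-closure cycle an FPGA flow would demand for the same job:

    // vadd.cu -- illustrative toy example, the "hello world" of CUDA
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vadd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified memory: host and device share the pointers, no manual copies
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // grid of 256-thread blocks
        cudaDeviceSynchronize();                     // wait before reading on host

        printf("c[0] = %f\n", c[0]);                 // expect 3.000000
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compile with nvcc vadd.cu -o vadd and it runs. That low barrier to entry, plus the documentation, is most of CUDA's moat.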
Fortunately for our continued world domination.
While it's trivial to set up and operate the 2nm fabs needed for cutting-edge GPU hardware, and to build the lithography machines those fabs require, there is no way a country of 1.4Bn people whose universities turn out 5M STEM graduates a year could ever compete with the home of ITT Tech and the University of Phoenix.
We are so confident in our unsurpassed leadership in science education we can afford to crack down on pointless degree-mills like Harvard, UCLA and Columbia.
Not sure whether you're talking about a hardware bottleneck facing the CCP, or a hardware bottleneck preventing the widespread, profitable LLM usage that would match the hype anywhere in the world.
I would say that hardware is definitely the bottleneck, and that only exponential increases in hardware energy efficiency will allow linear progress in AI/LLM software.
Historically, as silicon progressed with Moore's Law, there was linear progress in software, which took advantage of the opportunity to be less efficient and more generalized: machine language to compiled languages to interpreted languages, ever more complex frameworks, and so on.
I think only hardware breakthroughs that restart Moore's Law will let AI/LLMs progress profitably beyond marginal gains. That hardware might be optical computing, analog chips based on thermodynamics, or something else; I would guess not quantum computing.
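A back-of-envelope way to state that claim, assuming the empirical power law relating language-model loss to training compute reported by Kaplan et al. (2020) (the exponent below is roughly their value, quoted here as an assumption):

    L(C) \approx (C_0 / C)^{\alpha}, \qquad \alpha \approx 0.05

Under that law, halving the loss requires multiplying compute by 2^{1/\alpha} = 2^{20}, roughly a factor of a million. Each fixed step of capability costs an exponential step of compute, which is exactly the Moore's-Law treadmill described above.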
>there was linear progress in software
I don't think this is true for AI. Half a century of hoping you could recreate a brain in LISP produced little beyond a few parlour tricks like ELIZA.
Then transformers came along, and five years later we are talking about no longer needing junior programmer/lawyer/banker jobs.
Transformers were not possible 15 years ago because the hardware wasn't there; that's why Support Vector Machines were considered state of the art at the time. There were many iterations of GPU hardware/firmware and NN software on the way to transformers.
> talking about no longer needing any junior programmers/lawyers/bankers jobs

That's the hype, anyway. Re junior programmers: sometimes it takes a decade to realize the mistake made ten years earlier, when you thought record profits meant there was no longer any need to nurture new talent (cf. Boeing, Intel).