Weasel words
"Managed to block", indeed. You make it sound as though the USA gently asked. The USA coerced ASML.
And Canada making this report. Ha bloody ha. What chipsets does Canada make?
If Canada had any balls it would stand up to the USA.
Despite concerted efforts by China to bolster domestic semiconductor production in defiance of US trade policy, new evidence uncovered by Canadian research outlet TechInsights suggests SMIC, the Middle Kingdom's top chip manufacturer, remains generations behind the rest of the world. In a research note published Monday, …
I'm not sure what you're getting at, since the entire point is that TSMC, Samsung, and Intel had 7 nm earlier and now have newer methods, though of course they still use 7 nm. Also, it wasn't comparing all of Asia, but China, and specifically SMIC, to arrive at the "generations behind" claim. TSMC usually has had the most advanced manufacturing for the last few years, so that they're further along than Intel is not really a surprise.
We could also debate the relevance of that, because in many cases it doesn't matter that the chips are built on 7 nm tech. Still, if you need something smaller for some reason, TSMC can do it and SMIC can't yet, hence they are actually behind. Which itself is also not a surprise, because until recently China was buying many of its advanced chips from TSMC, so we've only recently seen the results of throwing a lot into building that capacity.
Really? Carney is still 'hoping for a deal' as he wrings his hands. He'll still be hoping for a deal as the maple flag is pulled down.
Fatman Ford, 10th of June, puts tariffs on electricity exports, says he will not back down.
Fatman Ford, 11th of June, folds like the cheap suit he wears and suspends them. What did Fatman Ford get? Compliments from Taco man, who called him 'strong'.
No, Canada has not been telling the USA to fuck off.
Then you cite two years.
Looks to me like China is doing fine for a country that Trump has decided to block at every turn.
So they're two years behind? The point is they're catching up. Plus China doesn't need the US market (or Europe, for that matter). China has plenty to do at home, and time to get it done.
That is one advantage of government the Chinese way: it doesn't change direction every four to eight years.
The 2-year-old process almost certainly means worse power consumption for the same performance than current bleeding-edge chips, but the important question is how good the chips are at doing their job, not what process was used to make them.
Also, at home I use a *7* year old laptop, which struggles a bit running Blender, but is otherwise good for anything I want to do - in a few years' time, when my current laptop breaks, I'll move to one with a more recent processor. (Perhaps one made 2 years ago, in 2023...)
I am all for progress. But the fact is most people do not need the very latest chips. Can you type quicker with today's chips compared with those from 5 years ago?
Sure, newer chips can do more. More useless things as well as more useful things. Apple Intelligence on the Mac is not available on older Intel chips. Given the absolute crap show that Apple Intelligence is, do you need a faster chip?
Quite. A *lot* of demand for the latest and greatest is due to reviewer / "influencer" hype, the absolute BLIND focus by braindead reviewers who decide that if your latest phone doesn't run the latest game at the highest FPS available, it's a piece of garbage and needs to be utterly ignored.
And I'm SICK of it. I don't play games and I couldn't give a rat's rear end if the phone plays your stupid games at 2 FPS better than Brand Y. Same with laptops and desktops - the main benchmarks are game tests. Couldn't give a crap. I'm looking at productivity, both phone and laptop, which is why I'm using a phone that reviewers pretty much absolutely hated (only a midrange chip! On a flagship phone, OMG!!) but which, long term, we simply love.
It's the same with So. Many. Industries. Single-minded reviewers who feel they get to decide trends.
Don't get me started on the stupidity of ADV motorcycles for the average urban rider... >:/
Like I said, don't get me started >:/
Like the 2010's media hype (here in America, at least), that you "Need a literbike because they are easier to ride!" (see searches like https://duckduckgo.com/?t=ffab&q=literbike+easier+to+ride&ia=web). A beginner or intermediate rider "needs" a liter [sport]bike to kill themselves faster, thankyouverymuch.
And for the last, say, 8-10 years, thanks to the R-GS, *everyone* needs an ADV!! Yes!! Even though here in America the CDC's average height for males across all generations is 5 foot 8 (for ages 18-40 it is 5 foot 9.5), and the median motorcyclist age in America is 50... they are pushing motorcycles with 34-inch+ seat heights. Don't you NEED a 500+ pound motorcycle that comes up to chest level? How about the media's DARLING, the KTM 1290 Adventure R - 540 pounds, 160 HP, 33 to 34-inch seat, no reliability, manufacturer teetering on bankruptcy as engines blow up underneath the poor suckers convinced to buy one? Or a BMW R-GS Adventure - don't you want a 34.3 to 35-inch seat on a 593-pound "adventure" bike when you're doing off-road??
It's the most moronic thing I think I've ever seen. Buy a bike way out of your league, get told you're doing the right thing. But at least they get to sell you $24,000+ bikes (and trust me, I should know, I'm riding a damn '14 Valkyrie because, when I went new-bike shopping, I wanted a lightweight touring bike. Only BMW, with the F800GT, made one at the time. All the industry wants to do is sell up, more cost, more weight, moar powa! Don't you want a monster in your garage??)
Our daughter's father in law is a sucker for this sort of thing -- he likes "the latest". His current ride is a K1600, that pricey but rather nice six cylinder BMW sport tourer. It goes with his other (pricey) toys. I'm a little older than him, I think, and may have a better developed survival instinct because I aged out of my (heavy) Honda ST and replaced it with an inexpensive Royal Enfield twin, what I call my 'geezerbike'. It complements my old (71) Triumph -- another light, low and very rideable machine.
So I guess that makes me a Really Bad Consumer.
(The ST1100 is a remarkable all-rounder. It's a heavy machine, especially when fully fueled, but under way it's remarkably light on its feet. Fast, too -- you get places remarkably quickly on it. It's also incredibly reliable. I kept mine for a bit over 20 years before passing it on, and it was still in as good condition as when it was new. Not bad for what is now quite an old design.) (The son-in-law pounced on it, BTW.) (Which just goes to show that if something's designed right in the first place then it just works. Even the styling never seemed to get out of date.)
Yeah, I tried an ST1100 once - beautiful bike, almost what I would want, top heavy as a big-chested opera singer :p.
Strangely, that's why I'm riding the Valk. It feels so much lighter than the scales say at rest, and once you're above 3 MPH it somehow, utterly by magic, loses 150 pounds. It's amazing - Goldwings in general are like that - and a huge surprise until you ride one.
However, my life took a massive downturn and I'm not feeling the urge to travel much any more. Maybe I'll sell the Valk and get something smaller for the shorter trips I may end up taking, a Guzzi V100?
Yup, my new laptops are refurb'd T14s gen1. An i7 and a Ryzen. The older two, a T490 and T460 are fine doing what they need to.
As for lower power usage: mostly they are plugged in, so that is not a concern, and I am with Snake on everything else. I may play an old MAME game or two, no games on the phone, and the only reason to upgrade the phone last time was a naff screen on the Samsung. The only compelling reason to change a phone now will be a better camera when I go on our next big holiday - two years away.
If I was still encoding Blu-ray rips or doing some intense processing then maybe. And I'm not worrying about games - would a serious gamer use a laptop? I doubt it.
Broadly agree with your post, but ... "The 2 year old process almost certainly means worse power consumption for the same performance" - that has not always been the case. In fact there have been instances where older (tuned) processes are used precisely because they are more power efficient than the bleeding edge.
Warning, unpopular opinion:
But we need to be much more intelligent about the "power consumption" concern - if you spend the lucre to upgrade to the latest because the media influencers tell you about "power efficiency", then, as with automotive fuel consumption, exactly how long will it take to recover the investment in power savings?
Years, most likely.
It's a metric that sounds good on paper but can have poor ROI in the real world, unless you're 24/7-ing it. It's like another conversation I've paid attention to, mini-split HVAC efficiency: when asked "Do I set back or keep the same temp?" you get answers like "You shouldn't set back! The COP will go from 4-5 to maybe as low as 2!" What meaningless drivel!! COP is a metric that makes you sound so, so important and knowledgeable, but the question is whether you save money at the end of a timed cycle. "COP" doesn't mean one damn thing about how much total energy is used or any savings, and talk of power efficiency on chips sounds nice, but will you actually have anything to put back into your pocket after doling out for the new kit?? And what about the value of your time? Can that chip, eating more power, save you time and therefore improve your overall productivity and quality of life?
Bah, humbug! [grouchy old man mode, deactivated]
A 2 year old machine without AI crapware is probably faster than the latest one with it, and all the better for not having any. Or Recall.
Besides, 2 year old, even 10 year old machines are fine for most users. Those mug enough to subscribe for their online webware could be using 20 year old dumb terminals.
For the majority of the market, the last decade of speed increases have just been a way for corporates to waste money. Already, pre-AI or LTSC machines are more desirable, protecting individual users and companies from additional risk. MS have polluted the future of tech with AI and Recall.
"A 2 year old machine without AI crapware is probably faster than the latest one with it, and all the better for not having any."
That is irrelevant to any discussion of power efficiency unless that AI is absent from the old machine and completely un-turnoffable on the new one. Neither of those is true: a two-year-old machine will get any built-in AI functions the new one does, and both of them can be disabled equally easily. If, at some future point (because they haven't yet), they make it impossible to turn something off, they're not going to make it easy to turn off just for people with a different processor than yours.
Since we're talking about desktops or laptops, imagine their relative performances when wiped and a suitable Linux distro installed. No AI to complain about. Then, the actual power efficiency numbers can be considered. In most cases, the machine that's two years older will be just fine, and the difference will be small enough that it's not noticeable. That might be a little different if you're running ten thousand of them, but even then, having to buy new hardware to get that minor benefit is likely to mean you accept the costs of the less efficient processing rather than the costs of ten thousand new machines.
That isn't as true when the machines you compare to are a lot older. I've known people who were running 2009-era machines with a Core 2 Duo in them at an average power consumption of about 120 W. They were getting about as much performance out of them as a Raspberry Pi 4, and possibly not needing all of that. A Raspberry Pi 4 can run on about 10 W. So in that case, the efficiency savings would be noticeable and would cover the cost of the new hardware. For example, at the average UK power price of £0.245/kWh, a power saving of 110 W, and a purchase price of, let's say, £60 for the Pi and some accessories I assume you don't already have, the time to recover costs is about three months.
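For anyone who wants to check the arithmetic, here's a minimal sketch in C of that payback calculation, assuming constant 24/7 operation and using the figures quoted above (illustrative numbers, not measurements):

#include <stdio.h>

int main(void) {
    /* Figures from the example above; adjust to your own tariff and kit. */
    double price_per_kwh  = 0.245;  /* GBP per kWh (average UK price)         */
    double power_saving_w = 110.0;  /* old box ~120 W vs Raspberry Pi 4 ~10 W */
    double hardware_cost  = 60.0;   /* GBP for the Pi and accessories         */

    /* Energy and money saved per day, assuming the machine runs 24/7. */
    double kwh_per_day    = power_saving_w * 24.0 / 1000.0;
    double saving_per_day = kwh_per_day * price_per_kwh;
    double payback_days   = hardware_cost / saving_per_day;

    printf("Daily saving:  GBP %.2f\n", saving_per_day);
    printf("Payback time:  %.0f days (roughly %.1f months)\n",
           payback_days, payback_days / 30.0);
    return 0;
}

Run it and you get roughly 93 days, which is where the "three months" above comes from; halve the duty cycle and the payback time doubles.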
> unless that AI is absent from the old machine and completely un-turnoffable from the new one. Neither of those is true;
Turning off AI might be "easy" where "easy" means: install Linux. With Windows and Apple (and, judging by the advertising, Android is going the same way), completely turning off the AI is something they don't want you to do…
> Besides, 2 year old, even 10 year old machines are fine for most users.
As evidenced I suspect by the number of Windows XP, Vista, 7 and 8/8.1 systems still around.
My current specification for an office laptop capable of running Teams etc. is based on the 2020 L15 ThinkPad AMD gen 1, which Lenovo were selling for circa £800. I use this specification as it is what I use myself. For really demanding stuff I switch to the desktop.
I've long been a fan of benchmarks, like PassMark, which give some means of comparing single-thread performance - which really impacts the performance of many Windows applications - and multi-thread performance, which impacts how many applications you can have running before user-perceived performance degrades.
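As a toy illustration of looking at both axes (the scores below are made-up placeholders, not real PassMark results, and the machine names are hypothetical):

#include <stdio.h>

struct cpu {
    const char *name;
    double single_thread;  /* single-thread score: day-to-day snappiness   */
    double multi_thread;   /* overall multi-thread score: background headroom */
};

int main(void) {
    /* Hypothetical scores for illustration only. */
    struct cpu old_box = { "2020 office laptop", 2200.0, 10000.0 };
    struct cpu new_box = { "2024 office laptop", 3400.0, 22000.0 };

    printf("Single-thread gain: %.0f%%\n",
           100.0 * (new_box.single_thread / old_box.single_thread - 1.0));
    printf("Multi-thread gain:  %.0f%%\n",
           100.0 * (new_box.multi_thread  / old_box.multi_thread  - 1.0));
    /* If the single-thread gain is modest, the per-application feel of most
       Windows apps won't change much, however big the multi-thread number gets. */
    return 0;
}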
I was in Guangzhou in 2016 where they were busy building their metro (now 17 lines and >350 stations). At the time, my Chinese host said that they had built 6 new lines in just a few years and many more were underway. He said that just couldn’t happen in the UK.
Now whether you’d like to live under China’s political system is another matter, but when you look at any recent major UK infrastructure project, it’s hard to argue against his point.
I suspect that many people will look at Chinese tech and think “It’s good enough”.
> Now whether you’d like to live under China’s political system is another matter, but when you look at any recent major UK infrastructure project, it’s hard to argue against his point.
I suspect a lack of public enquiries, and if you object they just build the motorway either side of your house (not sure if that is still there now). NIMBYs also probably don't stand a chance, and we need to go back to the Victorian era of building the railways, where they just demolished anything in the way.
From the article... SMIC has been producing the 7nm node for 2 years. TSMC started producing chips on 5nm in 2020.
Reading comprehension is hard, apparently, but 5 nm is better than 7 nm and 2020 is five years ago. So SMIC are more than two years behind and not catching up.
The thing is, I haven't needed a bleeding-edge powered computer for nearly ten years! Unless I'm doing hard-core gaming, virtually any reasonably current CPU is far more than adequate, right down to the previous generation of AMD's basic 6-core processors for entry level gaming.
It is also worth remembering what the massively multi-processor systems used to do when CPUs were slow and kludgy (think back to the '386 and '486 days, when multi-processor motherboards were the rage instead of multicore chips).
But the rah-rah propaganda insists that everyone claim China is "so far behind" the world that they "can't hope to catch up."
Bullshit and balderdash. The Chinese and Japanese are notoriously good at implementing and improving on "western ideas".
Take a look at what kind of processors NASA and the military actually use in order to provide the radiation hardening required. They are not running sub-7nm processors! Yet that is the area where the brain-damaged Drumpf & co. claim that the Chinese are a "threat." (Not that I'm surprised Drumpf is a complete and utter moron in this regard, too. The man never has had a "shining intellect.")
The yields of the best chips that SMIC can produce are probably low, and those chips will be destined for Huawei's phones. There's no need for them in notebooks, and China has no restrictions when it comes to building data centres, but it's already shown a keen interest in developing better AI models that use less power, and it's happy to make them open source to put pressure on commercial competitors.
And US foreign policy is likely to encourage more cooperation in Asia as countries seek to hedge their bets.
You are right: the vast majority of us common individuals have no need for a CPU beyond the capabilities of email, web browsing, document editing, etc. Whatever China is slinging out currently is easily able to keep up with that. But super-high-speed CPUs for LLM development and high-end processing - that's where current Chinese development is way behind.
Not having the latest and greatest means the ecosystem (software) needs to program smarter to get more features at equal or lower latency from devices.
The same thing was observed with gaming studios emerging from the former Eastern Bloc: because those developers and their market didn't have the latest and greatest, their engines ran on very humble hardware.
If your market is dominated by hardware performance doubling each generation, then why should software improve?
You see this in Microsoft, where the OS gets slower on each iteration, because new hardware.
Arguably Linux is bucking that trend where performance regressions are counted as true regressions, but its memory footprint is not decreasing.
See also Deepseek & co emerging when China gets cut off from NVidia chips.
Years and years ago, a colleague and I developed on the fastest desktops we had - wheeeee.
Our main test desktop was an old, ancient, no-longer-used, well under-spec unit. We made sure it ran fast on that before passing it to users.
Even now, with SaaS etc., the same applies. At one point, at a client, we tested the mobile app on a crap mobile with limited screen size and data transfer speed. We optimised where we could, we compressed where we could. It can be done, just not often. Desktop users at that client loved it.
Agree with you, but if you rephrase the question as being behind "The West" then the piece makes more sense. I work with a few ex-Chinese engineers and, leaving out the politics and way-of-life portions, some of the stats from China are mind-boggling: 1.4 million engineering degrees awarded in 2020, and the engineers I have met are not diploma-mill engineers - they have been taught and trained and then run through an apprenticeship-style introduction to the workforce, resulting in people who not only have the paper credentials but also the real-life practical chops to go with them. From the conversations I have had, the Chinese recognized over 10 years ago that their engineers were book-smart but weak on real-world applicability, hence a pivot to ensuring the engineers are paired up with experienced people. And culturally, it appears to me as an outsider that the team takes responsibility rather than the PM. Of course, like every scenario, there are plenty of situations where cheap and nasty wins the day, and corruption and backhanders are par for the course, but when they need to make a high-quality repeatable widget, it is no bother to them.
For a work related assignment I was a part of a team reviewing a potential widget produced from a well known Chinese manufacturer. When it came to quality control versus price it was a very straightforward conversation, how many do you want, how fast, how good?
Now move the sliders on each of those to get to your price point. And by sliders I mean just that: they had their software running modelling simulations, and we, not the manufacturer, got to play with the sliders to get the optimal ratio we wanted to pay for. The metrics surrounding the quality were very good, as good as anything I have ever seen. And being so vertically integrated, if we so desired we could have evaluation from raw materials forward to finished product. The last time I saw anything close to that level of end-to-end quality control, it was going into space. What astonished me was that it was built into their daily process, and then they turned it down to meet the customer's needs/price. The only potential drawback I saw, and this really depends on your point of view, was that they tended to over-engineer: I think, and this is terribly subjective, they start from a position of building something that is wildly over-engineered from a strength/capability perspective and then dial it back. Sandy Munro did a few YouTube videos recently on the BYD pickup and spent considerable time lamenting how much money they were wasting with over-the-top beams, attachment points etc. As a potential purchaser I am more than happy to buy a product that's over-engineered rather than built to a price point.
Building advanced semiconductors might be very difficult, but the same can't be said for operating systems. Anyone can make one, even in the comfort of their own home, the limitations being merely intellectual: how much ingenuity, labor and organization you can put into the project. China's got huge reserves of labor, so the only barrier to making an OS product is the build/buy tradeoff -- everyone's using 'X' so there's no demand for 'Y'. We fixed that for them by effectively barring them from Windows. As any Linux user knows, you don't need Windows for day-to-day tasks - it's always been a resource hog - but it remains widely used because of commercial inertia.
OS technology has been largely neglected for decades because of the ready availability of high performance hardware. So the three systems in wide use -- Windows, Apple/BSD and Linux/Android are really quite old architectures that have been highly refined over the years but still reflect thinking from the 60s and 70s. They're vulnerable to anyone who turns up with a more advanced design who can get it widely adopted. This can easily compensate for not having 'the latest' hardware -- cheaper, too.
And yet, no-one seems to have come up with a better way to write an OS in the last fifty years. (There would not be a problem to "get it widely adopted". We've seen several new form-factors for computing devices in 50 years and several new OSes, but the survivors always seem to end up looking like UNIX.)
Your line of reasoning is fundamentally flawed. In order to make your point (namely that Unix be the truth, the light and the only way, from eternity to eternity, Amen) you posit that any "better" (by what metric?) contender would not have had any problem to become "widely adopted", ostensibly proving that no OS design that is different from that of Unix could *ever* be viable, both technologically and commercially.
Now, as everyone who has been observing the IT market for the last four decades or so will easily see, there are really two major hurdles that *any* novel operating system that wants to be a commercial success needs to overcome: (1) legacy application compatibility and (2) hardware support. Next to the questions of "Will it run on hardware XYZ?" and "Will it run our favourite/mission-critical applications?" that hardware vendors, system integrators and end users are facing alike, most other issues fade very nearly into insignificance. In particular, this is a fundamental conundrum that traditional and conceptually novel operating systems face alike: it affects BSD Unix in the same way as, say, seL4.

The vendor lock-in created by the double requirement of ABI backward compatibility and device driver availability can only be broken under special, and exceedingly rare, circumstances, and these are largely independent of the technical properties of the new OS on the block. In fact, almost all new operating systems that have successfully penetrated the market in the last decades have been "exploiting" one of two classes of opportunities:
(1) A vendor already firmly established in the market uses its power to push a new development: Windows NT, OS X.
(2) A fundamentally different class of computing device (or in your words: "a new form factor") newly appears on the market: iOS, Android.
The one exception to this general rule would appear to be GNU Linux, and it is true that its path to acceptance in the data center was a bit special indeed, its eventual success resting on the interaction of three rare circumstances:
(a) It ran on a class of computing device (to wit: commodity x86 servers) that was not fundamentally different but substantially less costly than the "tin" of the incumbent brands against which it was competing.
(b) The market it penetrated was still rather fragmented (AIX, OSF/1, HP-UX, SunOS/Solaris being the big names); it was not facing a true vendor monopoly.
(c) The applications that were usually running on the incumbent systems could be ported to GNU Linux (actually: Red Hat Enterprise Linux) with little effort.
(My points (a) - (c) are corroborated by the fact that GNU Linux never managed to make a serious dent in the Windows market: it had none of those special advantages over Microsoft's offering, so Red Hat was in the end not successful on this front.)
To cut a long line of argument short: the operating system market is, in this day and age, static to the point of being set in stone, and it is only under very special (and rare) circumstances that a newcomer stands any chance of succeeding, which seriously stifles innovation. While some technical progress is being made in the academic sphere (e.g. the performance problems of early microkernel designs have long since been overcome by the L3/L4 OS family), we're not going to see any true innovation in the commercial sphere any time soon. Unless special circumstances intervene, that is. The forthcoming/already ongoing New Cold War might present one such opportunity (I briefly explored this idea years ago here: https://forums.theregister.com/forum/all/2018/04/23/risc_v_sel4_port/#c_3492837). And in China in particular, the trade war with the US and its satellites might well open up new opportunities. Only time will tell.
People keep thinking "POSIX compliant APIs" mean Unix and Linux. Microsoft's was the only intentionally crippled implementation of POSIX I ever dealt with.
Other rather wildly different systems like VAX-VMS and the AS400 have much better POSIX compliance than even some Unix variants, making the Portable Operating Systems Interface all the more relevant and useful.
Until we see some wildly different processor architectures, the current threads in a process model fits pretty much any OS variant to date.
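As a trivial sketch of that shared model (nothing vendor-specific assumed), the very same pthread calls compile unchanged on Linux, the BSDs, macOS and other POSIX-compliant systems:

#include <pthread.h>
#include <stdio.h>

/* Each worker is a thread sharing the process's address space. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("worker %d running in the same address space\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[4];
    int ids[4];

    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}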
New systems die almost immediately if they don't have source code compatibility with an existing trove of software, especially open source software.