Interesting
They certainly made good on their change of direction.
A former TSMC executive has described how a collaborative effort towards 450mm (18-inch) wafers for manufacturing chips was halted when the company realized it would put them in direct competition with Intel and Samsung. Chiang Shang-Yi, former co-chief operating officer of TSMC, is credited with expanding its R&D organization …
Now that TSMC is a big fish in the ocean, maybe they will consider 450mm wafers again.
After all, they are outcompeting everyone else out there, regardless of size.
I'm also guessing that if one of the big guys manages to get ASML and the other toolmakers to start work on 450mm, they will have an advantage, if the others don't follow.
450mm is probably inevitable; the scaling laws just work in your favour.
But now it will be TSMC forcing Intel to spend $$$$ to play catch-up.
What would be super ironic is if Intel convinced the US govt to pay them $$$$ to build 300mm "cutting edge" fabs in the USA just as TSMC opens a 450mm line.
Way back in the 1990s I went to a conference where I watched a talk by Gordon Moore. One of his observations was that, if you plotted the cost of a wafer fab against the size of the wafers it used, and compared this against the total revenue generated by the fab in its working life, then a 15" fab would break even over its life, while an 18" fab would lose money - it would never recoup its construction and running costs.
Now, this was the 1990s, and 12" fabs weren't even a thing at the time, but there are still economic limits. As wafer size has increased, the number of fabs (and companies operating them) has decreased, so any company making tools to go in the fabs has seen its market shrink. The cost of the tools has gone up, of course, but when you only sell a few a year, losing a single customer can jeopardise the whole company. So it becomes a risky market to do business in, and the number of suppliers has reduced as wafer sizes have increased from 6" to 8" and (especially) to 12".
Since there are fewer suppliers, the number of people with the skills to design, build and maintain the kit decreases. Hence the tools become even more expensive. A major customer may even find themselves in the situation where they over-pay for a machine, in order to keep the supplier solvent (happened to a company I worked for in the mid-2000s - the customer told our sales guy to increase the quote as we had quoted too little).
If the world can only afford one 450mm fab, then there's not really a business.
It also depends on what chips you are making.
With larger chips (i.e. server-class CPUs) the larger wafer wastes less space around the edges - think of tiling square chips onto a circular wafer. With smaller chips these edge losses matter less.
You used to make it back by putting a few smaller chips in the corners, since your process costs are basically per mm^2, but now, for sub-10nm features, the multi-layer process is so closely tied to the chip you are making that you can't mix and match easily at the high end.
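To put some rough numbers on the edge-loss argument, here's a back-of-envelope sketch in Python using the common die-per-wafer approximation. The die sizes are made up purely for illustration, not tied to any real product:

```python
import math

# Common die-per-wafer approximation:
#   DPW ~ pi*(d/2)^2 / S  -  pi*d / sqrt(2*S)
# i.e. gross dies by area, minus the partial dies lost around the circular edge.
def dies_per_wafer(wafer_d_mm, die_area_mm2):
    gross = math.pi * (wafer_d_mm / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Illustrative die sizes only: a smallish die vs a big server-class die.
for die_mm2 in (100, 600):
    d300 = dies_per_wafer(300, die_mm2)
    d450 = dies_per_wafer(450, die_mm2)
    print(f"{die_mm2} mm^2 die: 300mm = {d300}, 450mm = {d450}, "
          f"ratio = {d450 / d300:.2f}")
```

The plain area ratio between 450mm and 300mm is 2.25x, but in this toy model the big 600 mm^2 die gains roughly 2.5x, because proportionally fewer of them fall off the edge - which is exactly why the big-chip makers cared most.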
As for the business risk, I think that has all changed anyway. You have one maker of mask-steppers, one maker of light sources, one supplier of wafers and 2-3 customers for the cutting edge.
The wafers also cost more as they're more difficult to manufacture, and need more material (they have to be thicker as well as wider, to support their own weight without bowing). That thickness can become a problem, especially if it's a product where most of it has to be removed for rear access (e.g. image sensors).
The increased weight becomes an issue for mechanical handling of the wafers, and physically limits how many can be processed in one batch (unless you're talking single-wafer tools, but even then there are limits) by what the robotics can lift. Also, if they need to be spun for various reasons, their weight limits the maximum speed (spin too fast and they will rip themselves apart quite spectacularly).
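For a feel of the spin problem: peak stress in a spinning disc scales with the square of both speed and radius, so for the same stress budget the maximum speed scales as 1/radius. A quick sketch; the allowable stress figure here is a placeholder I picked for illustration, not a real process limit:

```python
import math

RHO_SI = 2330      # silicon density, kg/m^3
NU_SI = 0.28       # Poisson's ratio, roughly (it's orientation-dependent)
SIGMA_MAX = 50e6   # assumed allowable stress, Pa -- purely illustrative

# Peak hoop stress in a solid spinning disc:
#   sigma_max = (3 + nu) / 8 * rho * omega^2 * r^2
# => omega_max scales as 1/r for a fixed stress budget.
def max_rpm(wafer_d_mm):
    r = wafer_d_mm / 2000  # radius in metres
    omega = math.sqrt(8 * SIGMA_MAX / ((3 + NU_SI) * RHO_SI)) / r
    return omega * 60 / (2 * math.pi)

for d in (200, 300, 450):
    print(f"{d} mm wafer: ~{max_rpm(d):,.0f} rpm at the same stress limit")
```

Real-world limits are lower (edge flaws, chucking, and so on), but the 1/r scaling is the point: going from 300mm to 450mm gives up a third of your speed headroom before you've even started.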
Fundamentally though, especially for some of the less cutting-edge but still large-market applications (power devices or MEMS/sensors, for example), a lot of fabs are only recently beginning to move from 6" to 8", or from there towards 300mm (12"). 8" is something of a sweet spot for them, and in many cases the move to 12" is at least partly due to tool vendors focussing more on that size simply due to market forces from the big boys, so development on 8" toolsets can be limited.
I recall a lot of the discussions back then when all this was proposed, and how much investment and risk was involved (and who was to take the hit on both, and who was trying to offload it onto others). It wasn't a huge surprise when it all went quiet...
As background, I work for one of aforesaid tool vendors.
If that single customer is willing to pay the billions of dollars required to develop that tool, companies like ASML would be happy to oblige. (ASML has a nearly completed design for 450mm, but it's based on now-outdated tech, so it would likely need extensive redesign, or even a restart from scratch.)
There are barely half a dozen companies that could possibly use it - the three companies mentioned in the article, plus the big DRAM and NAND manufacturers like Micron, SK Hynix and Kioxia, and whoever the big NAND/DRAM makers in China are.
The suppliers aren't just ASML, who would be fine with this because they are making EUV scanners for a tiny audience of three companies anyway.
The problem is all the vendors making equipment for foundries on less cutting-edge processes, which would also be used in the wannabe 450mm fabs: all the wafer handling, cleaning and metrology kit, the DUV equipment for lower layers, and all the equipment in packaging that handles/tests/dices wafers.
Probably many of them would figure the potential customer base is so small they won't bother with it, and let someone else sink all that cash into R&D to develop equipment for 450mm wafers. I think you'd end up in the same situation as EUV, where basically the big fabs that needed EUV to progress had to fund a lot of its development.
So sure, all these 'lesser' suppliers would be happy to start manufacturing equipment able to handle 450mm wafers - as long as someone comes along and entirely funds their R&D for them, so they don't have to take the risk that it never happens and they're left holding the bag!
Let someone else sink the R&D cash!!
Yes… like Uncle Sam and his CHIPS act.
Kerching!!
US Automakers must be particularly pissed as the Advanced Technology Vehicles Manufacturing Loan Program (ATVM) made them pay the money back for EV investment. D’oh.
I doubt it. 450mm wafers have a lot of drawbacks; it starts getting into the size range where physics really starts working against you. ASML was working on (and had nearly finished) EUV (the QXE systems) and DUV (the QXT systems) tools capable of handling 450mm wafers before the plug was pulled. I remember there already being a lot of doubts about the practicality of 450mm wafers. When required, 300mm wafers and pods can be handled manually, something that is basically impossible with 450mm wafers. Their dynamics during robotic handling are also... challenging, to say the least.
Intel and Samsung COULD have gone to 450mm, but I get the feeling it was a bit of a game of chicken, with all of them waiting to see who would pull the plug first, hoping they could put the blame on the other guy for backing out.
One of the reasons given for the move towards chiplets (AMD) and tiles (Intel) is a combination of lower yields with monolithic chips and the fact that certain parts of CPUs don't need the full-throttle approach.
Hence the plans for chiplets, so that things can be mixed and matched.
With rumours that nVidia are sticking with monolithic GPU chips for their forthcoming RTX 40 series, it will be interesting to see how the manufacturing costs vs performance stack up for both approaches.
Having said that, any new approach is likely to have some teething problems, so AMD may have to take a short-term hit in manufacturing costs to pull it off.
It works for the individual parts and improves yield (by only stacking up known-good dies), but you get a lot of complexity by ending up much more 3D in the stacks, with all the fun and games of the interconnects - vias and specific interlayers to hook everything together and connect it all up.
But the biggest challenge as ever is heat, and how you can get it out of the layers that are buried towards the middle of the stack. It can be done with clever designs and some thought on the layout (like putting the more active and intensively heat generating bits towards the outside where you can get at them more easily to cool them), but it still adds a lot of headaches to the overall design.
You do also get the benefits of there being nothing to stop you mixing and matching nodes. So you can make the memory layers (for example) on older technology that works just fine for them, and the CPU or other more detailed parts on the cutting edge tools at the smaller nodes.
That way you don't have to tie up your state-of-the-art manufacturing equipment producing stuff that older and cheaper kit can quite happily handle.
But does that mean that 450mm makes sense? If it was just a gimmick to squeeze smaller competitors out, then maybe it still doesn't make sense for TSMC, even as the biggest fish. It would improve per-wafer economics somewhat, but the investment it would require may be wasteful, given how much money it costs to build fabs for each new generation of chips; if you can't do both, advanced process technology is still the right focus.
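To make that trade-off concrete, here's a toy cost-per-good-die comparison. Every number in it is an assumption for illustration - the wafer costs and defect density are pure guesses, and the die-per-wafer counts come from the earlier back-of-envelope sketch:

```python
import math

D0 = 0.1                 # assumed defect density, defects/cm^2 (a guess)
DIE_MM2 = 600            # assumed big server-class die
WAFER_COST = {300: 10_000, 450: 18_000}  # $/processed wafer, pure guesses
DPW = {300: 90, 450: 224}                # from the die-per-wafer sketch above

# Simple Poisson yield model: Y = exp(-D0 * A)
yield_frac = math.exp(-D0 * DIE_MM2 / 100)  # /100 converts mm^2 to cm^2

for d in (300, 450):
    good_dies = DPW[d] * yield_frac
    print(f"{d}mm: ~{good_dies:.0f} good dies/wafer, "
          f"~${WAFER_COST[d] / good_dies:,.0f} per good die")
```

Note the yield fraction per die is identical in this model; the saving per good die comes entirely from lower edge losses and from the (assumed) wafer cost rising more slowly than the area. Whether that saving ever pays back the tooling investment is exactly the open question.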
Imho, and from what I've seen, 450mm/18" wafers never really made sense in the first place. Lots of fabs still want to run 200mm/8" wafers; some would even love it dearly if new 6" and smaller equipment were still being made. For some products, high throughput and easy handling beat a high die count per wafer, and smaller wafers win out in that regard. Lots of decades-old litho equipment (think ASML PAS 5500 and the like) is still in use in fabs that just keep churning out product long past the fab's supposed economic lifetime. The equipment's been paid off long ago, and every wafer coming off the line is pure profit - until the day something important breaks and is no longer available or fixable.
There is also this article <https://www.nextplatform.com/2017/04/27/mapping-intels-tick-tock-clock-onto-xeon-processors/> that contains an interesting piece about the economics of moving from 300mm to 450mm wafers.
I found it whilst trying to find out the physical wafer size of Intel's processors.
This article <https://www.anandtech.com/show/16594/intel-3rd-gen-xeon-scalable-review> would seem to imply Intel get their 40-core Ice Lake Xeon 10nm processors from a single 300mm wafer.
This article <https://screenrant.com/intel-alder-lake-desktop-cpus-two-die-sizes-variants-revealed/> gives some physical sizing information for Alder Lake, which suggests Intel are experimenting with different die sizes to get more out of each wafer.