You go to Serve
With the Microsoft you have, not the Microsoft you wish you had.
Microsoft is reportedly designing its own homegrown server and desktop-grade processors using CPU blueprints licensed from Arm. Details of the effort, reported by Bloomberg on Friday, are light, though it is described as a "major commitment," according to sources familiar with the matter. One family of the chips will be …
"Memory and GPU on the same die"
M1 MacBooks gained a lot of memory performance but lost expandability. '16GB is enough for anyone'?
They could include superfast SSD storage on the same die as well. Perhaps this will happen soon.
"Otherwise, not really worth R&D investment."
Amazon has their own Graviton CPUs and the latest Ampere CPUs are faster than current AMD EPYC processors in many tasks. This R&D could very well be a benefit for consumers and MS alike, since I'd like to use the Apple M1 for its performance and low power features, but I have zero interest in using Apple computers or MacOS.
MS can't rely solely on x86 indefinitely, since ARM is getting more performant and energy-efficient each year.
You can easily make things larger on a die than the minimum size. Whether you would want to is another matter, given the technologies are not really compatible - you don't want to multi-layer flash on a potentially hot CPU die. And I certainly wouldn't want an SSD on a device that MS had made - you'd never know what was on it that YOU couldn't see!
"You can easily make things larger on a die than the minimum size. "
That's not how it works at all. You don't fab previous-generation tech on the same wafer; it's technically not feasible, and it makes no sense. Creating a big die with SDRAM and CPU/GPU wouldn't be economically viable to test. And, as said, flash doesn't work below 20nm anyway - it's the pesky physics. The only viable solutions are multi-die, where you can mix 'n' match older-process dies and mount them on the SoC die, or mount everything on an interposer and create a SiP, as Apple have done.
I think we're talking across each other. I've made 200µm-long gates on a 5µm process, so I doubt there is any difficulty in making a 20nm feature on a 7nm process. However, you probably couldn't make flash, even 20nm flash, work on a 7nm CPU process.
But I would never knowingly buy a package that might contain an SSD with god knows what already on it, invisible to me, and I hope no-one else would consider it an option either.
Don't forget, below 28nm there is a switch to non-planar FinFET processes. It's a totally different type of process with respect to transistor construction; e.g. gate lengths are more-or-less fixed for digital components, and it is the number of fins that varies to give differing drive strengths etc.
One doesn't really ever know what is on a die. How could you?
And what is Intel doing?
When AMD64 surfaced with Opteron, it only took a few months before Intel was out with x86_64 (even though they'd been pushing Itanic for 64-bit) - they had a team in Israel that had been preparing it for years. Atom's take on ARM has been a failure, unable to match its performance per watt. Intel's dreadful Iris graphics, the north/south bridge, etc. Why they don't have an SoC with lots of on-chip cache is beyond me. The only rocket science about Apple's M1 is its 5nm process, and we all know Intel still has yield problems at 10nm. They've become too fat and lazy over the years with little competition. Maybe Intel will become the Nokia of the 2020s. It is well deserved.
They are not on the same die, they are on the same package - different silicon. If you want to compare it to something from the PC world: the Slot 1 Pentium CPUs had their cache on the same package but not the same die. Before that it was on the motherboard; after, it was on the same die.
There are physical limits, which can't be gotten around. With Azure, you are talking about servers with RAM in excess of 1TB in many instances. You won't get that on the same SoC as the processor cores - in fact, you will have multiple SoCs sharing memory.
Just look at the Fujitsu super computer, a couple of hundred thousand 48-core processors...
There is a world of difference, in terms of scale between entry level laptops and high end servers, or even desktop workstations.
Apple have achieved a huge performance push for ARM in the M1 chip. But it will be what they do for the iMac Pro and Mac Pro that really shows what is happening. I don't think you will find 30 - 60 cores and 512GB RAM all on the same SoC when it comes to the Mac Pro.
All the customers I speak to say that they are saving around 40% by moving to ARM on AWS using their Graviton Processors. That's extremely significant savings. Moving to ARM is very good for competition and the industry in general. So good news, and hoping it won't take years for them to develop.
Well, I can see the attraction for device manufacturers of Apple's model with ARM in consumer products: stick everything, including RAM, on the SoC, so you have to replace the whole device just to add some RAM... As for the server chips, we've known for years that ARM is far more energy efficient. If that saving is passed on to the customer, and you are a large customer scaling to hundreds or thousands of machines, that's a lot of saving!
With all the focus on Intel slipping into dire straits, this seems like a good time to remind people that AMD did in fact bet on ARM becoming a thing in server spaces a few years ago: https://www.amd.com/en/amd-opteron-a1100
Not sure MSFT will truly get their chips from an in-house design department. For all we know they'll end up leaning on AMD or a similar company, like they did with Qualcomm before.
I had an Xbox 360 and, good on Moistness-soft, they repaired the unit free of charge.
It was due to the in-house design of the ASIC (https://www.designnews.com/engineering-disaster-xbox-360-red-ring-death)
They have probably learnt from this serious error in judgement, and their ARM design implementation will be out-sourced.
Then again, sometimes despite having vast sums of money, they may try and cut corners.
Yeah, I hope they revisit ARM CPUs again at some point in the future. I remember waiting for ages to see the 96boards version of the A1100 released, but that clearly never happened: http://www.lenovator.com/product/103.html#params
Maybe with all the money from Zen, AMD can invest it in more ARM server chips? :)
NT on Alpha existed because DEC was important then and was also writing good NT software. DEC had VAXclusters, so they naturally figured out clustering using two vanilla servers with dual SCSI cards, an external isolator in case a card failed, and two sets of shared SCSI drives.
Also, did Intel even have a 64-bit CPU when the 64-bit NT 4.0 for the DEC Alpha was produced?
It was a niche alternative to UNIX or DEC's OS, not serious competition for Intel.
ARM, ARM, ARM; AMD, AMD, AMD... meh. Intel has made, and still makes, the most stable and advanced processors and chipsets. So their business gets cut in half or more - that's business, and it's happened to many before. I for one just want to be able to build one more workstation with Intel before my time is up, once they have a DDR5 platform on the upcoming Socket 1700 for consumers. I'll be happy. Regarding Intel's business strategy in the past decade - the bad, late and missing decisions - all fault lies with the previous CEO, who was busy getting his noodle wet instead of working and staying on top of decision-making. His lack of leadership is why the company has been behind on so many issues. They also should have bought ARM many years ago when they had the chance; they thought about it, and did nothing. I hope we don't see them go under. If we do, AMD will be the next "Chipzilla", and they make crap half the time: buggy drivers, unstable chipsets, and don't get me started on the Bulldozer BS - they lied to customers about core count and performance. I used to be an AMD fan, but after Bulldozer I got burned. And their processors run HOT as hell.
Well, Intel don't make the most advanced processors anymore. They're slow, hot and (of late) riddled with various security flaws requiring many microcode and firmware updates, further sapping performance. Meanwhile there are ever-increasing numbers of very happy AMD Epyc and Ryzen customers, and AMD has a strong road map.
Intel's survival now depends entirely on AMD keeping the x86/x64 ecosystem supplied with big powerful chips. If they don't, there isn't going to be an x86/x64 market for Intel to return to, and they're awfully far behind the curve if they were to pivot back to ARM.
... and how many more times before they all realize that it's not going to happen, and put it to rest?
Apple doesn't do servers. They don't want to. So much for the M1.
Microsoft was all-in on Cavium/Marvell's ThunderX2/ThunderX3 for Azure. Fail. ThunderX went the way of the dodo; Marvell bailed out on it.
This is Microsoft doing a Me Too! because of Amazon.
It's nice to have specs and blueprints and plans and press releases. A viable business these do not make.
Windows on ARM64 is going nowhere fast. True That.
Good luck getting anywhere near Intel Xeon's performance. Facts are stubborn things.
Please don't give me the Ah, But SPECTRE! ARM64 is just as vulnerable to that particular brand of shitshow as Xeon is.
Ah, But CHEAP! Yeah, you get what you paid for.
If ARM can't hack it at the server level, then what is this?
Intel needs to get itself sorted out. There are some very powerful ARM based systems out there.
X86 is not the only game in town these days.
If Apple decided to make a pure server chip (M1 minus GPU), they could easily do it. I think this might be closer than you think.
"Intel needs to get itself sorted out. "
The main thing that's kept Intel alive the last few years is inertia. Inertia, and a huge pile of cash (it's not called the Wintel world for nothing).
Intel are heading the same way as Boeing. It's all about brand management, partner relationships (er, not you, Mr Krzanich) and other such intangibles.
> If Arm can't hack it at the server level then what is this then?
That's a Supercomputer from Fujitsu. Not a cloud or datacenter server. Learn the difference.
> I[f] Apple decided to make a pure server chip (M1 minus GPU) they could easily do it.
If I had feathers and a pair of wings I'd be a bald eagle.
And no, it's not that easy to take a general-purpose CPU specifically designed for desktops and make it into a general-purpose server CPU. The two are quite different.
Quit making generalized statements like that. They only show how little you know about CPU's.
You may as well apply your last sentence to yourself, as it looks like you forgot some lesson from history.
x86 was born as a desktop CPU rather than a server one.
Regarding the difference between desktop and server CPUs, I've heard that one before: it's what companies selling servers with their own CPUs said to the world when they started to feel the heat from (comparatively) cheap x86 server lines.
BTW, yes, there is a difference between desktop and server CPUs, but the instruction set is moot for that purpose.
> x86 was born as a desktop CPU rather than a server one.
Thanks for the 100% orthogonal non-sequitur.
Take a look at the specs of a Core-i7 or Core-i9 and compare them with a server-class Xeon.
Even something as high-level as the base ISA is different between the two CPU models.
[ ... ] the instruction set is moot for that purpose.
No, it's not moot. At all. That's actually one of the high-level, but major, differences on x86_64. The ISA is not exactly the same. Xeon-class CPU's support SIMD instructions that are not available on Core-i<x> CPU's.
Clueless boob with strong opinions. The best the Internet has to offer.
Please, avoid "ad hominem" argument.
What I said is what I can say after several years of working on pieces of OS across several CPU architectures.
If you want performance, ISA matters, but as soon as the expected features are present, what matters is implementation and memory access (and I/O, if your data set needs it). Oh, and regarding SIMD, you should have a look at what the latest ARM ISA offers; you'd discover why Fujitsu uses it for HPC beasts.
What has happened is the major buyers of sophisticated CPUs -- the cloud companies -- want performance per watt as well as performance per rack unit.
Compare the resulting pricing on AWS: Intel US$4.08 per hour, AMD $3.70, ARM $2.18. Graviton2 is about 20% slower than the equivalent Intel server, but about half the cost. Remember, this is only the second release of Amazon's ARM design, up against decades of tuning of Intel's design, and the difference is only 20%. Obviously that difference has further to shrink.
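The comment's arithmetic can be sketched as a quick back-of-envelope. This uses only the per-hour figures and the rough 20% performance gap quoted above (the commenter's numbers, not independent benchmarks), normalising each price by relative throughput:

```python
# Back-of-envelope price-performance from the figures quoted above.
# Prices are US$ per hour; "perf" is relative throughput, with the
# Intel instance as the 1.0 baseline and Graviton2 ~20% slower.
# All inputs are the commenter's claims, not measured benchmarks.

prices = {"intel": 4.08, "amd": 3.70, "graviton2": 2.18}
relative_perf = {"intel": 1.00, "amd": 1.00, "graviton2": 0.80}

def cost_per_unit_of_work(name: str) -> float:
    """Dollars per hour divided by relative throughput."""
    return prices[name] / relative_perf[name]

for name in prices:
    print(f"{name}: ${cost_per_unit_of_work(name):.2f} per normalised compute-hour")

# Even after the 20% performance penalty, Graviton2 comes out at
# 2.18 / 0.80 = $2.725 per normalised hour, roughly a third cheaper
# than Intel's $4.08 - less dramatic than the raw "half the cost",
# but still a large saving at fleet scale.
```

On these assumed numbers the normalised saving versus Intel is about 33%, which is why the raw "half price" headline shrinks but doesn't disappear once the performance gap is factored in.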
Other cloud providers will be facing similar pricing, but with the advantage that they use more of their compute cycles for their own services. That is, they can more readily re-target their internal services from AMD64 to ARM64 than Amazon's clients can.
I can't see that any company will take the risk of developing a server ARM chip. As you point out, plenty of startups have been burned. So the market will leave that development to the cloud providers themselves, who have abundant engineering resources to turn ARM IP into silicon.
The major difference between now and the past is Intel's years-long failure to deliver process improvements compared with its competitor TSMC. There is little reason to expect that to change. That failure alters the economics for cloud companies. In the past chips with better architecture would have their performance blitzed by Intel's process improvements. DEC's Alpha being an excellent historical example. So there was no incentive for cloud companies to explore CPU architecture. That blitzing-by-process-improvement is no longer in Intel's power to do. An architectural improvement over Intel's microarchitecture is now a long-run win. So CPU architecture is now worth cloud companies' efforts.
None of this is likely to be reflected in the "enterprise server" market. But that market is becoming increasingly odd and continually smaller. In many ways very much like the IBM mainframe business of the pre-PC 1980s. And just as likely to have a nasty surprise.
"chips with better architecture would have their performance blitzed by Intel's process improvements. DEC's Alpha being an excellent historical example"
Not entirely accurate, not even in engineering terms, and x86 vs Alpha was about far more than engineering.
The Intel cash mountain enabled Intel HQ to convince HPQ HQ that Itanium was going to take over the world. Not Intel x86-64 (which Intel HQ had repeatedly said couldn't be done), Intel "Industry Standard 64bit" Itanium.
"blitzing-by-process-improvement is no longer in Intel's power to do."
Correct, and has been for a few years, but no one in the IT department seems to have noticed. Lots of other people have, of course.
Itanium is dead. It was never really alive, once AMD64 showed the way forward.
Now, AMD64 rules.
Alpha features live on.
x86? Not so much, despite process improvements, which (as you rightly note) have led x86 basically nowhere in both the "enterprise server" market and the low-power ("mobile"/embedded) market for the last few generations, and with even less likelihood of success in the future.
Many many years ago before WinRT was thrust upon us, MS were known to have licensed ARM cores.
Many of us were anticipating a move into ARM servers and desktops, driven by MS-standardised ARM designs (much as the PC hardware spec is these days effectively an MS-inspired standard - it's why your PC's audio jacks are the colours they are, for example), hosting full-fat Windows, and, even more exciting, being an open standard such that Linux would get an easier ARM ride. MS even showed off an ARM-based machine running Windows 7 and Office, printing to an Epson inkjet printer. All they'd done was rewrite the HAL and simply recompile the rest, and it worked.
And after all that what we got was WinRT.
Now it feels like they're finally getting it, 10 or 12 years later... They could have led this charge to ARM, but instead they're following it.
At least they are going that way now. Better late than never.
Will MS do it right? That is the big question.
Their ARM efforts so far are really nothing much to write home about. Using Snapdragon chips is really no different from using a commodity x86 chip.
MS has a big skills gap that needs to be filled. Apple took what, close on 10 years, getting that experience before the M1 appeared. We could all see the writing on the wall as each successive A-series CPU outperformed the best that the likes of Qualcomm and Samsung could offer. Now the M1 is wiping the floor with many x86 chips.
IMHO, MS really needs to get their offering sorted out before the end of 2021. Will they?
2021 and 2022 will be very interesting times on the CPU front.
Biting the hand that feeds IT © 1998–2022