Bork! Bork! Bork!
But they can't run a web site. Touring the E1080 page, it crashed. Running on Power?
IBM's heavy-metal arm has officially brought Power10, its first 7nm chip, to market with the launch of the E1080 – a server system it claims blows x86 rivals out of the water for performance and security. The E1080 is the first commercial outing for IBM's Power10 chips, unveiled at last year's Hot Chips conference and …
Binraider,
IBM will be competitive with *IBM*, as most buyers are likely to be IBM customers upgrading.
IBM never tries to compete on price with *lesser* kit; they upsell on features/performance.
The big question is: can you afford it over the full lifetime?
IBM make some nice kit but you will always pay for it ....... at a premium!!!
If you are talking $/performance, of course not. That's not what it is designed for. If you don't understand what it is designed for, you've never had any involvement with IT on an enterprise level.
Enterprise buyers care primarily about avoiding unplanned downtime and about TCO. IBM's solution has superior resiliency features to anything offered by Intel or AMD, which means less potential for unplanned downtime (at least the kind caused by hardware failure rather than human error, which no one has a solution for). They care about TCO - i.e. how much everything costs, not just the acquisition cost of the server hardware. First, how much does it cost to maintain - and that's obviously cheaper if the servers offer superior performance per core, so you need fewer of them. But even that's small potatoes; if you need one or two additional sysadmins for the larger x86 server farm, it's lost in the noise of all this.
Most important is how much it costs to license. Licensing costs for enterprise applications like SAP or Oracle can easily dwarf the hardware price. It does you no good to buy 2x or more the cores from Intel or AMD at, say (just throwing out a random number here, because I have not priced Intel vs IBM servers in many years), 1/4th the price of IBM's solution, if the licensing cost for all those extra cores is larger than the 3/4ths of the price you saved. Each. Year. It is not uncommon for the yearly licensing/maintenance price of the software you run to cost more than the servers themselves did to purchase. That's pretty much always the case with x86 servers running the applications that cost the most to license.
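To make that concrete, here's a back-of-envelope sketch of the arithmetic. Every number in it (hardware prices, core counts, the per-core licence fee, the lifetime) is invented purely for illustration, not real IBM, Intel, AMD or Oracle pricing; the point is only that a recurring per-core licence can swamp a one-off hardware saving:

```python
# Back-of-envelope sketch: acquisition price vs recurring per-core licensing.
# All figures are made-up illustrations, not real vendor pricing.

def lifetime_cost(hardware_price, cores, licence_per_core_per_year, years):
    """One-off hardware cost plus per-core software licensing over the lifetime."""
    return hardware_price + cores * licence_per_core_per_year * years

YEARS = 5
LICENCE_PER_CORE = 40_000  # assumed yearly per-core licence/maintenance fee

# Hypothetical: the x86 box costs a quarter of the Power box, but needs twice
# the cores to deliver the same throughput.
power_total = lifetime_cost(hardware_price=400_000, cores=32,
                            licence_per_core_per_year=LICENCE_PER_CORE, years=YEARS)
x86_total = lifetime_cost(hardware_price=100_000, cores=64,
                          licence_per_core_per_year=LICENCE_PER_CORE, years=YEARS)

print(f"Power lifetime cost: ${power_total:,}")  # $6,800,000
print(f"x86 lifetime cost:   ${x86_total:,}")    # $12,900,000
```

With those made-up figures the "cheaper" servers end up costing nearly twice as much over five years, which is exactly the Each. Year. point being made above.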
On TCO. Let's say we're talking a high-availability but relatively low-volume e-commerce site with 50,000 sales a day; multiple redundancies, multiple locations and 24/7/365 uptime requirements.
Fag-packet estimate of the TCO to achieve this in SPARC: I'd be thinking of the order of $10M, including planning for staff for the duration.
Same thing done in x86: easily half that, even with additional hardware redundancies built in to account for the "lesser" arch choice. Staff, developer availability, wider choice etc. being major factors in the cost reduction.
Chuck Power at the same problem and I'd estimate easily double the TCO, for no meaningful difference in uptime over SPARC.
One could go even more niche into IBM mainframe territory. Stick a zero on the price tag, of course; but it's worth it when you need the throughput and uber uptime. Power isn't competing with the mainframe brigade; its functionality absolutely is competing with SPARC. And if you can't beat that on price, you've already lost!
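Putting those fag-packet figures side by side, here is a tiny sketch of the estimates above (these are the commenter's guesses restated, not measurements; the mainframe line assumes "stick a zero on the price tag" means roughly ten times the SPARC baseline):

```python
# The fag-packet TCO estimates above, expressed relative to the SPARC baseline.
# Guesses from the comment, not measurements or vendor quotes.

SPARC_BASELINE = 10_000_000  # "of the order of $10M" lifetime TCO, staff included

estimates = {
    "SPARC": 1.0,       # baseline
    "x86": 0.5,         # "easily half that", even with extra hardware redundancy
    "Power": 2.0,       # "easily double the TCO"
    "Mainframe": 10.0,  # "stick a zero on the price tag" (assumed vs the baseline)
}

for platform, multiplier in estimates.items():
    print(f"{platform:<9} ~${SPARC_BASELINE * multiplier:>13,.0f}")
```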
I see POWER as a contemporary mainframe, not at all in the same world as AMD, Intel, or historic SPARC chips. POWER architecture seems to merge main memory with mass storage, throwing lots of that fancy memory around to keep things moving. It is expensive per CPU but has vastly more throughput, not just processing power. POWER 10 makes a Xeon look small, both in price and performance. But if you don't need it, you wouldn't want to bother with it.
As an ex-UNIX sysadmin, a lot of this resonates with me, but I have to tell you, you are fighting a battle that most customers and buyers have long since moved on from. You might not like it, I might not like it, but it's the cold, hard truth.
>> IBM's solution has superior resiliency features to anything offered by Intel or AMD
True 10 years ago - not in any meaningful way now. Xeon and Epyc have largely caught up on CPU resiliency features, apart from a few corner cases. IBM still get the advantage of owning the whole hardware/firmware/hypervisor/OS stack (at least assuming the customer isn't deploying SAP HANA, in which case you have to use Linux for Power, which is, let's face it, even more of a boutique product than Linux for Itanium was). But for the price you pay, an x86 implementation can afford to deliver resiliency in other ways. Long story short: hardware reliability isn't usually the weak link in the chain these days.
>> They care about TCO - i.e. how much does everything cost
When TCO models take into account the cost of IBM support, and the cost of finding those IBM specialists to manage the kit, it rarely works out cheaper than hiring ten-a-penny Linux and VMware admins (with apologies to those folks - this isn't how I think, but it is how decision makers think).
>> Licensing costs for enterprise applications like SAP or Oracle can easily dwarf the hardware price.
1. Everyone is moving off Oracle as fast as they can - sure there are those who are stuck or those who have Stockholm Syndrome, but this isn't a growing market. It's a shrinking market.
2. SAP isn't licensed per socket or per core, so it's of no consequence there. Everyone on SAP is moving from databases licensed per socket/core to HANA, which again isn't.
TL;DR: the best tech rarely wins. Power10 will do well with existing IBM customers who have deep pockets, but it will continue to slowly slide towards a similar position to IBM's System Z portfolio. A fantastic money-earner for IBM and super-important for their small and decreasing number of big customers, but pretty much irrelevant for 99% of folks in IT.
Interesting you should mention HANA... last year SAP decided to deploy the outgoing Power E980 servers in their own SAP HANA Enterprise Cloud (HEC): https://insidesap.com.au/sap-hec-to-run-on-ibm-power-system-e980/
I think this shows that SAP understand that the Power architecture is perfect for their in-memory HANA databases. This is especially true for the new E1080, which brings a new open memory interface (OMI) that opens up the opportunity to add new types of hybrid memory/SCM/persistent memory. Power10 will also enable memory clustering (aka memory inception), so that customers can pool memory across servers.
"There's a disparity in IBM's numbers on the AI front, though..."
'Artificial Intelligence' and 'Machine Learning' are as deeply entrenched in reality as are 'homeopathy', 'cold fusion', and {choke; CHOKE} quantum computing. Look no further than simply asking any proponent of these alleged "fields" precisely how they (it) works.
"I have found that the reason a lot of people are interested in artificial intelligence is the same reason a lot of people are interested in artificial limbs: they are missing one."--Donald L. Parnas
“The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim."--Edsger W. Dijkstra
Annoyingly the POWER10 hardware isn't fully open.
https://www.phoronix.com/scan.php?page=news_item&px=IBM-POWER10-Not-All-Open-FW
So you are better off buying the POWER9. Actually the only guys I know who are selling this stuff properly are Raptor Computing (https://www.raptorcs.com/). The rest are just faffing about.
This reference is comparing a Power system with an OpenPower system. These are two separate product lines.
The announcement is a Power10 system, not an OpenPower system.
The features of the two product lines are different, and if/when an OpenPower processor equivalent to Power10 is launched, it may be that the features this tweet is complaining about are not available. In fact, they may not even be in the silicon, as OpenPower is all about chip builders putting Power cores on their own chips in the same way that ARM chip builders do, and the other features (such as the specific memory controllers) may be optional.
Then you don't understand what OpenPower is all about.
I know that in the past, for Power8 and Power9, the CPU chips were very similar, but even within the Power range there are differences between the chips that go into the scale-out, scale-up and enterprise systems.
This extends to the I/O interfaces like PCIe. CAPI and NVLink differ between implementations.
But back to the OpenPower point. OpenPower allows chip builders to combine a Power core with whatever other devices they want on their silicon.
From my reading of the Power chip design, looking at a single die you see the execution units that make up each processor core gathered together, and then, on the same die, the memory controllers, L1 and L2 cache, and the off-die interfaces that allow multiple dies to be combined in DCM and MCM configurations. In the past, memory controllers have serviced more than one core, although it has become more normal to have dedicated memory controllers for each core. This is because the transistor budget per die has been increasing, but there is little point in increasing the number of transistors per core (they're complex enough), so they add more support on-die to let the cores get data onto and off of the die faster.
So although the implementation they are talking about here has those particular memory controllers, it is not certain that the same ones will find their way into the replacements for, say, the L922, which is an OpenPower system rather than a Power9 system.