
Wow... 32 sockets, SIMD, IEEE754, capacity on demand with spare cores... looks like I was reading POWER6 specs... but 10 years ago.
Oracle and Fujitsu have launched Fujitsu’s SPARC M12 server, claiming the world’s fastest per-core performance. It is the follow-on to the existing M10 line, which topped out with the Fujitsu M10-4S at 64 processors and 64 TB of memory. The largest M12 configuration is 32 processors and 32TB of …
> [ ... ]However Oracle has claimed SPECint and SPECfp benchmark records with the new system.
We would all love to see these alleged record-setting benchmarks published by Oracle.
Because searching at SpecCPU for Oracle and SPARC yields this: Oracle Corporation SPARC Enterprise M8000. Which is absolutely nothing to write home about. It's actually on the mediocre side. But reassuringly expensive, I am sure.
Compare it with: Intel DX79SI motherboard (Intel Core i7-3960X Extreme Edition).
Oracle M8000 configuration: 64 cores, 16 chips, 4 cores/chip, 2 threads/core
Intel Core i7-3960X configuration: 6 cores, 1 chip, 6 cores/chip, 2 threads/core
Both results are from 2011. The SPARC results are the latest published by Oracle. I was unable to find more recent (or better) SPARC results from Oracle at SpecCPU.
The first thing that pops out is that buying some monster SPARC space heater with 16 chips, for a king's ransom in USD $$$$, will achieve less than half the performance of a consumer-level Intel motherboard + chip that one can order online.
Either publish the SpecCPU numbers, or stop making unsubstantiated and unverifiable claims.
First off, that's a seven-year-old result. Second, it's a single-threaded score, not a throughput score, so your complaints about "less than half the performance of a consumer system with only one chip!" are nonsensical.
I hate Oracle as much as the next girl, but their SPEC record is legitimate. Please look properly before going off on rants...
https://spec.org/cpu2006/results/res2015q4/cpu2006-20151026-37722.html
To the best of my knowledge, this is currently the highest-performing processor on SPECint_rate. High-end Power and Xeon, last I looked, top out at 900-1000 rate per socket. Oracle, of course, conveniently avoids publishing single-threaded numbers - but my guess is that they range between "mediocre" and "okay, I guess."
> First off, that's a seven-year-old result.
I clearly stated that in my post.
Second, it's a single-threaded score, not a throughput score, so your complaints about "less than half the performance of a consumer system with only one chip!" are nonsensical.
Bullshit. What is nonsensical is your example from Oracle.
I was comparing apples-to-apples. I.e. SpecInt and SpecFP vs. SpecInt and SpecFP.
Your example is apples-to-oranges. Namely SpecInt-Rate, which is a totally different benchmark - a throughput benchmark. Comparing SpecInt and SpecFP to SpecInt-Rate is nonsensical indeed.
Or, are you perhaps suggesting that SpecInt and SpecFP aren't relevant?
I'm not comparing SpecInt to Specint_rate. I'm comparing rate to rate. M7 is currently the highest-scoring rate result. Therefore, it holds the record. Period. I'm not saying it's the Second Coming. It's ungodly expensive and primarily runs a doomed OS.
And yes, non-rate Specint and Specfp aren't currently relevant, due to broken subtests (462.libquantum, most egregiously) and abuse of autoparallelization (2006 reporting rules allow autopar to an extent spec2000 reporting rules did not; this was a mistake). No non-x86 vendor has published a non-rate SPEC result in several years, as far as I know. Or do you seriously think that Intel processor performance has improved since Core2 by the many thousands of percent implied by their libquantum scores?
SPEC2006 non-rate scores are pretty useless at this point, and a new SPEC revision can't come soon enough. (Among other things, to fix how weirdly their toolchain tends to behave when not using the prebuilt spectools...)
The article specifically mentions SpecInt and SpecFP. That's what Oracle said. So:
> M7 is currently the highest-scoring rate result.
Is that your own assertion, or do you have publicly available documentation to back it up? Where are the SpecInt and SpecFPU links to the Oracle SPARC M7 results? Please post them here. Not SpecInt-Rate.
> due to broken subtests (462.libquantum, most egregiously
How exactly is 462.libquantum broken?
> abuse of autoparallelization (2006 reporting rules allow autopar to an extent spec2000 reporting rules did not; this was a mistake
That is completely beside the point. Everyone runs the same tests.
> Or do you seriously think that Intel processor performance has improved since Core2 by the many thousands of percent implied by their libquantum scores
The Intel compiler has been tuned - by Intel - to improve the performance of libquantum by the hundreds of percentage points that you see. That's the reason behind libquantum's Intel performance numbers. On the exact same Intel box you won't get the same numbers if you run SpecCPU with GCC or clang.
Stop trying to change the subject by interjecting tangents about the suitability of the SpecCPU benchmarks design, or Intel's work on their own compiler, or your own personal opinions about the relevance of SpecInt or SpecCPU, or by interjecting phony comparisons between the various tests in SpecCPU that weren't even mentioned in Oracle's claim.
Either you back Oracle's claims with verifiable facts - if you have any - or you don't. You are entitled to your own opinions, but not to your own facts.
> Is that your assertion, or do you have publicly available documentation to back it up?
Look it up. No other result currently surpasses M7 on SPECint_rate, WHICH IS WHAT I SAID - "M7 is currently the highest-scoring rate result."
By the way, it's what Oracle has claimed as well - let me know if you can find a single example of Oracle claiming to have a non-rate record. THIS is what they claim: "The single-processor, SPARC M7-based system set new records for SPECint_rate2006, SPECint_rate_base2006, SPECfp_rate2006, and SPECfp_rate_base2006, demonstrating that SPARC T7-1 can complete a job very quickly and is able to process faster than any other single-socket system." The Register was clearly not referring to single-copy SPECcpu here. Your increasingly pathetic nitpicking aside, M7 holds the per-processor record for int_rate. I have no fucking clue how it performs on single-thread; my guess is somewhere in the 25-30 int specspeed range, but I haven't run SPEC on M7 myself, so that's an estimate.
Dear God, you're like a reverse Kebabbert.
> let me know if you can find a single example of Oracle claiming to have a non-rate record
Quote from the article. Second paragraph, last sentence:
However Oracle has claimed SPECint and SPECfp benchmark records with the new system.
Do you read English?
The article said "Oracle has provided SPECint and SPECfp benchmark information, saying these servers have set records:" followed by a table of _rate figures. It's very obvious the Register is referring to those _rates, not to some magical reference to SPECspeed numbers that neither Oracle nor Fujitsu seems to be claiming. You're being very pedantic, and it's frankly a bit bizarre.
> You're being very pedantic, and it's frankly a bit bizarre.
I read what is written. That makes me pedantic and bizarre.
> It's very obvious the Register is referring to those _rates,
Really, it's obvious? Writing SPECint and SPECfp obviously means, in fact, SPECint-rate?
No, they don't.
Do you have anything useful to add to this discussion besides Alternative Facts and ad-hominem attacks?
> Could you please try to keep the discussion a bit more polite?
Not when people resort to Alternative Facts in support of their ad-hominem attacks directed at me.
Please re-read the entire thread. It was not me who started the personal attacks, and I refrained - for a while - from retorting in kind.
You may want to re-submit your remarks about politeness to Mr. Dusk instead.
Deliberately eschewing facts, and replacing them with Alternative Facts, or non-existent facts, or comments about God and Kebabbert, is impolite, to say the least.
Thank you.
I emailed the writer of the article. This was the reply:
"Hi Kira,
There is a table from an Oracle Fujitsu doc in the article and that lists the SPECint and SPECfp benchmarks. I've attached it here;
They are 5 SPECxx _rate 2006 ones.
Cheers,
Chris.
Chris Mellor, Storage writer at The Register."
It confirms that they're just referring to the _rate scores after all, as I stated.
I humbly ask for an apology for the "Alternative Facts" remark, as it was not me introducing them.
> It confirms that they're just referring to the _rate scores after all, as I stated.
> There is a table from an Oracle Fujitsu doc in the article and that lists the SPECint and SPECfp benchmarks.
1. In the world that I live in, words matter, and have well-defined meaning.
Whenever someone refers to "SPECint" and "SPECfp" benchmarks, it means that there exists a publicly viewable submission at SpecCPU containing the results for the SPECint and SPECfp benchmark tests.
As of now, no such submission exists for Oracle - for SPECint, SPECfp, or SPECint-rate - in support of their claims relating to this new M12 server.
2. You are quoting content from some personal email exchange that no-one can verify is accurate, or that it even exists. This alleged content is presumably in support of your earlier assertion that SPECint and SPECfp mean, in fact, SPECint-rate.
3. Your assertion is false. SPECint and/or SPECfp do not mean SPECint-rate. Each of these terms has its own well-defined, and precise, meaning, and they are not interchangeable.
4. You called me a reverse Kebabbert, pedantic, and bizarre, just for calling out inconsistencies and inaccuracies in your own statements.
5. The two quotes from your latest message - at the top of my reply - contradict each other. Either it's a SPECint-rate result, or a SPECint result, or a SPECfp result. At any rate, none of these results is available as a submission at SpecCPU. That makes them invalid, or, at a minimum, the functional equivalent of non-existent.
6. There are precise guidelines and constraints for submitting SPEC results to SpecCPU. One of the constraints is that the results must be reproducible by anyone. The second constraint is that access to the hardware and software that produced the results must be available.
7. None of the SpecCPU requirements outlined above have been fulfilled. The SPECint-rate sheets that you refer to are not valid SpecCPU benchmark results. At best, these are marketing materials.
8. You are now demanding an apology.
No apology is forthcoming.
"Or, are you perhaps suggesting that SpecInt and SpecFP aren't relevant?"
They may be relevant but we don't have them for the latest chips. We can only compare SpecInt_rate. Comparing SpecInt for some completely different chip from 7 years ago seems less than useful.
I do agree that SpecInt_rate is a rather questionable metric. I could beat any SpecInt_rate by placing a sufficiently large number of Raspberry Pis in a sufficiently large cardboard box and calling the result a "system". SpecInt_rate is just how much throughput you can get from running the benchmark copies on all cores of your system, and there is essentially no requirement for communication between the tasks.
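That throughput-only property can be illustrated with a toy calculation. This is a simplified model of the rate metric (copies times reference-time-over-elapsed-time, constant factors folded away), with made-up numbers rather than actual SPEC reference times:

```python
# Simplified sketch of a SPECrate-style score: N independent copies of the
# same benchmark, each copy scored as ref_time / elapsed, and the reported
# rate scaling with the number of copies. Illustrative numbers only.

def spec_rate(copies: int, ref_time: float, elapsed: float) -> float:
    # Toy model: rate grows linearly with copy count, because the copies
    # never need to communicate with each other.
    return copies * (ref_time / elapsed)

# One fast core: a single copy finishes in 100 s against a 1000 s reference.
fast_single = spec_rate(copies=1, ref_time=1000.0, elapsed=100.0)   # 10.0

# A "cardboard box" of 40 slow boards: each copy takes the full 1000 s,
# but 40 of them run at once - so the aggregate rate is 4x the fast core's.
slow_many = spec_rate(copies=40, ref_time=1000.0, elapsed=1000.0)   # 40.0

print(fast_single, slow_many)
```

Which is exactly the cardboard-box objection: enough slow, independent cores will beat any single fast one on a metric that only sums throughput.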
> They may be relevant but we don't have them for the latest chips. We can only compare SpecInt_rate.
And that is precisely what makes Oracle's claims suspicious, if not dishonest.
No-one can verify their claims, and they have not provided any SpecCPU - SpecInt and SpecFP - results themselves.
The only way these claims would be credible is if they were posted at SpecCPU, and were run by some independent third party such as McKinsey or Accenture, who would have certified the accuracy of the results.
@ST:
"> [ ... ]However Oracle has claimed SPECint and SPECfp benchmark records with the new system.
We would all love to see these alleged record-setting benchmarks published by Oracle."
Yes, and so would the Reg - from the article, immediately below the SPECxxx_rate, SPECjbb & STREAM table: "When we looked we couldn’t find them; presumably a simple update timing problem."
But then you go on to say:
"The first thing that pops out is that buying some monster SPARC space heater with 16 chips, for a king's ransom in USD $$$$, will achieve less than half the performance of a consumer-level Intel motherboard + chip that one can order online."
That comparison is akin to comparing a bus with a motorcycle and concluding that there's no point in buses, purely because they're slower and more expensive than motorcycles. However, if you need to transport 50 people and their luggage a couple of hundred miles across country then a motorcycle will not be a very good solution.
"Either publish the SpecCPU numbers, or stop making unsubstantiated and unverifiable claims."
To whom are you addressing this demand? It's obviously not the Reg, who has already said (see above) that it couldn't find the figures yet.
There's a couple of weeks' lead time between hardware release and results showing up on spec.org, IME.
Anyway, I think ST's larger complaint was that the article said SPECint when it meant SPECint_rate, which is the same benchmark run differently (with multiple parallel copies, to benchmark whole systems.) It's a minor thing, but evidently important to him.
> It's a minor thing, but evidently important to him.
No, it's not a minor thing, it's a major thing, as it measures completely different things.
SPECint and SPECfp measure how efficient the compiler is at optimizing single-threaded code running on a single CPU; all the benchmark tests in SPEC2K6 are single-threaded programs. For SPECint and SPECfp, the SPEC2K6 harness binds each test program's execution to a single CPU with numactl(1), or an equivalent if one exists. See 462.libquantum.
SPECint_rate measures how efficient the computer system is at optimizing execution throughput across several CPUs/cores. The SPEC2K6 test programs run under SPECint_rate are bound to several CPUs/cores, also with numactl.
If there is no numactl(1) or some other CPU affinity utility available for the particular system being tested, the entire SPEC2K6 exercise is pretty pointless.
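As a rough illustration of the pinning idea under discussion (this is not how the SPEC harness itself is implemented - run rules and configs vary), here is a minimal, Linux-only Python sketch that binds each copy of a placeholder workload to its own core, the same effect `numactl --physcpubind` or `taskset -c` gives from the shell:

```python
# Minimal sketch (Linux-specific) of per-copy CPU pinning: each benchmark
# copy is restricted to a single core before it runs. The workload here is
# a placeholder, not an actual SPEC test.

import os
from multiprocessing import Process

def run_copy(core: int) -> None:
    # os.sched_setaffinity is the Python equivalent of taskset for the
    # current process (Linux only); 0 means "this process".
    os.sched_setaffinity(0, {core})
    _ = sum(i * i for i in range(100_000))  # placeholder workload

if __name__ == "__main__":
    n = min(4, os.cpu_count() or 1)
    copies = [Process(target=run_copy, args=(core,)) for core in range(n)]
    for p in copies:
        p.start()
    for p in copies:
        p.join()
    print("all copies pinned and finished")
```

Without affinity control of this sort, the OS scheduler is free to migrate copies between cores mid-run, which is why the thread above calls an unpinned SPEC2K6 exercise pointless.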
Except a) it was obvious to anyone that "here's SPECint and SPECfp results:" followed by a list of clearly marked _rate scores referred to the rate, as confirmed by the Register; b) for a 32-socket machine, most users are going to care about SPECrate more than SPECspeed; and c) the use of autoparallelization in SPECspeed numbers - which is fully legal under SPEC rules - means it's not measuring "single-threaded code running on a single CPU" at all. When an Intel system with two CPUs and 8 cores runs SPEC subtests with OMP_NUM_THREADS set to 16 and invokes icc with -parallel, nothing about it is "single CPU."
Your own i7 link shows no sign of setting core affinity with taskset or numactl, which is generally marked in the result and NOT done by the SPEC run harness itself. In addition, it uses autopar and sets OMP_NUM_THREADS.
> NOT done by the SPEC run harness itself.
Dude, give it up.
You keep trying to impress your imaginary Internet audience with your imaginary knowledge, when it's obvious that you have never looked at a SPEC2K6 configuration file, let alone written one.
You bring up 462.libquantum as being bad - whatever that means - but you can't explain why it's bad, or why it shouldn't have been included in SPEC2K6.
You keep ranting on about the SPECint and SPECfp benchmarks announcement in the article, and keep on insisting that SPECint and SPECfp actually mean SPECint_rate. Because, in your world, SPECint, SPECfp and SPECint_rate are pretty much the same thing. In reality, they're not.
You don't understand the difference between some marketing materials posted online, and an official benchmarks results submission at SpecCPU.
You don't understand the difference between OMP_NUM_THREADS, and something like numactl. You don't understand the difference between numactl and taskset, either.
Worst of all, you don't even bother searching in Google for some minimal research before posting one of your gems of wisdom.
You sound like someone who's been around technical people long enough to have picked up a few terms and concepts here and there, but who has never actually done any real work in this domain, and has never taken the time to study this subject and understand its details and complexities.
This buzzword showing-off works in a bar after a few pints, or in the Marketing Department. It doesn't work here, because you have no idea who might be reading your pronouncements.
Cheers.
I am sorry, but I am absolutely sick of seeing these unjustified ad-hominem attacks against someone you clearly do not know, cannot fairly judge, and are assessing through your own very obvious biases. And no, Dusk didn't start it. You're reading vitriol from her where, judging by the upvotes on the comments, no one else sees any.
Your comments have lacked any basic respect for language or conversation, and you have turned the comment section into a warzone. You have strewn vitriol and ad-hominem through a conversation that called for none, with no justification other than that someone corrected you. And no, being compared to Keb is not justification, because, if we use the upvote system again, it is clear that the comparison is perhaps accurate.
Shape up, you're doing everyone an injustice here in trying to protect your clearly perverted ego.
Re-read your own post and then decide who's spilling vitriol and ad-hominem attacks here.
Calling someone out for their own mis-statements, outright lies and deliberate eschewing of facts is not vitriol, and is not an ad-hominem attack. It's calling out the truth. If you or Dusk can't handle the truth, or being called out on deliberate lies, that's not my problem.
Case in point: I have yet to read a single coherent answer to any of the technical questions I had originally asked in response to Dusk's intentional mis-statements of fact. Not from Dusk him/herself, and not from you.
So, unless you have something technical and valuable to add to this conversation, there's nothing here for you to see.
Girls, girls! Stop fighting. Here are SPARC M7 spec cpu 2006 rate results:
https://blogs.oracle.com/BestPerf/entry/201510_specpu2006_t7_1
The score is 1,200 for SPECint_rate2006 peak. And 832 for SPECfp_rate2006 peak.
For this new Fujitsu SPARC M12, two of these CPUs achieve 1501 SPECfp_rate2006, which means one of them should achieve about 750 SPECfp_rate2006. This means that the SPARC M7 still looks to be the fastest CPU, even considering this new SPARC chip. Either way, it seems that SPARC is the top holder in SPECcpu2006, no matter whether from Oracle or Fujitsu.
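The arithmetic behind that estimate can be checked directly, using only the figures quoted in this thread and assuming linear scaling across the two chips (real multi-chip scaling is usually slightly sublinear, so the true one-chip figure would likely be a bit higher):

```python
# SPECfp_rate2006 peak figures as quoted in this thread.
m7_fp_rate = 832          # one SPARC M7 processor
m12_fp_rate_2chip = 1501  # two SPARC M12 (SPARC64 XII) processors

# Naive per-chip estimate for M12, assuming perfectly linear scaling
# across the two chips - an assumption, not a measured result.
m12_fp_rate_1chip = m12_fp_rate_2chip / 2

print(m12_fp_rate_1chip)                # 750.5
print(m7_fp_rate > m12_fp_rate_1chip)   # True: M7 still ahead on this estimate
```

So under that (rough) halving, the quoted M7 figure does edge out the per-chip M12 estimate on fp_rate.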
I read that IBM projects that POWER9 will be 2x as fast as POWER8, which means that POWER9 might be able to compete with the SPARC M7. As of now, two POWER8 CPUs are slower than one SPARC M7 in three out of four SPECcpu2006 benchmarks - if you can trust the link above. For database benchmarks, SPARC M7 is not 2x as fast, but up to 15x faster than POWER8.
SpecCPU claims confirmed and posted on the SPEC.ORG site!
The Claim for World’s highest "per CPU core" performance was explained on original press release ( https://www.oracle.com/corporate/pressrelease/oracle-and-fujitsu-announce-sparc-m12-servers-040417.html ):
> World’s highest per CPU core performance
> Comparison based on registered results per core in the SPECint_rate2006 and SPECfp_rate2006 benchmark tests.
> SPECint_rate2006 performance results and measurement environment:
> Fujitsu SPARC M12-2S
> Performance result (peak): 102 per CPU core
> Measurement environment: SPARC64 XII (4.25GHz) x1 core, Oracle Solaris 11.3, Version 12.6 of Oracle Developer Studio
The SPECint_rate2006 performance results are NOW (posted on April 20th, 2017) published on SPEC.ORG (here: https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46856.html).
Any major HW vendor will do their best to play with the numbers and words to show themselves at the top of the list, but they will never tell an outright lie. Fujitsu/Oracle are in fact on top of the list of SPECint_rate2006 per-core performance, with a value of 102 per core. Other Oracle Fujitsu M12 SPARC results below:
CINT2006 Rates
All results from hardware vendor Fujitsu; each row lists system, cores, chips, cores per chip, base copies, peak result / baseline, publication date, and disclosure link:
- Fujitsu SPARC M12-2: 24 cores, 2 chips, 12 cores/chip, 192 copies - 1770 peak / 1520 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46858.html
- Fujitsu SPARC M12-2S: 1 core, 1 chip, 1 core/chip, 8 copies - 102 peak / 88.1 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46856.html
- Fujitsu SPARC M12-2S: 64 cores, 8 chips, 8 cores/chip, 512 copies - 5600 peak / 4900 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46844.html
- Fujitsu SPARC M12-2S: 32 cores, 4 chips, 8 cores/chip, 256 copies - 2810 peak / 2440 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46848.html
- Fujitsu SPARC M12-2S: 8 cores, 1 chip, 8 cores/chip, 64 copies - 706 peak / 612 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46854.html
- Fujitsu SPARC M12-2S: 192 cores, 16 chips, 12 cores/chip, 1536 copies - 14600 peak / 12800 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46840.html
- Fujitsu SPARC M12-2S: 96 cores, 8 chips, 12 cores/chip, 768 copies - 7480 peak / 6490 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46842.html
- Fujitsu SPARC M12-2S: 48 cores, 4 chips, 12 cores/chip, 384 copies - 3790 peak / 3270 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46846.html
- Fujitsu SPARC M12-2S: 24 cores, 2 chips, 12 cores/chip, 192 copies - 1910 peak / 1630 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46850.html
- Fujitsu SPARC M12-2S: 12 cores, 1 chip, 12 cores/chip, 96 copies - 956 peak / 819 base, Apr-2017 - https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170331-46852.html
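The "per CPU core" claim can be recomputed from the submissions above - it is simply the peak result divided by the core count, and it is the single-core run that yields the headline figure. A quick sketch, with numbers copied from a few of the SPEC.org rows:

```python
# (system, cores, peak result) copied from the SPEC.org submissions above.
results = [
    ("M12-2S", 1, 102),
    ("M12-2S", 8, 706),
    ("M12-2", 24, 1770),
    ("M12-2S", 64, 5600),
    ("M12-2S", 192, 14600),
]

for system, cores, peak in results:
    per_core = peak / cores
    print(f"{system}: {cores:3d} cores -> {per_core:.2f} per core")

# The single-core M12-2S run is what the "102 per core" claim rests on;
# per-core yield is lower on the bigger configurations, where cores share
# memory bandwidth (roughly 88, 74, 88, and 76 per core here).
```

Note the per-core number is highest precisely on the 1-core, 8-copy submission, which is a config chosen to maximize that particular ratio.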
We can use that steam to run a small steam engine to then power the system. I'm sure we can get some investor interested in a startup for self powering servers. There have been dumber ideas funded and not everyone believes in the laws of physics, though ignorance of the law doesn't mean it doesn't apply to you :)