Long time coming....
Everyone knew Oracle was going to pull the plug on SPARC64, it was just a matter of time. This should be the end of the M-Series. Who wants to recompile and re-test only to, at the end of the day, still be on SPARC?
Oracle and Fujitsu may be partners when it comes to Solaris, but they are going their own separate ways when it comes to processor and system development. A year ago, a senior Oracle bod hinted to El Reg that the new Sparc M4 processor will be produced by Oracle rather than Fujitsu, which ordinarily builds the M-series of …
".....Who wants to recompile and re-test....." Well, Fudgeitso was very clever with the original SPARC64 and you could run UltraSPARC Slowaris apps on it without the need to port anything, and usually a lot faster for less money. It will be interesting to see which way the market jumps, seeing as the Fudgeitso option will probably have better single-threaded performance than anything based on T-series cores. Oh, and the Fudgeitso is based on real silicon, whilst the Snoreacle effort is just vapourware.....
How in the deep pits of hell are 192 S3 cores going to provide 6x the throughput of 128 SPARC64-VII+ cores?
So we are talking around 9000 in specfprate2006
We are talking 1000+ in specintrate2006
We are talking 7000+ in SPEC OMPL2001
(I know this is the 2.52Ghz SPARC64 VI)
Basically beating every Intel Xeon and IBM POWER7+ on a per core basis. Yeaaaahhhhh right.
What we are most likely going to see is an in-memory database benchmark compared to a traditional M9000 benchmark that is much closer to a real system.
Again lies, damn lies......
Jesper asks, "How... are 192 S3 cores going to provide x6 throughput of 128 SPARC64-VII+ cores?"
I think this is a very interesting question... how does one get to 600% throughput increase?
SPARC64 VII+ Cores
- 4 cores per socket
- 2 threads per core
- 8 threads per socket
M? S3 Cores
- 6 cores per socket (conservative)
- 8 threads per core (normative)
- 48 threads per socket
An uneducated & cursory look indicates Jesper is really asking the wrong question.
A socket swap results in a 6x thread increase.
The question which Jesper SHOULD have asked is, "How can EACH S3 thread be engineered to perform on-par with a SPARC VII+ thread?"
This is not a difficult stretch, considering what has been demonstrated with the SPARC T4 processor.
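The socket arithmetic in the thread above can be sketched out directly (the core and thread counts are the commenters' estimates, not confirmed specs):

```python
# Threads per socket, using the (unconfirmed) figures quoted in the post above.
sparc64_vii_threads = 4 * 2  # 4 cores/socket x 2 threads/core
s3_threads = 6 * 8           # 6 cores/socket (conservative) x 8 threads/core

print(sparc64_vii_threads)                # 8
print(s3_threads)                         # 48
print(s3_threads // sparc64_vii_threads)  # 6 -- a 6x jump in thread count, not per-thread speed
```

The 6x factor falls straight out of the thread count, which is why the per-thread performance question is the one that matters.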
Amazing... sorry I can't resist.. what a load of .... You normally are kind of realistic.
David... if you state that your current hardware has 6 times the throughput of your previous generation, that doesn't mean showing it for one single application, using a bag of tricks on the new hardware and not using those same tricks on the old hardware.
6 times the throughput means generally... not only for running Oracle's own specially selected apps, with a ton of add-on hardware that moves DB load to the storage system......
Come on be serious.
Jesper posts, "if you state that your current hardware has 6 times the throughput of your previous generation, that doesn't mean that for one single application"
ZFS on Solaris on SPARC (M4 & T5) is not a "one trick pony" like the T1. Did you follow the link?
Moving the RDBMS to the storage system is not required. Run ZFS, add a PCIe write cache, add a PCIe read cache, apply ZFS compression for I/O throughput, and ANY database will get massive read, write, and I/O acceleration benefits. Other applications will see the benefit, too.
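That recipe can be sketched roughly as follows (the pool name "tank" and the device paths are hypothetical, and exact syntax may vary by Solaris release):

```shell
# Hypothetical pool "tank" and flash device names -- adjust for your system.
zpool add tank log /dev/nvme0n1     # PCIe flash as dedicated write cache (ZIL/SLOG)
zpool add tank cache /dev/nvme1n1   # PCIe flash as read cache (L2ARC)
zfs set compression=on tank         # transparent compression reduces I/O volume
```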
The question really is: can each SPARC M4/T5 thread be engineered for equivalent throughput to SPARC VII+ thread?
Oracle demonstrated 6x per-thread improvement with the T4. Yes, it seems possible with M4.
Will Oracle drive "8+" sockets with the M4 processor? I doubt Oracle will dash high-margin "8+" socket machines to the rocks, so once again, it seems possible.
What technically do you see as the problem?
David. You do understand that thread priority does not mean that all threads on a core execute 5x better (using the Oracle numbers for how much faster the T4 is capable of running a single thread compared to the T3).
But what Oracle presented at Hot Chips about the T4 also stated that the total throughput of the T4 chip was about the same as the T3's.
With this M4 chip they are removing 25% of the cores and adding 20% of clock speed, as well as a shitload of cache. So for many applications the throughput of the M4 will be roughly the same as the T3's.
Hence for a workload like specintrate2006 the new machine will be lucky if it hits 6000. Which is far, far from the 10-11K that it should have been if we are talking 6x of an M9000-32.
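Jesper's back-of-the-envelope arithmetic can be written out explicitly (the 25% and 20% figures are his estimates, not published specs):

```python
# Relative raw throughput of the rumoured M4 versus the T3,
# using the commenter's estimated figures.
core_factor = 0.75    # "removing 25% of the cores"
clock_factor = 1.20   # "adding 20% of clock speed"

relative_throughput = core_factor * clock_factor
print(round(relative_throughput, 2))  # 0.9 -- roughly T3-level raw throughput
```

On those numbers the clock bump does not make up for the lost cores, which is the basis of the scepticism about the 6x claim.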
Now Oracle have already backtracked: all comparisons are now done against the M9000-32 and not just the M9000. Or to put it in other words, the new machine is not going to be a 64-socket machine, only a 32-socket machine.
Now if you can't see through the marketing bull... then it's your problem.. or the company you work for.
".....The need to move the RDBMS to the storage system is not required. Run ZFS....." Oops! Have they fixed the clustering issues with ZFS then? Because if not, you just put your enterprise data on a SPOF. Out on storage it is highly available, on a ZFS stack it is not. And that's when it is up and in use - what about when you have to take the ZFS filesystem offline to "scrub" it? What, you want to scrub it online and watch the performance nose-dive? Oh, and don't Oracle recommend you scrub all ZFS filesystems AT LEAST once a month? No thanks, I'll stick with a proper filesystem and maybe use an SSD-based array.
Matt Bryant - "Oh, and don't Oracle recommend you scrub all ZFS filesystems AT LEAST once a month?"
Matt Bryant - "No thanks, I'll stick with a proper filesystem and use maybe an SSD-based array"
And you will take your chances and experience silent corruption.
Take a mirrored pair. Disk 1 experiences unrecoverable bit rot. Disk 2 crashes. All you have left is corrupted data.
ZFS scrub eliminates that, as long as the scrub happens before the unrecoverable bit rot and before Disk 2 crashes. The monthly scrub severely reduces this likelihood.
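The scrub-and-repair argument can be illustrated with a toy mirror that keeps a checksum per block, loosely in the spirit of ZFS (all names and the mechanism here are made up for illustration, not ZFS internals):

```python
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

# A toy mirrored pair: two copies of a block plus an independently stored checksum.
block = b"enterprise data"
store = {"disk1": bytearray(block), "disk2": bytearray(block), "sum": checksum(block)}

# Silent bit rot on disk1: one bit flips and the drive reports no error.
store["disk1"][0] ^= 0x01

def scrub(s):
    """Verify each copy against the checksum; repair bad copies from a good one."""
    good = [d for d in ("disk1", "disk2") if checksum(bytes(s[d])) == s["sum"]]
    repaired = []
    for d in ("disk1", "disk2"):
        if d not in good and good:
            s[d][:] = s[good[0]]
            repaired.append(d)
    return repaired

print(scrub(store))  # ['disk1'] -- the rot is caught while disk2 is still healthy
```

If disk2 had crashed before the scrub ran, the corrupted copy would be all that was left; that is exactly the window a regular scrub narrows.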
Having observed corruption on an AIX system (on IBM storage), SPARC system (running UFS on EMC), Windows (on EMC) [and 2 other systems] over the past 9 months - I am more interested in ZFS with scrubbing and proper protection than any time before.
Big data centers are different creatures than small datacenters.
".....And you will take your chances and experience silent corruption....." Which only exists in the fantasyland of Sun FUD. I have NEVER seen a case like that in over forty years of UNIX, and that includes educational systems that had been running without a reboot or fsck for seven years. And a "scrub" is just an fsck in drag, nothing new at all. Quit drinking the Slowaris koolaid.
Come on Matt, you know that running your disk scrubbing on the server processors is much better than doing it internally on the disk systems. It's not like you need that bandwidth and those processor resources to run things like, for example, a database. And well, the extra Oracle licenses needed because you have to do storage housekeeping on your actual server hardware are cheap on SPARC.
The only place where I have heard of this silent corruption (note: not corruption that is detected) is the CERN study, where they put their data on a crap disk system which was deliberately fed bad microcode and where they deliberately failed to do any sane configuration. I would probably do something about that first before deploying ZFS, which according to many reports has its own ways of crashing, burning and making data irrecoverable.
And according to Oracle there is no support for importing your volumes on the "other side" after a SAN mirror/split. That is a show stopper for any serious business.
".....http://queue.acm.org/detail.cfm?id=1866298...." Written by an ex-Sun engineer. How unbiased! And working on a Sun project at Stanford when he wrote the report. I'm sure that had no bearing on his opinions and findings whatsoever.
Written by an Ex-Sun person, but referencing studies done by others that were not from Sun. Nice try Matt. Your FUDing abilities have improved.
So you're a denier then, huh? You seriously think that bit-rot does not exist? Okay. Good for you and your company. ZFS may not be the only answer, though btrfs seems to think it's important, as does Symantec (Volume Manager) as they have been attempting to also solve this problem.
Perhaps a CERN Study on Data Integrity is better for you?
Do you seriously think that bits flipping under the cover of your RAID are not a potentially huge problem?
I do not deny bit-rot can happen, just that it is not the massive issue the ZFS-pushers make it out to be, otherwise we would be seeing it at least weekly if not daily in large environments, and we simply don't. It's like those reports that go on about the damage that particles from outer space can do if they hit your array as they could bore right through the platter and flip DOZENS of bits in one go! Of course, the whole particle impact idea is statistically so unlikely as to make protecting against it ridiculous.
Anyway, if Symantec and the BTRFS team are already working on the problem then I'd rather wait for their better implementations seeing as both do other features better too. Way to go with the whole shooting yourself in both feet idea. And you can cluster those options!
The question is not whether bits in various places can be flipped or not. There are various errors occurring all the time in large systems.
The real question is where and what you do about it, and what the penalty is for doing that.
I think the best comment I've seen on this is one from Linus Torvalds. He basically says, and I agree, that data integrity is best done as low as possible in the solution stack and/or as high as possible.
And a filesystem is somewhere in between.
Because at the low levels, typically on the disk system, you don't use server resources on it, and at the top of the solution stack you have as much semantic knowledge as possible.
Makes sense to me.
".....This is a not a difficult stretch...." Maybe on tasks such as webserving, where lots of tiny little threads can be run in parallel. But when you try running real enterprise apps, with a heavy reliance on single threads, like the ones being run on M-series by customers, then your T-series will choke. Same as they have always done, and same as they will do for at least the next few generations of vapourware. Meanwhile, SPARC64 is just dandy at chewing through single-threaded apps, and has a prior history of doing so better than Sun's own UltraSPARC chips. And we all know what happened to the Great White Hope of UltraSPARC, SPARC V aka The Rock, or do you need reminding, Mr Novatose? After years of FUDing SPARC64, the same Sun salesteams had to swallow their pride and tell us customers that Sun servers with SPARC64 chips were just goshdarnlovely! If I had to choose between the two I'd put my money on Fudgeitso if only because they have previously done a better job with SPARC64 and don't have a habit of making vapourware compared to Sun. Why on Earth Larry has decided to spit in Fudgeitso's face is beyond me, but I'm beginning to suspect he is coming apart at the seams.
@TPM, do you think having both Fujitsu and Oracle supplying SPARC machines would be better for customers, like it was during the past Fujitsu / Sun competition? Do you think Oracle would be willing to lose margins on a sale competing against Fujitsu HW, which would need to buy Solaris licenses from them anyway?
Mr. Ellison doesn't hide that he wants Oracle's HW to have margins like Apple's iPhone, thus with most customers the Oracle sales force primarily tries to push Exadata, since it is a product with a BoM of about $500K (some x86 commodity servers with some InfiniBand gear within a rack) that can be sold for $10M... Even during the UNIX golden times margins were not that high! Customers who keep relying 100% on Oracle DB should be aware they are heading for 100% vendor lock-in; choice means lower profits to Oracle, and thus tends to be eliminated!
apleszko asks, "do you think having Fujitsu and Oracle to supply SPARC machines would be better for customers like it was during past Fujitsu / Sun competition?"
Fujitsu and Sun were always partners, always competitors. It is important to have 2 parts suppliers, to make sure parts are available in case of a catastrophic failure in one supplier (Sun had a few of them in their SPARC lineup...). It was good for the customers to have Fujitsu to fall back on.
apleszko asks, "Do you think Oracle would be willing to loose margins on a sale competing against Fujitsu HW, which would need to buy Solaris licenses from them anyway?"
You are asking a hypothetical question, with little reasonable possibility of existing in the real world.
1) Oracle seems mostly interested in selling Oracle on Exadata. Database-in-a-Cloud is a big thing right now.
2) Fujitsu is not targeting Database-in-a-Cloud customers, with Oracle embedded database machines.
3) Fujitsu also sells Linux on SPARC - Fujitsu could compete in embedded RDBMS without Solaris
Fujitsu is able to compete in SPARC ecosystem, without Oracle, a very interesting position to be in.
This is clearly a bonus to any customer... two viable hardware vendors, two viable OS vendors, two different hemispheres, one open CPU architecture based upon standards (where the vendors don't sue one another.)
If I was the military or government in any foreign nation, my choice would be clear.
David Halko, Oracle has a very different sales approach compared to Sun's. It won't be good for Fujitsu, since they don't own Solaris, and customers would see no benefit, at least in terms of lower prices.
Sorry, but Linux on SPARC is as good as Linux on POWER, Linux on Itanium or Linux on ARM variants... It is completely useless for enterprises since there is no binary compatibility with Linux on x86_64, where the developer ecosystem is. I could also mention Solaris on x86 as another example: nobody is using it for real for important applications (not even Oracle is releasing Exadata with Solaris x86, just Oracle Linux!!). For me, without Solaris, SPARC could only be useful for academic applications, not for business. And even for HPC systems, which usually don't worry about a custom OS as long as the performance is great, the trend seems to be towards x86_64 and especially GPUs, since the price/performance is much better.
The questions are not hypothetical at all; this happens in the market all the time!! A recent example is Oracle's decision to stop supporting HP-UX... That decision was clearly made with the objective of taking a competitor out of the UNIX market. Considering this, wouldn't it be naive to believe they are in favor of keeping fair competition with any other vendor?
apleszko, I prefer Solaris over Linux on SPARC - you are not the only one who prefers the binary compatibility.
Fujitsu seems to be taking more of an early Sun approach. Fujitsu's educational wins will foster a developer ecosystem on SPARC Linux. This is a good sign, since these investments will be in place for years with many thousands of students finding their familiarity with it.
This being said, it could be a different ecosystem in 8 more years.