Oracle revs up Sparc M6 chip for seriously big iron

The new Sparc M6 processor, unveiled at the Hot Chips conference at Stanford University this week, makes a bold statement about Oracle's intent to invest in very serious big iron and go after Big Blue in a big way. Oracle's move is something that the systems market desperately needs, particularly for customers where large main …

COMMENTS

This topic is closed for new posts.
  1. gizmo23

    Clock speed?

    Now I've seen it all. An article about a new processor chip and nowhere in it is the clock speed mentioned. So, if the chip speed isn't relevant, just the number of threads you can throw at a job, the new boxes built around this must be aimed at particular workloads. There's a limit to how much parallelisation you can do so it seems to me we're talking about crunching lots of data. In which case, how about the external interfaces? With this amount of CPU, 4 x 8Gb fibre ain't going to cut it. Anyone got any ideas as to just what kind of workload this is aimed at 'cos I'm struggling to imagine it.

    Where's the "my mind has just been boggled by these numbers" icon?

    1. Anonymous Coward
      Anonymous Coward

      Re: Clock speed?

      I suspect 'big databases' is the answer to your question. Having 1TB of system memory within a few hops of any of a large number of cores is going to be useful for querying massive amounts of data.

      Like all of our emails, for example...

    2. John Riddoch

      Re: Clock speed?

      I'm guessing clock speeds will come out after they've done some more testing - clock speed is really a function of how fast (and hot) you can drive the silicon without it making mistakes. While you could nominally drive the silicon at say 4GHz in perfect conditions, most likely you'll have a massive failure rate so you'd ramp speed down to 3.5GHz with a higher success rate, possibly with the successful 4GHz parts as your premium part.

      The real issue for a platform like this is scalability; all those threads potentially battling over cross calls, cache flushes etc can really kill performance for some workloads. Still, there are probably workloads well suited to such a monster system.

    3. No, I will not fix your computer
      Boffin

      Re: Clock speed?

      This is an Oracle chip, one would assume that it's for Oracle workloads?

      I "suspect" a few things;

      It's been specifically designed with databases in mind, specifically "in memory" (SGA-style) databases. Looking at the DDR pipelines and how Bixby links processors, it looks tuned for this very thing. It's not (specifically) a number cruncher, but it does allow a "critical thread" mode, which is like a hardware version of CPU pinning (very useful for database and log writer processes; a software sketch of plain pinning follows at the end of this comment).

      It's socket compatible with the M5, allowing simple upgrades; this matches an old Sun practice of bringing out a tweaked version of each new chip generation.

      For the same reasons it will have entry-level-compatible clocks, with the tweaks being in efficiency, usually between 50-100% (i.e. potentially doubling throughput).

      Some redesigns will be required for faster clocks: possibly simple firmware upgrades, possibly something more significant such as board revisions, like the ability to upgrade 880s to 890s.

      The internal tweaking is only half the story; a new chip which has all the tweaks and then some will be announced, possibly similar to the Fujitsu Sparc X+ (for a large virtualisation machine?).
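
      As an aside, here is a minimal sketch of what plain software CPU pinning looks like, i.e. the thing the hardware "critical thread" support is being compared to. Linux's sched_setaffinity(2) is shown purely for illustration (Solaris has processor_bind(2) for the same job), and the core number is made up:

      /* Minimal sketch: pin the calling process (e.g. a log writer) to one core.
       * Linux-only illustration; logical CPU 4 is an arbitrary, made-up choice.
       * Build with: gcc -o pin pin.c
       */
      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(4, &set);                                     /* run only on logical CPU 4 */

          if (sched_setaffinity(0, sizeof(set), &set) != 0) {   /* 0 = this process */
              perror("sched_setaffinity");
              return 1;
          }
          printf("pid %d pinned to CPU 4\n", (int)getpid());
          /* ... latency-sensitive work (e.g. the log writer loop) goes here ... */
          return 0;
      }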

  2. Gene Cash Silver badge
    Coat

    Singin' dat song...

    I got 96 sockets but an Intel ain't one?

  3. Nate Amsden

    256 socket Xeon

    http://www.sgi.com/products/servers/uv/configs.html

    2k cores (4k threads) & 64TB memory to boot.

    Not sure what would need that much but...it's been shipping for a while. Not likely going to be supported to run Oracle DB.

    But for HPC-type stuff... which seems to be the thing that wants large amounts of memory; other workloads seem to do very well with SSD instead of massive memory.

    1. Kebabbert

      Re: 256 socket Xeon

      The SGI server is a pure cluster used for parallel HPC work; just study the case studies.

      http://www.sgi.com/products/servers/uv/resources.html

      The benchmarks are all parallel workloads, for instance the SPECjbb2005 benchmark, where it says "32 blades" in the picture. Those 32 blades are connected via a 7.5GB/sec NUMA link, which does not provide much bandwidth, so it does not scale too well. I wonder how many hops there are, worst case, across this NUMA link? Latency could be quite bad, slowing everything down.

      http://www.sgi.com/products/servers/uv/benchmarks.html

      Here is another Linux cluster, the ScaleMP server. It too has 1000s of cores, and it uses a single-image Linux kernel. It uses a software hypervisor that tricks the Linux kernel into believing it is running on an SMP server, not an HPC cluster:

      http://www.theregister.co.uk/2011/09/20/scalemp_supports_amd_opterons/

      I don't expect these Linux HPC clusters to run SMP workloads the way the Oracle server can. I've yet to find a 32-CPU Linux SMP server for sale; there are no Linux SMP servers for sale with 32 sockets, and there never have been. But there are large 2048-core Linux clusters for sale, as these HPC servers show. Just not 32-socket SMP machines.

      1. Macka
        Thumb Down

        Re: 256 socket Xeon

        It's not a cluster, it's a single system image (SSI) server, i.e. it is running just one copy of the Linux kernel. Read the overview properly.

        http://www.sgi.com/products/servers/uv/index.html

        You also need to learn the difference between NUMA and SMP. You won't find anything anywhere that's SMP at 32 sockets; they are all (various flavours of) NUMA. Current SMP designs hit a brick wall at 8 sockets and the M6 is no exception, as this article clearly states.

        1. forum24

          Re: 256 socket Xeon

          Not true for IBM's Power architecture, which has had 32-way-capable SMP POWER CPUs available for years.

          http://www.ibm.com/systems/power/hardware/795/

          http://en.wikipedia.org/wiki/POWER7#Specifications

          1. Macka

            Re: 256 socket Xeon

            I have never worked with IBM kit so had to quickly learn what this architecture looked like.

            http://www.redbooks.ibm.com/redpapers/pdfs/redp4640.pdf

            That's not SMP. Processors only have direct access to local memory and I/O within their own "Processor Book". The books are then linked together using an "SMP Fabric". SMP in this context is a creative marketing phrase. That design, if you ignore the proprietary language and marketing, looks just like NUMA.

            You won't find 32-processor systems that are true SMP, because true SMP doesn't scale to that many processors without hitting serious performance bottlenecks. Read the "basic concept" section in this wiki; it explains very well the problem NUMA was designed to solve.

            http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access

        2. Kebabbert

          Re: 256 socket Xeon

          "...You also need to learn the difference between NUMA and SMP. You won't find anything anywhere that's SMP at 32 sockets, they are all ( various flavours of ) NUMA. Current SMP designs hit a brick wall at 8 sockets and the M6 is no exception, as this article clearly states...."

          I agree that the Oracle T5 and M6 SMP servers are up to 8 sockets. I have never said anything else. The M6 server will glue several SMP servers via NUMA, yes.

          But earlier SPARC servers were also SMP, for instance the Sun M9000 with 64 CPUs. So, yes, I know the difference between SMP and NUMA; I am talking about it, am I not?

          1. Macka

            Re: 256 socket Xeon

            "...But earlier SPARC servers was also SMP, for instance the Sun M9000 with 64 cpus. So, yes, I know the difference between SMP and NUMA, I am talking about it, am I not?..."

            Yes, you're talking about it, but no, I'm afraid you don't know the difference between SMP and NUMA. Let's drill a bit deeper into your example, the M9000.

            http://www.oracle.com/technetwork/articles/systems-hardware-architecture/m-seriesarchitecture-163844.pdf

            Actually, we can start with the diagram of the M5000 on page 22, and the following sentence, which says:

            "SPARC Enterprise M8000 and M9000 servers feature multiple system boards that connect to a common crossbar."

            If you have a design where sockets on a system board only have access to limited local memory, and must traverse an interconnect, like a crossbar, to access memory on another system board, then that is a NUMA, or NUMA-derived, design. It's most certainly not SMP. An SMP design is one where all CPUs have equal access to all memory. The problem with that is it doesn't scale well, hence the reason NUMA was invented.
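
            To make the distinction measurable, here is a minimal sketch (Linux/libnuma, shown purely for illustration and nothing to do with how the M9000 or M6 implement things; the node choice and array size are arbitrary) that times a pointer chase over memory on the local node versus the most distant node. On an ideal SMP machine the two figures would match; on NUMA hardware the remote one is visibly worse:

            /* numa_latency.c - rough local vs. remote memory latency, Linux + libnuma.
             * Build with: gcc -O2 numa_latency.c -lnuma
             */
            #include <numa.h>
            #include <stdio.h>
            #include <time.h>

            #define N (1 << 24)                     /* 16M longs = 128 MB, bigger than the caches */

            static double chase(long *buf)
            {
                /* full-period LCG permutation, so each load depends on the previous one
                 * and the prefetcher cannot hide the latency */
                for (long i = 0; i < N; i++)
                    buf[i] = (i * 2654435761UL + 1) % N;

                struct timespec t0, t1;
                long idx = 0;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (long i = 0; i < N; i++)
                    idx = buf[idx];
                clock_gettime(CLOCK_MONOTONIC, &t1);
                fprintf(stderr, "%ld\n", idx);      /* keep idx live */
                return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / N;
            }

            int main(void)
            {
                if (numa_available() < 0) { puts("no NUMA support"); return 1; }
                int far = numa_max_node();

                long *local  = numa_alloc_onnode(N * sizeof(long), 0);    /* memory on node 0 */
                long *remote = numa_alloc_onnode(N * sizeof(long), far);  /* memory on farthest node */
                if (!local || !remote) { puts("allocation failed"); return 1; }
                numa_run_on_node(0);                                      /* run ourselves on node 0 */

                printf("local  (node 0) : %.1f ns/access\n", chase(local));
                printf("remote (node %d): %.1f ns/access\n", far, chase(remote));

                numa_free(local,  N * sizeof(long));
                numa_free(remote, N * sizeof(long));
                return 0;
            }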

            1. Kebabbert

              Re: 256 socket Xeon

              "....Yes you're talking about it, but no I'm afraid you don't know the difference between SMP and NUMA. Lets drill a bit deeper into your example, the M9000....Actually, we can start with the diagram on page 22 of the M5000, and the following sentence that says: "SPARC Enterprise M8000 and M9000 servers feature multiple system boards that connect to a common crossbar."

              If you have a design where sockets on a system board only have access to limited local memory, and must traverse an interconnect, like a crossbar, to access memory on another system board, then that is a NUMA, or NUMA derived design. It's most certainly not SMP. An SMP design is where all CPUs have equal access to to all memory. The problem with that is it doesn't scale well, hence the reason why NUMA was invented...."

              Yes, I do know all this. I was the one talking about NUMA and SMP, wasn't I? It seems you claim no 32-socket SMP servers exist. If that is true, then maybe you accept that no 32-CPU Linux SMP servers exist either. So again I am correct: there are no 32-CPU Linux SMP servers.

              The M9000 is not true SMP, I know. But Sun worked hard to make it act like SMP. This shows up in the memory latency: it is quite high on the M9000, but not catastrophically so, and it is quite tight, with a small spread between best case and worst case. A true SMP server would have no difference at all; there would be no best-case or worst-case latency. So, in effect, the M9000 server is SMP.

              Look at a true NUMA system instead, such as the 8192-core Linux ScaleMP server with 64TB RAM. This server is a cluster running a single image of Linux, and like all clusters it has a very wide spread between best-case and worst-case latency:

              http://forums.theregister.co.uk/forum/1/2011/09/20/scalemp_supports_amd_opterons/

              "...I tried running a nicely parallel shared memory workload (75% efficiency on 24 cores in a 4 socket opteron box) on a 64 core ScaleMP box with 8 2-socket boards linked by infiniband. Result: horrible. It might look like a shared memory, but access to off-board bits has huge latency..."

              So it does not really matter if a server is a mix of NUMA and SMP, as long as the latency is good (because the server is well designed). If a NUMA server had extremely good latency, it would for all intents and purposes act as an SMP server, and could be used for SMP workloads.

              -The Sun M9000 has 500ns as worst-case latency, and best case... maybe(?) 200ns or so. The M9000 did 2-3 hops in the worst case, which is not that bad; you don't have to consider it a problem when programming. In effect, it behaves as an SMP server.

              -A typical Linux NUMA cluster has a worst case of... something like 10,000ns or even worse. The worst-case numbers were really hilarious, and made you jump in your chair (was it even 70,000ns? I don't remember, but it was really bad, and the worst-case numbers were representative of a typical cluster). In effect you cannot program a NUMA cluster as if it were SMP; you need to program differently. If you assume the data will be quickly accessed, and the data is far off in a Linux cluster, your program will grind to a halt. You need to allocate data to close nodes, just like cluster programming. And if you look at the use cases and all the benchmarks on all Linux NUMA servers, they are all cluster HPC workloads. None is used for SMP work.

              This Oracle M6 server is a set of SMP islands connected via NUMA links. I am convinced Oracle is building on the decades of experience of the Sun server people, so the M6 server has a very small difference between best-case and worst-case latency. It will act like an SMP server, because databases are typical SMP workloads, and Oracle cares strongly about database servers. The Oracle M6 server will be heavily optimised to make sure you don't have to make more than 2-3 hops to access any memory cell in the entire 96TB RAM server - it acts like an SMP server, fine for databases and other SMP workloads.

              I suggest you study the RAM latency numbers for the M9000 and for the Linux NUMA clusters. The differences are huge: 500ns worst case, versus 10,000ns (or was it 20,000ns?). One can be programmed like an SMP server; the other needs to be programmed as a cluster.

              So, you are wrong again.

      2. forum24

        Re: 256 socket Xeon

        No 32-way SMP Linux server? Maybe not from Intel or Oracle, but here's your server from IBM (and it's no new server, btw):

        http://www.ibm.com/systems/power/hardware/795/

        1. Kebabbert

          Re: 256 socket Xeon

          "...No 32-way SMP Linux Server? Maybe not from Intel or Oracle but here's your Server from IBM (and that's no new Server btw.)..."

          No, the IBM P795 is an AIX server. IBM, or someone else, might have compiled Linux for it, but the P795 is an IBM AIX server. I doubt any of those huge, extremely expensive P795s are running Linux. If you want Linux, you can get a cheap 8-socket server; no need to spend truckloads of money. Or you get a Linux cluster with 1000s of cores, such as the SGI NUMA server.

          There is no vendor designing and selling 32-way Linux SMP servers; there has never been such a big Linux server for sale. Compiling Linux for this huge 96-socket M6 server does not make it a Linux server. No one sane would run Linux on it. It is designed to work with Solaris, which can handle the extreme scalability of almost 10,000 threads. Oracle sells Linux, but I doubt they will offer Linux on this huge M6 server.

          HP compiled Linux for their... Superdome(?) or was it Integrity(?) server with 64 sockets and called it "Linux Big Tux". But Linux scaled awfully badly on 64 CPUs. Later, when HP started to offer Linux on that HP-UX server, it could only run Linux in a partition, and the largest Linux partition was... 16 CPUs, I think. Or was it 8 CPUs? I don't get it: sell a 64-socket server, and only offer a Linux partition with up to 16 CPUs. If Linux could handle 64 CPUs, it would have been offered. But probably partitions larger than 16 CPUs would be too troublesome, with too high a support cost. Still, the Superdome/Integrity is designed for HP-UX and is an HP-UX server. Linux on it is an afterthought and does not work well.

          I wonder if the IBM P795 also runs Linux in partitions of up to 16 CPUs, and no larger? Wouldn't surprise me.

          BTW, the SGI Altix and UV servers are clusters. They are using NUMA, and NUMA is regarded as a cluster; NUMA is the same thing as a cluster. Read here if you don't believe me:

          http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access#NUMA_vs._cluster_computing

          1. Justicesays
            Boffin

            Re: 256 socket Xeon

            "I wonder if the IBM P795 also runs Linux in partitions of up to 16 cpus, and no larger? Wouldn't surprise me."

            A look at the redbook ( http://www.redbooks.ibm.com/redpapers/pdfs/redp4640.pdf ) for the P795 says:

            Scale to 256 cores/1024 threads - for the RedHat 6.3 and SLES 11 installations

            Not sure what RH 5.8 and SLES 10 go up to; they are only 2-way SMT capable, so that is going to cut your thread count in half for starters.

          2. Macka

            Re: 256 socket Xeon

            "...But Linux scaled awfully bad on 64 cpus...."

            That used to be the case several years ago, but is not the case today. That's old FUD.

            "...BTW, the SGI Altix and SU servers are clusters. They are using NUMA. And NUMA is regarded as a cluster. NUMA is the same thing as Cluster. Read here if you dont believe me:..."

            No it's not. When the author of that article says "One can view NUMA as a tightly coupled form of cluster computing", he's making an analogy; he's not saying the two are the same. In a cluster, each system runs its own instance of an operating system and can boot or shut down separately from the others. That's the litmus test.

            1. Kebabbert

              Re: 256 socket Xeon

              >>"...But Linux scaled awfully bad on 64 cpus...."

              >"That used to be the case several years ago, but is not the case today. That's old FUD."

              Well, it was true a few years ago that Linux scaled awfully badly on the 64-CPU HP Itanium Superdome (or was it Integrity). Linux had something like ~40% CPU utilisation on the HP-UX "Big Tux" server in official benchmarks from HP. So it was not FUD back then; it was true. FUD is basically a negative lie with the purpose of trash-talking a product. But the bad scaling of Linux never was a lie; it was a fact. And facts are not FUD. You seem to believe that negative criticism is FUD, but it is not. If the criticism is true, and negative, it is relevant to the discussion.

              You claim that Linux's scaling was not bad, that it was in fact good, but nobody would call ~40% CPU utilisation good. So why are you trying to make it look like Linux scaled well back in the day? It never did. Why are you FUDing about this? I would never make things up the way you seem to do.

              And you claim that Linux scales well today, and that the bad scaling was old FUD. Well, do you have any benchmarks on 32- or 64-CPU Linux servers that prove Linux scales better today? No, you have not, because there are no 32-CPU Linux servers for sale. Sure, you can compile Linux for the HP server, or for the SPARC server (Linux compiles for SPARC), but those are HP-UX and Solaris servers. And I also suspect that Linux scaling would be very bad on those. Solaris is fine-tuned to scale well on this 96-socket monster; Linux is not.

              So, you claim that Linux scaled well back in the day, because the reports of bad scaling were "FUD". But Linux did not scale well back in the day, according to official HP benchmarks. This was true, and it is still true today. Why do you lie about this?

              You now also claim that Linux scales well today. How do you know this? On what do you base your wishful thinking? There are no 32-CPU Linux servers to benchmark, so no such benchmark exists. Are you lying about this, too?

  4. Anonymous Coward
    Anonymous Coward

    Bandwidth != Latency

    "to get around the obvious delays from hopping, Oracle has over-provisioned the Bixby switches so they have lots of bandwidth."

    That's like saying that because cars can only go at 70mph the government has built eight-lane motorways. The number of lanes on the motorway doesn't alter the time it takes you to get to your destination.

    Similarly, increasing bandwidth doesn't improve latency (at least not once you have enough bandwidth that you don't have traffic jams caused by capacity limits, in the motorway analogy).
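
    To put rough numbers on that (purely made-up figures, not Bixby's actual specifications): the time to move a block of data is roughly

        t = latency + size / bandwidth

    For a 64-byte cache line over a link with 100ns latency and 12GB/s of bandwidth, that is 100ns + 5.3ns, about 105ns. Doubling the bandwidth to 24GB/s only shaves that to about 103ns, while halving the latency gets you to about 55ns. Extra lanes only help once the road is congested.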

    As the old saying has it "Bandwidth is easy (just add more pins), but latency is hard".

    1. Bronek Kozicki
      Pint

      Re: Bandwidth != Latency

      Keeping the number of hops low helps to maintain low latency. If you look at the picture carefully you will find that all CPUs can connect to others directly (7 cores), via a single BX (12 cores), or via a BX and a CPU (i.e. a single hop, the remaining 12 cores). All this with 4Tb/s of bandwidth to maintain cache coherency across sockets - I think that's some really nice engineering.

      Well of course how it performs in practice ... I'd like to learn it too :) Over beer, if not using it in person :)

      1. Skoorb

        Re: Bandwidth != Latency

        I wonder if the compiler / kernel will be able to attempt to 'intelligently' allocate or shift threads to cores where the other threads that need to 'talk' to the first one are a small number of hops away.

        i.e. thread A keeps shoving stuff at thread B, so move A to a core one hop from B.

        Whilst very complicated, I wonder if it would be worth the effort.
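
        For what it's worth, the manual version of that is already possible from userland; the hard part is doing it automatically. A minimal sketch (Linux pthreads shown purely for illustration; core numbers 2 and 3 are made up, and a real implementation would read them from the machine's topology) of placing two communicating threads on nearby cores:

        /* Sketch: place two threads that talk to each other on nearby cores.
         * Core numbers 2 and 3 are arbitrary; on real hardware you would read
         * the topology (e.g. from hwloc or /sys) to find cores one hop apart.
         * Build with: gcc -pthread place.c
         */
        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>

        static void pin_to_core(pthread_t t, int core)
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(core, &set);
            pthread_setaffinity_np(t, sizeof(set), &set);
        }

        static void *producer(void *arg) { (void)arg; /* thread A: shoves stuff at B */ return NULL; }
        static void *consumer(void *arg) { (void)arg; /* thread B: consumes it       */ return NULL; }

        int main(void)
        {
            pthread_t a, b;
            pthread_create(&a, NULL, producer, NULL);
            pthread_create(&b, NULL, consumer, NULL);

            pin_to_core(a, 2);      /* keep A ... */
            pin_to_core(b, 3);      /* ... next to B, so their shared data stays close */

            pthread_join(a, NULL);
            pthread_join(b, NULL);
            puts("done");
            return 0;
        }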

        1. Kebabbert

          Re: Bandwidth != Latency

          "...I wonder if the compiler / kernel will be able to attempt to 'intelligently' allocate or shift threads to cores where the other threads that need to 'talk' to the first one are a small number of hops away...."

          I suspect it is not really necessary. On a NUMA cluster you must design your program like that, because worst-case latency might be 10,000ns or more. But on this M6 server the worst case is only 2-3 hops away, which makes for good latency. So you just program this server as a true SMP server, just like normal programming: copy your binaries to this server and off you go. You don't need to recompile and redesign to make sure your data is in close nodes. No need for that on this server; just treat it like a true SMP server, because of the good design.

          On the other hand, if you want to port your Linux applications to a NUMA cluster such as the SGI Altix servers, you must redesign and rewrite the programs. Otherwise performance will grind to a halt if you do not make sure that the data is located in close nodes. In the worst case, it is almost like accessing a hard disk, because the nodes holding the memory you need are so far away. So these Linux NUMA clusters are aimed at HPC parallel workloads; just check the use cases and the benchmarks. They are all HPC cluster stuff. No SMP stuff.

  5. asdf

    sales drones on here aside

    The tech specs are impressive and everything but when is SPARC going to actually make money? As far as I can tell Oracle is now a few billion in the hole due to trying to make a go of it with SPARC. Larry can only squeeze the existing customer base so hard before it starts to decline. As Trevor says Oracle doesn't have customers, it has hostages.

    1. Kebabbert

      Re: sales drones on here aside

      I explained this three years ago, and will write it again. Sun had 30,000 customers. Oracle has 340,000 customers - enterprise customers. If Oracle can make even a small fraction of them switch to SPARC and Solaris, then SPARC will be more widespread than it ever was under Sun.

      And Oracle is working hard to make Oracle Database run best on Solaris. The TPC-C world record for a single system is 8.3 million tpmC on an 8-socket SPARC T5 server - and that is only 8 sockets. Imagine running the Oracle database on this M6 monster, the biggest and baddest on the market. With 96TB of RAM, plus RAM compression, it will give extreme performance. And Oracle will fine-tune it to run databases.

      Solaris recently got a tuning that decreases Oracle DB latency by 17%. Oracle controls everything - the hardware, the OS, Java and the database, the entire stack - so Oracle can fine-tune the whole system to run databases faster than anyone else. Certainly faster than Linux on 8-socket servers. And this control will continue to show in even better database performance. Oracle has only just started.

      No, I am not affiliated with Oracle in any way. I just happen to like good tech; I am a tech geek. If IBM had better tech, I would like that too. I liked POWER7 earlier; it was really good back in the day.

      1. asdf

        Re: sales drones on here aside

        > If Oracle can make a small fraction of them switch to SPARC and Solaris, then SPARC will be more wide spread than ever under Sun.

        Wow, somebody doesn't understand the market worth a damn. The total market for proprietary UNIX boxes today is smaller than Sun's SPARC sales in the late dot-com era, by a significant amount. The whole segment is disappearing, and none of it faster than SPARC. The tech is neat, but businesses are a lot more into outsourcing to the cloud and commodity x86 servers in data centers these days. HPC might be the only market this segment has in two years. IBM will keep making chips because game console sales help them with volume, but I don't see anybody else (in the traditional non-x86 UNIX market) being in the game in three years unless the market suddenly changes.

        1. Kebabbert
          Happy

          Re: sales drones on here aside

          "...Wow somebody doesn't understand the market worth a damn. The total market for proprietary UNIX boxes is less today than SUNs SPARC sales in the late dot com era by a significant amount...."

          I agree that Unix is diminishing. But I distinguish between Unix servers (where Linux is getting in, though only for low-end servers, because there are no high-end Linux servers for sale) and database servers. Oracle is starting to create extremely fast database servers that happen to run Solaris and SPARC. And all Oracle enterprise customers running huge Oracle databases will be very interested in fast Oracle database servers. They might not be interested in SPARC, but they are interested in the database servers (which happen to use SPARC and Solaris). So, if even a tiny fraction of all database customers want better database servers, they must switch to the database servers from Oracle (using SPARC and Solaris).

          :)

  6. Billl
    Trollface

    re: Wow somebody doesn't understand the market worth a damn.

    I don't mind calling someone out, but you gotta know what you're talking about when you do it.

    No one is using Power/Cell in their new game consoles. That will not keep IBM making chips. What will keep IBM making chips is the billions of dollars they get from large corporations and governments that still rely on RISC/Unix - not just from the initial sale, but from the add-on services and solutions.

    Though I disagree with TPM's comment about weak cores, he seems to get it in relation to high-end systems. The industry is consolidating. HP has given up at the high end. Oracle, IBM and Fujitsu are the only ones that seem to care about high-end computing. As long as there is demand (which there still is - the bleeding is leveling out), IBM/Oracle/Fujitsu will still make money. Personally, I don't see Fujitsu doing SPARC in 10 years, but who knows? I'm not talking HPC here; there's very little profit there -- just ask SGI. HPC is about advertising, not profits.

    1. asdf

      Re: re: Wow somebody doesn't understand the market worth a damn.

      >No one is using Power/Cell in their new game consoles.

      Wii U CPU - Tri-Core IBM PowerPC "Espresso"

      Granted, it's not selling worth a damn, but the volume of consoles has done a lot to help IBM justify making chips. In the future, we will see.

      I understand HPC is not where the profits are, but it is one use I see not disappearing any time soon. Deny it all you want, but sales of non-x86 proprietary Unix boxes are dying on their ass.

      1. Down not across

        Re: re: Wow somebody doesn't understand the market worth a damn.

        (or perhaps I should've followed suit and titled this "Wow somebody doesn't understand large enterprises")

        "Deny all you want but sales of non x86 proprietary Unix boxes are dying on their ass."

        Deny it all you want, but despite declining sales of low/mid-range non-x86 boxen, the market for the large high-end ones is not going anywhere. The demand is likely to remain, if not increase (and not just from the TLA organisations). It would not be unlikely at all for these kinds of beasts to be on the shopping list for the Fortune 100.

      2. Billl
        Happy

        Re: re: Wow somebody doesn't understand the market worth a damn.

        "Wii U CPU - Tri-Core IBM PowerPC "Espresso""

        Thanks for that. I was not aware of what chip Nintendo was using. Well, Nintendo has never seemed to care as much about pure performance in the past, so if they got a good deal from IBM, I guess it makes sense. Sony and Microsoft went the obvious route, both abandoning Power.

        I still don't see this relatively small (money-wise) portion of the market helping IBM. IBM is pushing hard to get others to use or even copy their chips; I think that is more about expanding the use of Power than about trying to make money directly. Oracle (Sun) has been doing that for years with varying levels of success. Fujitsu is the most recent example, using SPARC chips for their systems, and there are more examples at sparc.org.

        "Deny all you want but sales of non x86 proprietary Unix boxes are dying on their ass."

        Not denying, just a long-time observer of the market. RISC was dying 10 years ago, but then the market for all chips exploded, taking RISC with it. Unix/RISC is still much more trusted than Linux/Intel.

        Also, I know you probably understand the difference, but x86 is proprietary -- AMD had to reverse-engineer it. SPARC and Power are actually open -- one of them more open than the other -- which is why we see Fujitsu working so closely with Oracle on SPARC. So your comment about "proprietary Unix boxes" is misguided at best. x86 is a proprietary "industry standard", not an open standard by any account. That said, there is a chip nipping at the heels of x86 and really giving Intel heartburn at the low end... that chip is ARM -- and it's RISC! So to say one platform has won is premature. This show is just getting interesting.

  7. chacha

    Bull bullion

    http://deinoscloud.wordpress.com/category/performance/

    "The bullion server developed by Bull has been ranked as the world’s fastest X86 enterprise-class server, according to the international SPECint®_rate2006[1] benchmark. The benchmark – which was run on a ‘fat’, 16-socket configuration – highlights bullion’s exceptional characteristics, which make it almost twice as powerful as all its rivals. Featuring 160 Intel® Xeon® E7 cores and 4 Terabytes of RAM, the bullion server achieved peak performance of 4,110 according to the SPECint®2006 benchmark. The fastest competitive system only managed a performance of 2,180."
