'Til heftier engines come aboard, HP Moonshot only about clouds

The HP Moonshot hyperscale servers are not even fully launched, and Intel and Calxeda are already bickering about whose server node is going to be bigger and better when they both ship improved processors for the Moonshot chassis later this year. Other engines will be coming for the Moonshot machines, too, HP execs tell El Reg, …


This topic is closed for new posts.
  1. Paul Crawford Silver badge

    What, are they not also supporting the Itanium processor here?

    1. Dazed and Confused

      Heat is likely to be a problem

      With the fans they'd need to cool that many Itanium processors they'd probably manage a real life moon shot.

      1. Destroy All Monsters Silver badge

        Re: Heat is likely to be a problem

        I wonder whether that wouldn't be the case even with today's GPUs.

  2. J.Kleen

    Moonshot 1500 dimensions

    Although the Moonshot 1500 consumes 4.3U of rack space (for 180 servers), the rack rails delivered with the chassis are the same as those used in the just-announced HP SL4540 Gen8. The rails are flexible in design and placement, so three Moonshot 1500s consume only 13U, and nine chassis (1,620 servers) fit in 39U. That leaves 3U of a 42U rack for ToR switches, if they are required at all.
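    The rack arithmetic above is easy to sanity-check. A quick sketch, using only the figures quoted in the comment (4.3U and 180 cartridges per chassis; these are the commenter's numbers, not verified against HP spec sheets):

    ```python
    # Back-of-the-envelope check of the Moonshot 1500 rack maths.
    # Figures come from the comment above, not from HP documentation.
    chassis_u = 4.3            # claimed height of one Moonshot 1500 chassis
    servers_per_chassis = 180  # claimed server cartridges per chassis
    rack_u = 42                # standard full-height rack

    for chassis in (3, 9):
        used_u = chassis * chassis_u          # flexible rails allow fractional-U stacking
        servers = chassis * servers_per_chassis
        print(f"{chassis} chassis: {used_u:.1f}U for {servers} servers, "
              f"{rack_u - used_u:.1f}U spare")
    ```

    Three chassis come to 12.9U (hence "only 13U"), and nine chassis reach 1,620 servers in 38.7U, leaving just over 3U spare.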

  3. Anonymous Coward
    Anonymous Coward

    "You can come out with something at the speed of need"

    ...but ye canna change the laws of physics.

    That is, if you stick 450 Xeons in that much space, you're going to blow the power and heat limits of a standard rack out of the sky.

    Anyway, what happened to Seamicro, who were doing all this 2-3 years ago?

    Looks like they were bought by AMD but are selling boxes with Intel processors?

    1. Anonymous Coward
      Anonymous Coward

      Re: "what happened to Seamicro?"

      SM10000-64HD puts 768 Atom cores in 10U (76.8 per U).

      Moonshot puts 90 of them in 4.3U (20.9 per U).

      SeaMicro is roughly 3.7x as dense, but not as adaptable to new processors as the HP 1500 chassis. HP would appear to have sacrificed density for flexibility. It's not clear the market will reward them for that.
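      The density comparison works out like this (core and node counts as quoted in the comment, not checked against either vendor's spec sheets):

      ```python
      # Density figures quoted in the comment above; treat them as claims,
      # not verified specifications.
      seamicro_cores, seamicro_u = 768, 10    # SM10000-64HD
      moonshot_nodes, moonshot_u = 90, 4.3    # Moonshot 1500

      sm_density = seamicro_cores / seamicro_u   # cores per rack unit
      ms_density = moonshot_nodes / moonshot_u   # nodes per rack unit
      print(f"SeaMicro: {sm_density:.1f} per U, Moonshot: {ms_density:.1f} per U, "
            f"ratio {sm_density / ms_density:.1f}x")
      ```

      That gives 76.8 per U against about 20.9 per U, a ratio of roughly 3.7x in SeaMicro's favour.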

  4. Mad Mike

    Power v Performance

    As a general rule, for any given generation of chips, power consumed is in direct proportion to the processing power delivered. Yes, an Atom uses less power, but then it delivers a lot less processing performance as well. So, at best, using less capable chips is simply removing the need to have hypervisors to partition up bigger processors to ensure efficient loading.

    All this stuff talks about hundreds of servers, but each is substantially less powerful than a Xeon or Opteron x86 chip. So, are you really getting more processing power in a given space, or simply cutting it up without the need for a hypervisor? I don't really see the point of this for datacentre processing, as cutting up bigger servers with hypervisors allows much more processing power and workload to be supported than trying to use smaller servers. The flexibility the hypervisor gives allows different workloads to take the same processor at different times.

    It all rather looks like an attempt to fix a problem that has already been fixed. As people have said, what happened to Seamicro?

    Now, for laptops and other areas where power consumption can be traded down along with processing performance, this sort of thing makes sense, but that simply doesn't apply in the datacentre. Do HP really think they've come up with something no other manufacturer has thought of, or beaten them to market?

    1. Alan Brown Silver badge

      Re: Power v Performance

      Usually power requirements scale up much faster than performance, so you might more accurately say an Atom delivers less processing performance at a lot less power.

      Not that X86 has ever been particularly power efficient in any iteration.

      As for datacentre use: Virtualisation immediately costs between 10% and 30% of available system resources before you even start to run anything on it, and there are a bunch of reasons for wanting to run a bunch of low-power machines rather than virtualise 'em in one high-power box (such as being able to shut 'em down entirely to save even more power). Even so, HP seems to have developed a solution looking for a problem.

      1. Mad Mike

        Re: Power v Performance

        I agree that power drain goes up faster than processing power, but the latest power efficient Opterons and Xeons are pretty low power and still pack a much bigger punch. If you go for the special editions, you'll always pay well over the odds in both power and money.

        Virtualisation shouldn't cost 10-30%. Yes, x86 virtualisation is more expensive than, say, PowerVM, but you should really be able to keep it to 10% at most. It all depends on configuration and how much care you take. The perceived cheapness of x86 systems often results in companies just deploying another server rather than running a more efficient deployment model.

        I agree there are reasons for running small low-power servers rather than virtualising, but this is a small segment again. Even with virtualisation and current 'motion' technology, you can easily move everything to a smaller number of x86 servers, shut the rest down, and then bring them back up when required. Again, it depends on the deployment model and taking the time to do it right.

        I do totally agree though. HP have created a solution to, at best, some niche issues.

    2. Wilco 1

      Re: Power v Performance

      CPU power is quadratic with performance due to voltage/frequency scaling. All else being equal, it also holds when you compare high-end CPUs with lower-end ones: big, fast, out-of-order CPUs scale non-linearly with increased complexity and need to use leaky transistors to achieve their high frequencies. However, it is not necessarily true when comparing different microarchitectures, implementation qualities, processes or ISAs (e.g. Centerton doesn't look good compared to Calxeda).

      In general, if you have a parallelizable problem, using several slower cores will be more power efficient than one fast core. You can't use too many slow cores though, as the overhead of DRAM, communication etc. will eat away the efficiency gained. So the trick is to find the sweet spot where the amount of work done per Watt is maximized. My gut feeling is that this is just the start: the next generation of microservers in 2014 will become appealing enough for a wide market.
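      The sweet-spot argument can be sketched with a toy model: take the comment's assumption that per-node power grows quadratically with per-node speed, add a fixed per-node overhead for DRAM, NIC and so on, and spread a fixed amount of work across more, slower nodes. All the constants here are invented purely for illustration:

      ```python
      # Toy model of the work-per-Watt sweet spot described above.
      # Assumptions (all illustrative, none measured): per-node power is
      # quadratic in per-node speed, plus a fixed 2W overhead per node for
      # DRAM, NIC, etc., and the workload parallelises perfectly.
      def watts(nodes, total_perf=16.0, overhead_w=2.0):
          per_node = total_perf / nodes          # slower nodes as count grows
          return nodes * (per_node ** 2 + overhead_w)

      best = min(range(1, 65), key=watts)
      print("most efficient node count:", best)
      ```

      With these made-up constants the total power falls as work is spread out (the quadratic term shrinks), until the fixed per-node overhead starts to dominate; the minimum lands at 11 nodes. Changing the overhead moves the sweet spot, which is exactly the trade-off the comment describes.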

  5. Paul_Sinclair


    I noticed that FPGAs were mentioned by HP. SRC, a leader in FPGA technology, appears to be a member of the ecosystem. Is that next? Anyone have any news in that regard?

