Core blimey! 10,000 per rack in startup's cloud-in-a-box

Say hello to hyperdense server and NVMe storage startup Aparna Systems and its Cloud-in-a-Box system. Originally named Turbostor and founded in February 2013, the company has emerged from stealth with the Orca µCloud, a 4U enclosure that converges compute, storage and networking, and offers, Aparna claims, up to 10,000 cores …

  1. Voland's right hand Silver badge

    3.5-inch hard disk drive, draws less than 75 watt

    No thanks. You'd need either liquid cooling or the airflow of a GE or Rolls-Royce A380 engine to keep an enclosure full of 3.5-inch "drives" like that at operational temperatures.

    Now, 7.5W would have been interesting. 75W for a 3.5-inch drive-like cartridge? No thanks; that is beyond all the thermal, airflow, etc. design limits for a datacenter.

  2. Neil Spellings

    I'd hate to see the thermal footprint of 1000 cores in a single chassis.

    It sounds a bit like HPE's Moonshot platform, which already packs 45 x 8- or 16-core Intel Xeon Ds into a 4.3U chassis.

    1. Anonymous Coward
      Anonymous Coward

      Exactly...

      60 slots at 75 watts each = 4500 watts per 4U unit; ten in a rack is 45kW per rack.

      A typical colo rack will provide 2.5kW or 4kW total. Good luck finding someone to host more than ten times that!
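
      A quick back-of-envelope of that in Python (just a sketch; the 60 slots, 75W per slot and ten chassis per rack are the figures quoted in this thread, not vendor specs):

          # Rough rack power from the figures above (all assumed, not vendor specs)
          slots_per_chassis = 60     # cartridge slots per 4U enclosure, as quoted
          watts_per_slot = 75        # claimed max draw per cartridge
          chassis_per_rack = 10      # ten 4U units per rack

          chassis_watts = slots_per_chassis * watts_per_slot    # 4500 W per 4U
          rack_kw = chassis_watts * chassis_per_rack / 1000     # 45 kW per rack
          print(f"{chassis_watts} W per chassis, {rack_kw:.0f} kW per rack")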

      It would make an effective space heater though...

      1. Anonymous Coward
        Anonymous Coward

        Re: Exactly...

        Maybe you could wrap copper pipes around it, fill the pipes with water, then use the steam to generate electricity.

      2. Korev Silver badge
        Megaphone

        Re: Exactly...

        Not to mention the noise from the fans to cool the thing.

        What you'll need to speak to other people in the datacentre ->

      3. Fortycoats
        Flame

        Re: Exactly...

        I imagine it's like the exhaust of the Batmobile.

        Just put some meat on a skewer, instant BBQ!

      4. tflopper

        Re: Exactly...

        I know of carriers who do 45kW per rack with air cooling; there are data centers that can support this level of airflow, and most of the newer ones designed for web-scale can do 30kW per rack pretty easily. So that level of density and power is not unheard of, and it's certainly achievable if done right.

  3. This post has been deleted by its author

  4. Anonymous Coward
    Anonymous Coward

    Eh?

    "GPS clock to support applications that require precise timing"

    Am I missing something here?

    Nearly every data centre I've been in tends to be a big tin hut with zero chance of getting a signal in, let alone once you're inside a rack surrounded by a mass of other kit...

    Or is there some sort of external aerial connector?

    1. Anonymous Coward
      Anonymous Coward

      Re: Eh?

      "Or is there some sort of external ariel connector?"

      You can rent roof space for antennas or satellite dishes in several DCs I have used.

  5. John Smith 19 Gold badge
    Unhappy

    30kW a rack?

    My, how times have changed.

    I think DARPA were talking about a petaflop to simulate human intelligence.

    So how much can you put in a rack of these?

    1. Korev Silver badge
      Boffin

      Re: 30kW a rack?

      Petaflop machines are now approaching routine; you can see them moving from national centres to bigger university setups. It'll be very interesting to see if DARPA are correct. The computer for the Blue Brain project is almost there, at 839TF.

    2. Anonymous Coward
      Anonymous Coward

      Re: 30kW a rack?

      Quick google gives these estimates:

      * 38 Petaflops and 3.2PB of RAM (2010)

      https://www.wired.com/2010/08/reverse-engineering-brain-kurzweil/

      * Similar for a cat-like brain in real time (2010)

      http://www.strategicbusinessinsights.com/about/featured/2010/2010-06-aicontroversy.shtml#.WQmoilPyt_8

      * 1 Exaflop (2012)

      https://singularityhub.com/2012/11/01/the-race-to-a-billion-billion-operations-per-second-an-exaflop-by-2018/

      A Xeon chip can do "more than half a teraflop" when running the right instructions:

      https://www.microway.com/knowledge-center-articles/detailed-specifications-intel-xeon-e5-2600v3-haswell-ep-processors/

      So your rack of 600 processors might be roughly 0.3 PFlops. You'll need somewhere between 125 and 3300 racks, consuming from roughly 6MW to around 150MW.
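
      A quick sketch of that arithmetic in Python (every figure here is an assumption pulled from this thread: 0.5 TF per Xeon, 600 processors per rack and roughly 45kW per rack; nothing is a measured number):

          # Back-of-envelope: racks and power for a brain-scale simulation
          tflops_per_cpu = 0.5      # "more than half a teraflop" per Xeon (assumed)
          cpus_per_rack = 600       # 60 slots x 10 chassis, one CPU per slot (assumed)
          rack_kw = 45              # from the 45 kW/rack figure above

          rack_pflops = tflops_per_cpu * cpus_per_rack / 1000   # ~0.3 PFlops per rack
          for target_pflops in (38, 1000):    # 38 PF (Kurzweil) up to 1 EF (1000 PF)
              racks = target_pflops / rack_pflops
              megawatts = racks * rack_kw / 1000
              print(f"{target_pflops} PF -> {racks:.0f} racks, {megawatts:.0f} MW")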

      Frankly, I'd be surprised if it hasn't been done already somewhere.

      Furthermore, once research has demonstrated the best algorithm for simulating a human brain, you can build custom silicon to do the grunt work, which should be substantially more efficient.
