Facebook unfriends 19-inch data center racks

Social media giant Facebook had built precisely one data center in its short life, the one in Prineville, Oregon, before it had had enough of an industry standard that was part of the railroad infrastructure and then the telephone infrastructure build-outs and bubbles: the 19-inch rack for mounting electronic equipment. Open …

COMMENTS

This topic is closed for new posts.
  1. Paul Crawford Silver badge
    WTF?

    Oh FFS do it right!

    So let me get this right: a 19" rack is too small, so let's go non-standard for only 4" more?

    Why not go for 2*19=38" wide and be done with it, so you can mount old and new stuff in one rack?

    Somehow I doubt that a few inches are so make-or-break for cooling; the real issue is just what is in a box and how it is cabled. Most racks seem to end up messy for cabling, more so if you have servers on sliders to gain access and so have big loops on support arms at the back. Why not have some "plug in" rack so the inter-unit cabling can be fixed, and the unit pulls out completely for repair, etc?

  2. Allison Park

    It's time to kill blades and go back to nodes and chassis

    From what I have seen, IBM's PureSystems has the right strategy. Implementing 23-inch racks would create a nightmare and does not really solve the problem. Blades are a bad form factor in today's compute environment. Blades came out before widespread x86 virtualization, and the main promise was to cut the compute space in half.

    Today it's about 30:1 virtualization consolidation ratios and having enough I/O to handle that capability, which is better served by nodes rather than blades.

    1. Anonymous Coward
      Anonymous Coward

      Re: It's time to kill blades and go back to nodes and chassis

      Agreed, I was just writing about this in a forum about Dell's new "rapier thin" blades. People seem to want to use smaller and smaller blades because they think increased CPU density does something for them. If they are running a rendering cluster, or some other processing-intensive workload, they are right. In most situations, people don't come anywhere near tapping out CPU before they run out of DIMM or IO... usually DIMM. Nodes make more sense 90% of the time. SMP racks make even more sense if you are trying to cram tons of memory into a box to run a mega virtual server or big database server, but then you have more networking and management hassles. Blades will save a bit of power, cooling, and space, but most people have plenty of power, cooling and space (unless they are Facebook). What they don't have is power and cooling built for ultra-dense blade racks pulling 10-20 kW per rack... so implementing blades everywhere usually requires a reworking of the data center, or a bunch of half-full racks.
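      (A back-of-the-envelope sketch of the DIMM-before-CPU point, in Python; the node size, VM sizing and overcommit ratio below are hypothetical, not figures from this thread:)

      # Hypothetical node: 2 sockets x 8 cores, 192 GB RAM
      cores, ram_gb = 16, 192
      vcpus_per_vm, ram_per_vm_gb = 2, 8     # assumed VM sizing
      cpu_overcommit = 4.0                   # assumed vCPU:pCPU ratio

      vms_by_cpu = int(cores * cpu_overcommit / vcpus_per_vm)  # 32
      vms_by_ram = int(ram_gb / ram_per_vm_gb)                 # 24
      print(f"CPU allows {vms_by_cpu} VMs, RAM allows {vms_by_ram}"
            " -> memory runs out first")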

      1. A Non e-mouse Silver badge

        Re: It's time to kill blades and go back to nodes and chassis

        I used to look at blades, but I hit the same problem: You just couldn't put enough memory onto the darn things.

        Then there was the cost. Blades, chassis', etc cost a lot of money. It's way cheaper to get a shed load of 1U boxes. Yeah, they're not as dense, but with the power/heat load of modern systems, that's less of an issue.

        Blades will save a bit of power, cooling, and space, but most people have plenty of power, cooling and space (unless they are Facebook)

        I looked at putting some kit in a commercial data centre. I would have had to rent twice as many racks as I wanted, because they couldn't handle the power/heat load I wanted to put in. (They had a limit of 3kW per rack.)
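        (To put rough numbers on that kind of limit - a minimal sketch where only the 3kW cap comes from the post above; the server count, per-server draw and rack height are assumptions:)

        import math

        servers = 40             # hypothetical fleet of 1U boxes
        watts_per_server = 350   # assumed average draw per box
        usable_u_per_rack = 42   # assumed rack height
        power_cap_w = 3000       # the colo's 3kW per-rack limit

        racks_by_space = math.ceil(servers / usable_u_per_rack)               # 1
        racks_by_power = math.ceil(servers * watts_per_server / power_cap_w)  # 5
        print(f"space needs {racks_by_space} rack(s), power needs {racks_by_power}")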

        1. Anonymous Coward
          Anonymous Coward

          Re: It's time to kill blades and go back to nodes and chassis

          The plural of chassis is chassis.

          Whilst it is easy to see how there could be confusion, a very definitely incorrect way to form a plural is by using an apostrophe in any case.

          1. A Non e-mouse Silver badge

            Re: It's time to kill blades and go back to nodes and chassis

            Whilst it is easy to see how there could be confusion, a very definitely incorrect way to form a plural is by using an apostrophe in any case

            Mea maxima culpa.

            Time for me to re-read Eats, Shoots & Leaves, methinks.

  3. Nate Amsden

    I agree with Paul

    Stupid idea ... build up higher rather than wider, e.g. shipping containers with ~57U racks.

    Myself, I love big racks (both kinds). I have grown quite fond of Rittal's 47U, 40-42" deep, but most importantly 32" wide (still 19" on the rails); it makes life easier for cabling and PDUs. In most facilities today you run out of power long before you run out of rack space, so consuming a few more inches of floor space on either side (8" total vs. a typical rack, which I believe is 24" wide) doesn't really cost you anything, and gets you tons of space for cables. I'm sure other manufacturers have similar racks; Rittal is pretty common though.
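    (A quick sketch of that floor-space trade-off; only the 24" and 32" widths come from the post above - the depth and per-rack power figures are assumptions:)

    # Footprint of a power-bound rack at two widths (hypothetical numbers)
    depth_in = 42        # assumed rack depth
    power_kw = 4.0       # assumed per-rack power budget

    for width_in in (24, 32):
        area_sqft = width_in * depth_in / 144.0
        print(f"{width_in} in wide: {area_sqft:.1f} sq ft,"
              f" {power_kw / area_sqft:.2f} kW per sq ft of footprint")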

    I was very impressed with Microsoft's container/rack design when it came out last year; it seems leagues ahead of Facebook's first-generation design (not sure if they've revamped it much since the original release).

  4. Benjamin 4
    WTF?

    If they're gonna go non-standard then they should do something more than add a couple of inches. This seems like they're going non-standard for the sake of shouting "hey, look at us". I've seen racks of 2kW power amplifiers. Now you can't tell me that they don't take more power and produce more heat than servers. And all those nice thick power and speaker cables, and shielded audio source cables that refuse to bend with a sledgehammer! If they work fine in 19" racks then servers will work fine as well.

    And if you're going to change a decades-old standard, make it a revolutionary change.

    1. A Non e-mouse Silver badge

      I've seen racks of 2kW power amplifiers. Now you can't tell me that they don't take more power and produce more heat than servers. And all those nice thick power and speaker cables, and shielded audio source cables that refuse to bend with a sledgehammer! If they work fine in 19" racks then servers will work fine as well.

      I've done some small work with this audio (and lighting) kit. You're right, they all fit in fine. But you don't have racks of these amps sitting back to back, crammed in as densely as possible.

      And those nice fat three-phase power cables fitting in? Don't make me laugh. I've done enough gigs where there were two or three feet of floor space behind the racks taken up with all the three-phase power cables.

  5. jake Silver badge

    I'll give up my 19" racks ...

    ... when you pry them out of my cold, dead data center.

    To say nothing of my audio & video gear ...

    More seriously, I'll not change what I have now. Why bother?

  6. Wanda Lust

    US telecom form factor maybe

    19" is the EIA racking form factor that telcos may have standardised in N America but I started my career in UK telecoms and the racks were definitely wider and higher, so much so that each aisle had a fun travelling ladder. A big thick copper busbar fed -50V DC along the aisle.

    Interesting to see the mould/mold being recast, but when you're defining technology at this industrial scale you do have the freedom to make the rules at the triple-rack level.

    In my day job nowadays I see few environments that require more than one commercial blade chassis (not that there aren't still many over-provisioned CPU & memory shops out there).

    1. swschrad

      Re: US telecom form factor maybe

      Which form factor is 24-inch wide racks, with standard heights of 6, 7, and 8 feet. But add in supports in class-3 earthquake areas and such, and the racks could easily be extended to 16 feet. The "battery" busses are positive-ground 48 volts nominal, which means 53 volts in the rack.

      The kit is standard, stocked, 100 years old, and all the costs are amortized already. You have heat, power, cooling, density, and weight standards already.

      Your cost to adopt: zero.

      Facebook obviously hasn't done any research.

  7. Robert Heffernan
    Thumb Up

    I like it.

    I know that 19" racks are pretty ubiquous and you can get gear for them everywhere, the point here is for everything else 19" is a perfectly acceptable and decent standard.

    It's not about how much thermal load you can stick in a rack, like the racks of power amplifiers; it's about more efficient use and better organization of space in the data center. The 21" I/D racks will let you stick 3 slimline mainboards into a chassis, no chassis space is wasted on a mains switch-mode PSU, a single OU is slightly taller than an RU for better thermal flow, and the power supply is centralized with mandatory redundancy and dual AC/DC supply options for battery backup. The locations of Ethernet switches are standardized, but the user can relocate them elsewhere if desired.
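    (For the OU-vs-RU point, a minimal sketch; the 44.45mm standard rack unit and 48mm OpenU are the published unit heights, while the usable column height is an assumption:)

    # How many slots fit in the same column at each unit height
    RU_MM, OU_MM = 44.45, 48.0   # standard rack unit vs Open Rack OpenU
    interior_mm = 2100           # hypothetical usable column height

    print(f"standard U slots: {int(interior_mm // RU_MM)}")  # 47
    print(f"OpenU slots:      {int(interior_mm // OU_MM)}")  # 43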

    I sat down and read the spec; for the data center, which is what it was designed for, it makes loads of sense, and you can mount 19" gear in it if required. I see it as nothing but a win and, aside from the spike in organ donation sign-ups, it's the only truly useful product Facebook has designed!

    Hell, I work in an engineering shop and I want to build one for the shits and giggles!

    1. JimC

      But isn't that just an argument

      against the essential dumbness of having a pair of mains transformers in every damn little unit? You wouldn't do that with a blade, so WTF do we do it with servers? To me, the answer is an optional replacement for the PSU in the server units that's just a simple, minimal distribution board taking a feed from a single triply-redundant PSU in each rack.

  8. mhoulden
    Pint

    Fine, but where do you install your rack-mountable fridge for your beers? Just a shame Canford don't do 19" rack-mount wine racks any more.

    1. jake Silver badge

      @mhoulden

      Who the fuck would spend that kind of coin on a simple fridge?

      The fridge in my machineroom/museum/mausoleum/morgue is a fridge. Mostly, I store film and photographic paper in it. It's about 6 feet tall, and cost all of US$395 about six years ago. It doesn't have a freezer. It does contain beer (usually), and water (always).

      My wine & root cellar is under the house ...

  9. Anonymous Coward
    Anonymous Coward

    Double fail

    Standards are great. They're even better when some upstart bozo with no experience thinks his ideas are better than what everyone else has been doing for decades, and comes up with a new one. Just sweep aside the entire installed base, why don't you. NIH FTW. Or if it isn't NIH, show how; the onus is on the proposer here.

    So you got tired of 19"? Want to go to 21"? What about that other telco standard, 23" then? Or if you must change over everything, why not go all the way and get with the times, go metric? Too hard? But expecting the world (95% of which is metric already, along with most engineering even in backwardia and all science) to go along with your new non-metric "standard" isn't? Why, good show there... bitch.

    1. Matthew 3

      Re: Double fail

      "why not go all the way and get with the times, go metric? "

      BMW tried this with wheels for certain models, I seem to recall, in the 90s. I guess proponents of Teutonic efficiency dislike the mixture of inches (wheel diameter) and mm (tyre width).

      If you're adamant about keeping the original wheels on one of those models, every new tyre you buy costs very silly amounts of money.

      Thinking an idea is better doesn't mean it'll pan out the way you want. For another example, take a look at Brunel's 'broad gauge'.

      1. Anonymous Coward
        Anonymous Coward

        And that is relevant, how?

        There are plenty of things that aren't metric, but for which the parts aren't exactly cheap either. But even disregarding that, the comparison is silly. Do you care about "original parts" for a datacentre? I thought the idea was to stuff such a thing to the brim with the latest and then regularly upgrade all that.

        All I would care about there is parts that fit, for which standardisation is a good thing, in fact easily more important than the "quality" of the standard itself. The gain from larger-scale production driving down costs usually outweighs local inconveniences. So this here trick is expensive.

        So if you go down that route, start from scratch and such, at least kindly refrain from doing things that make you look like a bumbling hobbyist, like using customarily confusing units the rest of the world has all but gotten rid of.

  10. Anonymous Coward
    Stop

    Nice rack

    World+dog in telecoms uses 23" racks, not 19". 19" is for those pussycats in IT with their laughable Dells. Wonder why Facebook didn't choose 23" racks?

    More seriously, can anyone explain to me why data center designers try to cram everything in on a horizontal server orientation? Seems to me that you could use the case-less cookie-sheet server designs for density and turn them vertically so that you could pack more in, and have a more natural bottom to top airflow.

    1. Anonymous Coward
      Anonymous Coward

      Re: Nice rack

      "Seems to me that you could use the case-less cookie-sheet server designs for density and turn them vertically so that you could pack more in, and have a more natural bottom to top airflow"

      Actually, we DO use caseless cookie-sheet server designs and turn them vertically -- that pretty much describes a blade server.

      As for the rest - I was wondering how many people would point out the major fail in the article claiming that the telecom standard was 19". In one of the datacenters I built I included extra 23" racks and bought some reducing ears to bring them down to 19" for the few pieces of equipment that needed them.

  11. JaitcH
    WTF?

    If not the 19 inch standard, then why not the 23 inch standard?

    The 19-inch (482.6 mm) rack claims its origin as a mounting system for railroad (railway) signalling relays. It has served many industries well. It's even used in ships.

    If this minor-league server centre player is unhappy about 19 inches, why not switch to the 23-inch (580 mm) standard, which even the EU recognises as the ETSI rack, named for the European Telecommunications Standards Institute?

    The vertical spacing is identical and 19 inch adapter plates are commonly available.

    It appears that Facebook's Frank Frankovsky doesn't know his rack hardware too well.

    Finally, if the 19 inch isn't satisfactory because it gets overfilled, why does Frankovsky think the larger standard won't suffer a similar fate? Don't believe me? Just go check out an airport baggage carousel.

  12. This post has been deleted by its author

  13. localzuk

    Overfilling?

    Surely if people are overfilling or badly laying out their racks, those people are at fault, not the rack itself?

    Not to mention, so many organisations buy poor-quality racks which lack space for cable management too.

    We don't *need* a new standard for racks, we just need people to work with what is available properly.

    1. graeme leggett Silver badge

      Re: Overfilling?

      If the problem is overfilling of racks then it's an operator problem not a problem with the 19-inch format.

      If more cooling is required then perhaps they should either

      1) only fit as much in a rack as they are supposed to

      2) set out more space between the 19 inch rack frames to keep cabling tidy and allow more airflow.

      That said, perhaps a new format would be a good idea. But let's move on from 19-inch and 23-inch terminology and have one called 0.6 metre (or meter), since the world is more metric than Imperial these days.

      One last thought: are Facebook pushing a different format so that, if it gets adopted as a standard, there will be more production and it won't cost them as much as buying custom?

      1. TeeCee Gold badge
        Mushroom

        Re: Overfilling?

        Oh come on!

        When has there ever been a "Mine's bigger than yours" argument held in anything other than inches?

  14. Anonymous Coward
    Anonymous Coward

    Frank Frankovsky

    Top comedy American name there!

    1. Gazareth

      Re: Frank Frankovsky

      Finally, somebody commented on it :)

      Cracking name!

  15. Anonymous Coward
    Anonymous Coward

    solution...

    Give every eskimo home a chuffing big tower server bolted outside like an AC unit. This way it keeps cool and warms the eskimo's house.

    Otherwise, who cares? Just make memory sticks smaller, use 2.5" SSDs and dinky fans. No need to reinvent the wheel.

    Lot of cost for 2"

  16. Nick Ryan

    Dell

    No matter what size the rack cases are horizontally, the damn Dell engineers will doubtless still make their rack-mounted kit inexplicably longer (deeper) than anything else.

    The problem is not so much the horizontal width; it's that in the real world every bit of kit tends to need cables attached to it to operate, and it's the cable management - including suitably separating power from data - that's the big issue. It doesn't matter how neat you try to be: as soon as you start having to swap kit out, the nightmares start. If this were solved acceptably, then the issue with airflow ought to be fixable as well, as a rat's nest of cables isn't usually the best for airflow.

    1. J. Cook Silver badge
      Go

      Re: Dell

      We started not buying the cable-management arms with our Dell servers, primarily because in a rack full of machines they caused more airflow problems than the cabling mess they solved. And in most cases, if we have to do things like add memory or replace non-hot-swap components, we have to unplug the power supplies anyhow.

      We are fond of the double-sided velcro wrap as a means to manage cabling bundles at least.

  17. Michael H.F. Wilkinson Silver badge
    Joke

    The only correct width is 42 (insert unit of choice)

    obviously

  18. Neil 38

    They're basing an entirely new standard on the one thing in computing that has an expiry date on it, the 3.5" HDD?

    I'm sure there are far more rack equipment consumers using co-location facilities than there are fortunate enough to have the budget to build an entire data centre for themselves from scratch. I really do hope this standard goes nowhere.

  19. Jame_s

    they're forgetting the garage rule

    that is, no matter how big the garage is, your crap will expand to fill the space.

  20. Milo Tsukroff
    Coat

    19" rack built minicomputers too

    Doesn't anybody remember that minicomputers were built on 19" racks too? Digital Equipment Corporation (DEC) and Wang, that I know of; probably others also.

    I'll get me coat.

  21. ratfox
    FAIL

    What's with the back to article link?

    There is normally a link at the top of the comments allowing you to go back to the article. For some reason, on this article, it disappears behind a cloud banner...
