Where’s the best place for your infrastructure bottleneck?

As technology evolves, bottlenecks in the infrastructure move around. The switch speed leapfrogs the server speed, then the servers are upgraded with faster LAN cards and the spinning disks in the SAN become the weak link, so you upgrade and find that the SAN fabric is holding you back. How does everything interact? And as the …

  1. Dominion
    Flame

    Tell us something we didn't already know?

    "Yes", said the development team, "we know the code is shit but it'll take years to rewrite it, throwing tin at it will be cheaper in the long run..."

  2. Anonymous Coward

    You're doing IT wrong

    Doing both the hardware and software sides, I've always kept an eye on the bottlenecks. I'm not a patient soul to begin with, and it's not like I can point the finger at anyone other than myself. (I refuse to ever blame my team.) Within a given budget the only place I had any flexibility was my/our time, so it's monitor, tweak, test, repeat. If anyone's got a better approach, I'm all ears. And I have awfully big ears!

    BTW: my repeated experience here is that the I/O channel from disks to servers is my consistent constraint. That pesky budget. I'm almost always aware of the best price/performance everywhere else, including software. With storage it's pretty hard pulling signal from noise.

  3. Nate Amsden

    i/o capacity

    The article implies you can make your storage faster by making the pipe (bandwidth) bigger. In my experience, at least, the pipe is almost never taxed (even 4Gb FC). I know there are cases where it is, but I suspect they are in the minority.

    Of course experienced tech readers know this already.

    I'm more concerned with queue depths at lower speeds (mainly because older gear has smaller queues) than with throughput.
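
    A rough back-of-the-envelope on that point (the block size, latency and IOPS figures below are illustrative assumptions, not measurements from any particular array):

        # Back-of-the-envelope only; figures are assumed round numbers.

        link_MBps = 4 * 1000 / 10          # ~400 MB/s usable on 4Gb FC (8b/10b encoding)
        block_KB = 8                       # typical small random I/O
        iops_to_fill_link = link_MBps * 1024 / block_KB
        print(f"IOPS needed to saturate 4Gb FC at 8 KB blocks: {iops_to_fill_link:,.0f}")

        # Little's Law: I/Os in flight = IOPS x service time.
        latency_ms = 5                     # spinning-disk-era service time (assumed)
        target_iops = 20_000
        in_flight = target_iops * (latency_ms / 1000)
        print(f"I/Os in flight to sustain {target_iops:,} IOPS at {latency_ms} ms: {in_flight:.0f}")
        # If the HBA or array port queue depth is smaller than that, the queue -
        # not the 4Gb pipe - is what caps you.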

    My servers are overkill but the cost isn't high: 2x10G links for VM traffic, 2x10G for vMotion and fault tolerance, 2x1G for host management, and 2x8G FC for primary storage (boot from SAN). With the exception of FC, everything else is active/passive for simplicity.

    That's 11 cables out of the back of each DL38x system, including power and iLO. Good thing I have big racks with lots of cable management. Four labels per cable means it takes a while to wire a new box, but we've added boxes at most twice a year over the past three years.

    Maybe someday I'll have blades

  4. Alan Brown Silver badge

    Missing the point.

    "Do you go for SAS or SATA, or 7.2k, 10k or 15k spinning disk, or one of today's flavours of SSD, or one of the new flavours that are promised for a few months' time?"

    80% of system speedups are achievable with code cleanups; only 20% with hardware.

    The winning formula for storage systems at the moment is hierarchical caching (big SSDs in front of spinning oxide - in both directions), but the problem is that whilst it's a known quantity (ZFS), there are a bunch of snake oil salesmen who think it's simply a matter of bolting ZFS onto their existing RAID hardware (it isn't) or that you can skimp on RAM (you can't).

    Get it right and it will sing like a well-tuned Lamborghini. Get it wrong and you may as well be driving a Trabant. Either way, if you slam your array with shedloads of conflicting random reads/writes it'll turn into a snail on valium (see the comment above about where 80% of the speedups come from).
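
    A minimal sketch of why the cache hit rate makes or breaks a hybrid setup like that (service times are assumed round numbers, not benchmarks):

        # Effective latency of an SSD tier sitting in front of spinning disk.
        SSD_MS, HDD_MS = 0.2, 8.0          # assumed per-I/O service times

        def effective_latency(hit_rate):
            """Average per-I/O service time for a given cache hit rate."""
            return hit_rate * SSD_MS + (1 - hit_rate) * HDD_MS

        for hit in (0.95, 0.80, 0.40):
            print(f"hit rate {hit:.0%}: ~{effective_latency(hit):.1f} ms per I/O")

        # ~95% hits: ~0.6 ms - the well-tuned Lamborghini.
        # ~40% hits (conflicting random I/O blowing the cache out): ~4.9 ms,
        # i.e. you're mostly back on the spinning oxide.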

    As for Ethernet: until the last 18 months 10GE was simply too expensive for general use (especially at Cisco pricing). Broadcom's Trident2 chipset has turned that on its head. Unfortunately 10GE copper is still distance-limited (it needs Cat6 or better cabling to go more than 30m; realistically you'll be lucky to go 20m on Cat5e in our experience - the quoted distances are optimistic and patch panel quality is a big factor) and power-hungry (~4.5W per transceiver, vs laser transceivers using around 100mW). That power consumption is outside an SFP+ enclosure's specs, so you can't run a mix of copper/fibre on the same switch unless the maker provides for it from day one (the only solution is a chassis switch or a mixed stack of copper/SFP switches).
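
    Putting rough numbers on the power difference (per-port figures as quoted above; the 48-port switch and the slot budget are assumptions about typical kit):

        # Rough per-switch PHY power, approximate and vendor-dependent.
        PORTS = 48
        COPPER_W, OPTIC_W = 4.5, 0.1       # 10GBASE-T vs short-reach optics, per port

        print(f"{PORTS} copper ports: ~{PORTS * COPPER_W:.0f} W")
        print(f"{PORTS} optical ports: ~{PORTS * OPTIC_W:.1f} W")
        # An SFP+ cage is typically budgeted at roughly 1-1.5 W per slot, which is
        # why copper 10GbE doesn't simply drop into an optical switch after the fact.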

    Bonding/EtherChannel/LAG/LACP works up to a point, but individual data flows are limited to the individual link speeds (usually 1Gb/s), and more often than not that individual link has to carry _all_ communication between a given server and client pair (most switches don't have the smarts to do L4 distribution). Depending on the hashing, it's quite possible to end up with one leg maxed out while the others sit almost idle. I see this regularly, but it's usually because one client is drinking from a firehose. The fix is to switch to 10Gb/s as we replace kit - and get really friendly with fiberstore.com, as local vendors are taking the piss on AOC and twinax cables.
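
    A toy sketch of that hashing behaviour (illustrative hash, not any particular switch's algorithm):

        # The switch hashes the src/dst pair; every packet of that flow then
        # follows the same member link of the bundle.
        import zlib

        LINKS = 4                          # members in the LACP bundle

        def egress_link(src, dst):
            """Pick a member link from an L2/L3-style hash of the endpoint pair."""
            return zlib.crc32(f"{src}->{dst}".encode()) % LINKS

        pairs = [("10.0.0.5", "10.0.0.20"), ("10.0.0.5", "10.0.0.20"),
                 ("10.0.0.6", "10.0.0.21"), ("10.0.0.7", "10.0.0.22")]
        for src, dst in pairs:
            print(f"{src} -> {dst}: leg {egress_link(src, dst)}")
        # The same pair always lands on the same leg, so one client drinking from
        # a firehose maxes out a single 1Gb link while the other legs sit idle.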

    The REAL bottleneck is your router. Even if you have TRILL switches, as soon as you talk between IP subnets the traffic has to be gatewayed somewhere - often tromboning across the data centre only to end up back in the same rack. This is being worked on - see https://tools.ietf.org/html/draft-ietf-trill-irb-05 - but such implementations have yet to be deployed. (Advice from the vendor for my TRILL systems was originally that distributed routing would arrive in Q2-Q3 2015, but this has slipped to Q3 2015-Q1 2016.)

    The real lesson is that speed/performance costs money. Efficiency costs money too, but usually less.

  5. Dave 107

    #InfrastructureMatters

    Let me start by saying I work for an infrastructure vendor. What I see every day appears to be different from what you're suggesting.

    1) Data is moving to the edge and peer-to-peer - more storage in servers and software-defined storage, vSAN, etc. Wringing out every bit of latency matters. Lots of I/O bays.

    2) Flash and tiering are the new defaults: flash DIMMs, PCIe flash, SSDs. The price/performance barrier has been broken. Why buy 15k spinners when, for a couple of dollars more, you get flash?

    3) Fibre Channel is alive and healthy. 16Gb isn't 40 or 100, but 40 and 100 Gigabit Ethernet switch ports are still pricey. There may be a switch to Ethernet coming, but it's 18 months out; by then FC will have ramped up. Consider security as well when you put everything on Ethernet.

    4) The server is becoming the switch - OpenStack, NSX, etc.

    5) It's expensive and time-consuming to re-write code. Putting the code in memory or on flash is a fast and easy band-aid.
