IT systems capacity planning. This is hard ... but how hard? Inquiring minds wish to know

Technology in the 2020s is very forgiving, particularly if our processing happens in the cloud. By this, we mean that if things start to perform suboptimally, the issue is usually quite easy to resolve. Right-click -> Add storage. Right-click -> Add RAM. Job done. Which is fine, but it leads us into temptation – we don’t do …

  1. Pete 2

    choke points

    I used to do a lot of capacity planning. The actual hardest part was persuading t' management that:

    a) I knew what I was talking about

    b) they would have to spend money

    However, the traditional process as described in ITIL and ISO9000 has largely become obsolete. What seems to happen more is that systems become vulnerable to unpredicted and often unknown behaviours due to crappy software design and architectures. Oh .... and networks.

    Things that cannot be solved simply by running down to PC World for all the SIMMs they have in stock, or invoking the magic of capacity on demand.

  2. SusiW

    Our planning ahead - until now.

    The computers I am asked to quote for are used as general workstations and are expected to have at least a 5yr working life. (Applications range from basic coding to Autodesk CAD suites, and all stops in between)

    As such, we go for systems that are not-quite bleeding-edge, to avoid the massive premiums those attract, but that will still 'fly' on pretty much anything thrown at them.

    Last purchasing cycle was 2016. Minimum of 64GB RAM, latest mobo support chipset, and a minimum of 12 cores (6+6). A 'fast' 1TB enterprise spinning-rust drive was the standard boot device, with slower secondary storage. Graphics was whatever was the fastest non-mental-priced card at the time.

    The only upgrades we've done since 2016 are changing out the boot drives for SSDs and giving new video cards, where required, to our intensive CAD users.

    Win11 has put a bit of a downer on this long-term strategy, but as hardware scarcity and stupid prices have skewed the market, our poor users will just have to soldier on for now.

  3. The Basis of everything is...

    Never trust the users

    Last year I was called in to help with sizing a new system as the infra team were pushing back on the requests despite the functional experts insisting it was as per guidelines and signed off by the users. To be fair, most of them had English as a second language, and some key people were only accessible via translators and would not join calls.

    The critical questions were quite easy: the average number of shipments received per day, and the average number of different items in each shipment, giving an average number of boxes of stuff to be processed each day - there's a bit more to it for the fine detail, of course.

    The answers given turned out to be the number of shipments received per day, and the total number of parts in each individual line item on the delivery note. When a customer is bringing in small things like washers and screws in boxes of 1000 at a time, and they're buying by the pallet too, then one movement suddenly becomes millions.

    This reduced the sizing from 37 TB to 1.8 TB, although convincing our functional experts where they'd gone wrong, and getting them to go back and reconfirm with the customer, took far longer than actually finding the source of the error. I'm still owed beer for that one too.
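The gap between the two readings is simple arithmetic. A minimal sketch with invented numbers - only the two interpretations of the question, not the actual figures, come from the comment:

```python
# Hypothetical sizing sketch. All quantities below are invented for
# illustration; the point is the interpretation gap, not the values.

shipments_per_day = 50
line_items_per_shipment = 40     # what the question actually asked about
parts_per_line_item = 1_000      # e.g. a box of 1000 washers

# Correct reading: one movement per box (line item) received.
correct_movements = shipments_per_day * line_items_per_shipment

# Wrong reading: one movement per individual part inside each box.
inflated_movements = correct_movements * parts_per_line_item

print(f"correct:  {correct_movements:,} movements/day")
print(f"inflated: {inflated_movements:,} movements/day")
print(f"inflation factor: {inflated_movements // correct_movements}x")
```

Every downstream estimate (storage, throughput, licence counts) scales with the movement count, so a misread question multiplies the whole sizing by the box size.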

  4. autogen

    Capacity Planners: a dying breed

    When servers were really expensive, you had a specialised department and process-level tooling to measure performance, followed by what-if scenario building. The rot set in with virtualisation, and now cloud.

    As one of the handful of professional unix/windows/vms capacity planners left (and therefore unemployable), I observe that enterprises spend literally millions on real-time monitoring and alerting and literally nothing on capacity management. So: spend on software that warns you your IT is broken, and nothing on predicting when those breaks will happen.



    Um, here is my bill from Mr Cloud for new resources, but there is no way to verify the rationale. You cannot make it up. When the banking regulators realise the tail is wagging the dog, watch out for the fines.
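The "predicting when these breaks will happen" part need not be exotic. A minimal sketch of trend-based capacity forecasting - the usage samples and the limit are invented for illustration:

```python
# Fit a least-squares linear trend to daily usage samples and predict
# when the trend crosses a fixed capacity limit.

def days_until_full(samples, limit):
    """samples: one usage reading per day, oldest first.
    Returns days from the last sample until the linear trend reaches
    `limit`, or None if usage is flat or shrinking."""
    n = len(samples)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    # Day index where the trend line hits the limit, relative to today.
    return (limit - intercept) / slope - (n - 1)

usage_gb = [7000, 7080, 7155, 7240, 7310, 7395, 7470]  # a week of samples
print(days_until_full(usage_gb, limit=10_000))
```

A straight line is the crudest possible model, but even this answers the question the alerting tools never ask: not "is it broken?" but "when will it break?"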

    1. Ken G

      Re: Capacity Planners: a dying breed

      Turn it around - at some loads and variability levels public cloud makes sense financially, at others it makes more sense to buy tin. Use your capacity forecasts to show where the crossover points are between Opex and Capex.


Biting the hand that feeds IT © 1998–2022