Highly Available: Systems design and stuff like that

Last week we ran a live video webcast on High Availability IT - and very good it was too. This webcast is now highly available - can you see what we're doing here? - as you can stream it on demand through our lovely media player. Plenty of slide action too. Our panel: Reg presenter Tim Philips (who sings, badly) and Microsoft's …


This topic is closed for new posts.
  1. Anonymous Coward

    cloud stuff

    Personally I think something like an internal cloud is interesting from a Linux high-flexibility / availability viewpoint - but I'm sure there are plenty of people with money to spend too...

  2. Tuomo Stauffer

    Well, maybe learn some history

    Great - but only after these systems get even near a global Tandem - oops, sorry, HP NonStop - network! Nothing wrong with all the marketing hype, that's how we all make the money, but it's really hilarious today what people don't know! Unfortunately it's mostly the customers - I have been on both sides for 40+ years, and the customers / users (actually their management?) are always fooled by nice, bright PowerPoint presentations.

    Something is wrong in this picture, and lately it has got even worse. IT seems to go for commodity: no knowledge needed, just buy whatever the used-car - oops, IT-product - salesman offers! If there are problems later, just blame your own IT people, fire them, hire some new ones - everybody happy?

  3. Anonymous Coward

    Distributed workload clustering and design

    Personally, for HA I like to look at all aspects of the project, from DR to running cost, along with expandability/scalability and initial cost.

    If you take that approach you not only get a robust system but also tick a lot of the other boxes at the same time. I was never a fan of a DR site doing nothing most of the time when you could fold it into the design, reducing costs while producing something much more robust and scalable across a few sites. You have to sort out shortest-path routing and a few other aspects, but it is much more cost effective. You're also able to just add a server to increase workload processing at the various stages - much more scalable over the life of the system - and it means you don't have to buy all the processing power you need off the bat on day one, as you can easily add more by dropping in a new box/server/blade. Split this across sites and you have redundancy/DR covered, as well as leveraging local savings on comms traffic if the system is global in use.

    But you do need to spend a lot more time on the design of not only the application but also how the data is stored and accessed, and try to reduce as many of the imposed data dependencies as possible, as well as defining those dependencies with a timeframe of tolerance.

    But even with DR or defined HA systems you still have the problem of data replication. So it's best to plan as many issues out of the equation as possible and not let the system impose any dependencies - or you're just approaching it all wrong.

    But it always comes down to cost and response times offset against uptime - each extra 0.000001% is exponential in cost. That's always going to be the case, but if you plan it right a good HA approach can actually save you money, both directly and indirectly.
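    The "exponential cost of nines" point above can be made concrete with a quick back-of-the-envelope calculation (a minimal sketch, not from the original post - just the standard downtime-per-year arithmetic): each extra nine of availability cuts the permitted outage window by a factor of ten, while the engineering cost to close that shrinking window keeps climbing.

```python
# Back-of-the-envelope: allowed downtime per year at each level of
# availability. Each extra "nine" shrinks the permitted outage window
# by 10x, which is why every increment costs so much more to achieve.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes(availability: float) -> float:
    """Minutes of downtime permitted per year at a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for nines in range(2, 7):
    availability = 1.0 - 10.0 ** -nines
    print(f"{availability:.5%} uptime -> "
          f"{downtime_minutes(availability):10.2f} min/year allowed down")
```

    So "three nines" (99.9%) still allows roughly 8.8 hours of outage a year, while "five nines" (99.999%) allows barely five minutes - a difference that usually has to be bought with multi-site redundancy of exactly the kind described above.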

    In many respects it's like a car: sure, you can buy a Porsche and be faster than a cheap 2CV, but if you're using that Porsche to move people, whilst it can do four trips to the 2CV's one, your 2CV can carry twice as many, and you can afford to have a lot more of them - and just buy another, cheaper than any Porsche, should you need extra passenger space. They're also cheaper to run, cheaper to service, and it's easier to replace one of several cheap cars than a single point of failure. But I suppose you would have a spare Porsche in a garage/DR site collecting dust a lot of the time.
