Can virtualisation rejuvenate your old servers?

It is a commonly held belief that most servers sit around doing very little most of the time. So, according to the theory, it makes sense to take advantage of that otherwise wasted capacity by loading up a hypervisor and running multiple virtual servers on the same hardware. If only it were that simple. This kind of server …


This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    Maximising utilisation

    I've always seen the aim of maximising resource usage per se with VMs as a poor objective. To get close to full utilisation you'll be hosting too many machines on a piece of hardware, with far too much potential for resource contention. Better to use it for machine partitioning and mixed OS usage, and accept good but not full utilisation of the hardware. VMs' advantages lie in snapshotting, segregation and vMotion.

    1. Pete 2 Silver badge

      Virtual duplication

      Yes, if your systems are resource-limited, virtualisation is the WORST thing you can possibly do.

      When all the marketing bumf is stuffed down the sales-person's throat to get a few seconds of peace to quietly reflect on the mechanics, it becomes instantly obvious that when you have a limited amount of RAM (for example) you really, really don't want to waste it running multiple copies of an O/S, each in its own VM - when all you need is a single O/S instance running on the bare metal.

      Similarly, when you are I/O constrained, do you really want lots of bitty little instances, each with its own version of a disk block cache - all caching the same data? Or is it better to have one honkin' big cache with a longer tail, which achieves a better cache-hit ratio and therefore REDUCES the number of IOPS?
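      The shared-versus-partitioned cache point is easy to demonstrate with a toy simulation. Everything here - block counts, cache sizes, the skewed access pattern - is invented for illustration; it's a sketch of the argument, not a model of any real hypervisor:

```python
from collections import OrderedDict
import random

class LRUCache:
    """Minimal LRU block cache: remembers the last `size` distinct blocks."""
    def __init__(self, size):
        self.size, self.blocks, self.hits, self.misses = size, OrderedDict(), 0, 0
    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # refresh recency
            self.hits += 1
        else:
            self.misses += 1
            self.blocks[block] = True
            if len(self.blocks) > self.size:
                self.blocks.popitem(last=False)  # evict least-recently-used
    def hit_ratio(self):
        return self.hits / (self.hits + self.misses)

random.seed(42)
UNIVERSE, TOTAL_CACHE, VMS = 5000, 1000, 8
# Skewed access stream: a minority of "hot" blocks get most of the traffic.
stream = [random.randint(0, UNIVERSE - 1) ** 2 // UNIVERSE for _ in range(200_000)]

shared = LRUCache(TOTAL_CACHE)                              # one big cache
split = [LRUCache(TOTAL_CACHE // VMS) for _ in range(VMS)]  # same RAM, partitioned

for i, block in enumerate(stream):
    shared.access(block)          # one host caching everything
    split[i % VMS].access(block)  # each VM re-caches the same hot blocks

split_hits = sum(c.hits for c in split)
split_total = sum(c.hits + c.misses for c in split)
print(f"one big cache : {shared.hit_ratio():.1%}")
print(f"{VMS} small caches: {split_hits / split_total:.1%}")
```

      Because the hot blocks end up duplicated in every small cache, the partitioned setup wastes RAM on copies and its hit ratio drops - which is exactly the "longer tail" point above.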

      Even more, when your hardware is old (and unlike IT staff, it doesn't improve with age), a hardware failure can bring down half a dozen VMs rather than just one fully spec'd instance. And that isn't even the real problem - the real problem is the time needed to restart the virtualisation system and THEN all the VMs that depended on it. Can it possibly be faster to reboot 6 VMs than to restart one server host?

      So really all virtualisation does is slice the pie into segments and hand one to each application - rather than trusting each application not to take too big a bite every time the pie comes back round. It's still the same sized pie (except for the crumbs lost in the slicing-up process), and the only benefit of slicing it into different VMs is to stop one virtual environment from talking to (or leaking into) another. It doesn't magically make the pie - sorry: computer - bigger/faster/better; all it does is make more work for more sys-admins to monitor and try to keep running.

      1. Anonymous Coward
        Anonymous Coward


        But if you care about downtime you have multiple hosts, and you move the VMs you don't want to reboot, in real time, to a host that doesn't have a problem. Running VMs has been a blessing. You need to know what you're doing, but it's cut the time spent keeping things running a lot.

  2. _Absinthe_

    any real world experience with virtualisation?

    or is the author just regurgitating what he's heard elsewhere?

    "On already maxed-out servers, virtualisation will be a total waste of time" - this couldn't be further from the truth. Maxed-out servers are more likely to suffer component failure due to the consistently high workload on those components, and would therefore benefit greatly from virtualisation, since they gain a level of resiliency, portability and flexibility (assuming shared storage is used, which is a given with any even half-serious virtualisation project) that they would never have as a physical box. There are also other benefits for these servers, such as being able to dynamically provision extra resource to cope with increasing workloads over time, without having to take the application offline.

    1. kosh


      All of those benefits flow from using shared storage and a multi-tasking operating system; none of them is really attributable to virtualization.

  3. Nate Amsden

    yes but only a stopgap

    I was at a company about a year ago, and part of my initial push for virtualization was using old hardware. We had several HP DL585 G1s, each with 64GB of memory and 8 CPU cores (quad-socket, dual-core Opterons), so we deployed on those and ran for about a year or so with no real issues. We added some network ports and fibre channel storage connectivity. The CPUs were from ~2005 so performance wasn't the best, but they did the job - though not everyone has this type of system sitting around. CPU usage averaged sub-30%, and memory usage averaged ~80% towards the end. And we ran on vSphere standard edition, as the VP wasn't convinced to pay for anything more in the early days. As usual, once they saw how well vSphere worked they were more than happy to shell out for the higher-end versions on the next round of hardware.

    As for the capacity and power usage of those systems, however: a single quad-socket Opteron 6100 with 512GB of memory would have smoked all of our DL585 G1s (we had 4 or 5, I forget), with quite a bit of saved power too. Of course, the 6100 wasn't available when we first deployed those 585s with ESX.

  4. Anonymous Coward
    Anonymous Coward

    VM isn't a panacea

    The major issue isn't technical, it's managing expectations. VM saves the old application from the scrap heap, but customers then expect "forever" free support/maintenance. Rather than purchasing bare metal to replace it, customers believe their old hardware will do; the problem is that it no longer exists, and explaining that is a major headache, with procurement processes having not caught up with the idea of a boxless box, sending out asset stickers for assets they don't own or have. Prices for replacement systems seem to vary depending on who you talk to, and costs for fundamental licensing often get merged into "We bought a machine in 2000, it's still there, it says quad core Xeon, can't we use that?" (it was a Pentium Pro in the original purchase) "so why would we buy a new machine?"... Head-banging stuff! We saved the old application, we reduced the impact on the environment and the business, but we now have systems that will not die, and the customer is always right!

  5. Anonymous Coward
    Anonymous Coward

    I suspect

    That virtualisation is often just a gift from the marketing men to the salesmen.

    It's the sort of thing that is easy to sell to management, because they do not know what a proper multitasking machine is and what it can do. File servers with idle time? Combine them! Switch off some machines, or turn them over to running other stuff.

    Then again, I hear people implying that with three VMs you can get three times as much out of one machine. Quarts and pint pots!
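    The quarts-and-pint-pots arithmetic is worth spelling out. The utilisation figures below are invented purely for illustration:

```python
# Hypothetical: three servers, each averaging 15% CPU but sized for rare 60% peaks.
avg_loads  = [0.15, 0.15, 0.15]
peak_loads = [0.60, 0.60, 0.60]

combined_avg  = sum(avg_loads)    # ~0.45: fits comfortably on one box
combined_peak = sum(peak_loads)   # 1.80: does NOT fit if the peaks coincide

print(f"average demand on the shared host: {combined_avg:.0%}")
print(f"worst case if all peaks coincide:  {combined_peak:.0%}")
# Three VMs give you "three times as much" only while the guests stay idle;
# the pie is still the same size once they all get busy at once.
```

    Consolidation only harvests the idle time the separate boxes were wasting; it cannot create capacity beyond 100% of the one machine.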

    There is obviously a purpose to virtualisation. Partitioning of machines is as old as the err... machines, if not the hills. Multiple OSs; Multiple environments; isolation; testbeds etc etc.

    Even at home I make use of VMs. It is quicker and easier to start up XP in a VM than it is to reboot to XP and reboot back to Linux afterwards. I have a choice of eight different OSs/Versions for testing and evaluation.

    Virtualisation is, obviously, great. I just have a nagging feeling that it is often mis-sold.

    Disclaimer: have been retired for nearly a decade. Maybe I'm just out of date!

    1. JW Smythe


      No, you're pretty close to on target.

      Where I work now, we had a whole mess of servers running, and a power bill around $7,000/month. When I inherited the mess, we started looking hard at what each one was doing. Many were just on because "they've always been on". Some did trivial functions and had no load on them.

      Rather than reinventing their horribly designed wheel, I went the VM route. I built out a couple of new servers (mostly gaming parts in a mid-tower case) with six 3.8GHz cores, 16TB of storage and 16GB of RAM, for about USD 1,500 each. I use VirtualBox for my VMs, because I've been very happy with it over the last few years. And yes, I've tried the others.

      VMware has a migration tool to move a running OS from physical hardware to a VM. A little black magic later, and all the low-usage machines were moved. It made quick work of it. I did the migrations remotely, shut down the source machine when I was done, and then had someone local to the servers go and unplug them.

      I should clarify at this point: the "low usage" servers had things like the accounting software, Active Directory servers, and miscellaneous file servers that people just *HAD* to have, where the mapped drive letters couldn't change. For some of the applications I didn't have the option of installing them on a new server - disks were misplaced, or the special technique needed to get them working had left the company with previous employees.

      All in all, the two machines that I set up are handling the job very nicely, and have plenty of resources for other tasks.

      There was some discussion of moving the mail server over, which I vetoed. Mail servers thrash away at ditching spam, processing inbound and outbound queues, and dealing with the horrendous user requests (hey, let's search our 10GB mailbox for the word "A").

      We went through a cycle of pulling network cables to machines that didn't look like they were doing anything, and letting them sit for a week. Guess what? Most of them weren't doing anything.

      One advantage to this was memory usage. I was concerned that some of the old servers wouldn't survive a reboot. I also priced memory and found that a decent upgrade would cost several hundred dollars each. It's hard to justify that kind of money for machines that people barely use.

      I was lucky in that the old machines were single- or dual-core with up to 1GB of RAM and a primary drive of around 20GB. Ya, that old. I gave the VMs more memory as needed - some of them had been swapping horribly because they really needed about 1.5GB of RAM. Voila, swapping problem solved.
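      The back-of-envelope sums implied here are simple enough. The box count and the exact per-box upgrade price below are assumptions; the $1,500 / 16GB host figures come from the build described above:

```python
# Back-of-envelope consolidation arithmetic (illustrative numbers only:
# the box count and $300/box upgrade price are assumed, not from the post).
old_boxes       = 10
upgrade_per_box = 300    # "several hundred dollars each"
host_cost       = 1500   # one new VM host, gaming parts in a mid-tower
host_ram_gb     = 16
ram_per_vm_gb   = 1.5    # what the swapping guests actually needed

print(f"upgrading in place: ${old_boxes * upgrade_per_box}")
print(f"one VM host: ${host_cost}, room for {int(host_ram_gb // ram_per_vm_gb)} guests")
```

      On those assumed numbers, one consolidation host costs less than half of upgrading the old fleet in place - which is the justification argument being made above.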

      The users are happy that the machines are much quicker now. I'm happy that I can log into two machines and have the "consoles" in front of me.

  6. Tom Womack

    Three years isn't that long ago.

    The average three-year-old server was bought in mid-2008, so will have dual Harpertown quad-core Xeons, not 'single or dual cores'.

  7. Anonymous Coward
    Thumb Up

    "often mis-sold"

    "Virtualisation is, obviously, great. I just have a nagging feeling that it is often mis-sold."

    You and me both.

    I paid my dues in an era when proper computers could quite happily do more than one thing at once, even run more than one application suite at once on a good day. What's more, they could do it securely (most of the time, given reasonably competent sysadmins) because security was designed in from the ground up rather than being added on as a band-aid after the event. These computers could even do things reliably, given sufficient thought and investment.

    Then along came Windows boxes, "cheap" software and "cheap" hardware.

    Then along came server consolidation, and virtualisation.

    Next week it's the cloud's turn.

    There are a lot of people with short memories and big budgets (for advertising, or for purchasing).

    Once upon a time the corporate Information Services department served the needs of the business. Nowadays the IT Director thinks his "estate" (and I don't mean a car) drives the business, whereas in many cases it's really a drag on the business.

    "have been retired for nearly a decade"

    Hope you're enjoying it. By the sound of it you wouldn't be popular in the modern IT department. Unpopular doesn't mean wrong, though.

  8. David 39

    yeah..... but

    this doesn't waste electricity..... booooooorrrrrrrriiiiiiinnnnnnnngggggggg

  9. Drummer Boy

    Tried it, failed it, and might try it again!!

    We did a new project for a customer and bought 2 dual quad-core CPU boxes, with plenty of RAM and disk, and VM Server, to run a multi-web-server site on (open source OS).

    On the next project we went back to separate metal, as it was a damn sight cheaper than buying the same processing power in a chassis one model up.

    As for the power argument, that was also a little moot: we were comparing 2 bigger boxes with dual PSUs against 4 smaller boxes running single CPUs, so the draw was not much different when actually measured going into the boxes. The web servers also took up the same amount of rack space, so no wins there either. Other than the ease of throwing up a new VM, there's nothing in it.

    On top of that, we have just discovered that one of our offices is running 'x' VMs with paid-for OSes, and we have had to spend more than £20k on additional server licences due to uncontrolled OS deployment. That's the bit of the VM world that they forget to tell you about!!

  10. Jeff 11
    Thumb Down


    Hooray, a VM article about utilization with no mention of the crucial performance bottleneck: the storage layer. For most workloads you'd get far more done if you increased your I/O throughput and decreased latency, but that's a particularly complex subject and not one a throwaway article like this has any hope of addressing.

    Also, '10GbE, which is rapidly becoming the norm on server motherboards'? Hardly...

  11. Mike Pellatt

    Key reason for virtualisation

    Don't forget the key reason for virtualisation.

    It's the only way you'll be able to run SCO Openserver (you know, the operating system that all US Defence relies on) on any modern hardware.

  12. Davidoff

    An article based on hear-say and guesswork?

    Like some other posters, I also question the author's expertise in the field of virtualization, and wonder if he has any practical experience, as the article suggests he's living on a different planet in a different year.

    "Think about it. Multi-core processors are now commonplace, whereas the average three-year-old server will have single or dual-core CPUs at best." Maybe the author has spent the last few years in a coma, but Intel hasn't made any single-core server processors since at least 2006, and quad-cores have been available since at least 2007 (AMD is not much different). Think about it! An average 3-year-old server uses at least dual-core processors, and very likely already has quad-cores. In 2009, the only single-core processors still manufactured are probably the ones used in embedded devices like cellphones and other speciality applications.

    "You could easily find your processors unable to support the latest hypervisors, many of which demand on-chip technologies such as Intel VT or AMD-V before they will play ball." Yes, if your server is from 2003, maybe. Intel VT has been available since at least 2005, and AMD-V since at least 2006, and both were quickly integrated into their server processor lines.

    "Moreover, with Intel’s Nehalem and the latest AMD Opteron architectures you can expect huge performance gains that enable you to consolidate hundreds of servers onto a minimal number of boxes, at the same time reducing power and cooling overheads." Again, if your server is from 2003 and runs a bunch of NetBurst Xeons then maybe, but in my experience replacing a 3-year-old server with a current equivalent doesn't necessarily give you that much more oomph, unless the old server was really low-spec or inadequately spec'd for the task.

    "The latest DDR3 RAM is, similarly, both cheaper and faster than what went before, plus you can now stuff a lot more into the average server." For your average consumer PC, maybe. In the server market, DDR2 DIMMs (both Registered and Fully Buffered) have come down in price over time, while the relatively new DDR3 stuff (Registered, or at least ECC) isn't that cheap. At the end of the day, it's probably much cheaper to upgrade a 3-year-old server with more RAM than to buy a complete new system, if RAM is the only bottleneck.

    This article was really poor.

  13. Anonymous Coward
    Thumb Up

    "two [remote] machines ... "consoles" in front of me."

    Virtualization is one way of achieving that, but if that's all folk need, there are others, the most obvious being servers with remote management built in (eg the Compaq [integrated] Lights Out stuff, which has been around almost since the days of the Ark, or the conceptually similar but, as an add-on, ridiculously expensive Dell Remote Access Controller). Or there are boxes that take VGA in, with kbd and mouse out, on one side and offer VNC over IP on the other.

    Once upon a time computers came with consoles that were usable over serial ports, and I believe uEFI will eventually re-introduce this to the world of commodity x86 (better late than never, eh guys).

  14. Jacqui

    AMD and openvz

    I use this at work and at home to host web sites, email etc.

    Overhead is minimal, and an entry-level dual-core AMD64 box with minimal memory can handle 20+ servers. My home box has 2TB green (slow) drives, which are ideal for low-hit-rate web services and for feeding video to the living room.

    At work we used to have maybe 10 dev boxes *each* - today this is consolidated into three or four rack mount systems and vz's are replicated between systems.

    I have to say I LOVE openvz!

    My experience with vmware has not been so pretty - especially if you need to call support ;-/

  16. cbf123

    missing the main reasons

    While power consumption may be one driver there are others as well:

    1) centralized management

    2) ease of failover (if host hardware is starting to fail (and you do have hardware fault monitoring, right?) then you can move the running guest onto another host on the fly)

    3) isolation from hardware upgrades (if you develop an embedded appliance, you can code to the virtual machine and isolate your system from changes in the underlying host hardware)

    4) enables running "unsupported" OS/hardware combinations. If your app needs Windows but the cluster runs Linux, you can run a Windows instance in a VM. If you buy a fancy new Sandy Bridge machine but your enterprise app requires Red Hat 4, you just install it in a VM.

    And to the doubters... 10Gb/s Ethernet is becoming the new server standard. Telecom stuff is now looking at 40 Gig.
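    The failover in point 2 is, at heart, a bin-packing decision: which surviving host has room for each guest? A minimal sketch - the hosts, guest sizes and first-fit-decreasing rule are all invented for illustration, not any particular hypervisor's API:

```python
# Hypothetical evacuation planner: move every guest off a failing host onto
# peers with spare capacity, placing the biggest guests first.
def plan_evacuation(guests, spare):
    """guests: {name: GB of RAM needed}; spare: {host: free GB}.
    Returns {guest: destination host}, or raises if a guest won't fit."""
    plan = {}
    for name, need in sorted(guests.items(), key=lambda kv: -kv[1]):
        for host in sorted(spare, key=lambda h: -spare[h]):
            if spare[host] >= need:
                plan[name] = host
                spare[host] -= need
                break
        else:
            raise RuntimeError(f"nowhere to put {name} ({need} GB)")
    return plan

failing = {"mail": 8, "ad1": 2, "files": 4, "wiki": 1}   # guests on the sick host
peers = {"host-b": 10, "host-c": 6}                      # free RAM elsewhere
print(plan_evacuation(failing, peers))
```

    Real cluster managers layer hardware fault monitoring and live migration on top of a placement decision like this one.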


Biting the hand that feeds IT © 1998–2020