Excessively fat virtual worlds – come on, it's your guilty secret

Now that virtualisation is seen as a robust and mature technology, managers and administrators are looking to reduce server deployment and management costs further. One area of potential cost reduction is reclaiming unused or under-utilised infrastructure capacity. Most virtual estates that have grown organically over the …

  1. Alistair
    Thumb Up

    I agree

    100% with this article.

    And I have 8 application sets that I'm about to P2V -- with *all* the attendant panic amongst the app owners.

    "But its a VM, we'll need MORE cpu!" is such a common statement in the app/dev crowd, I really don't understand why --

    1. Lee D Silver badge

      Re: I agree

      Apart from live Exchange and SQL servers, virtually (sorry!) everything I've virtualised has ended up using less RAM, less disk, less CPU and less network.

      Some of that is just down to over-speccing, some of it is having a hypervisor that can properly cache all the boring parts that all the VMs use so they boot much faster, and some of it is the plain fact that the machine sits idle much of the time.

      And, so long as the machines don't all peak simultaneously, each of them effectively has a massive amount of resources "on standby" whenever the need arises for that one-off operation.

      From now on, whenever a random vendor wants to give me a machine for whatever specialist software, they'll be put onto a client which is on a clean image. When they've finished poncing about installing product X that's so special it needs to go on a machine all its own, I'll just clone the machine to a VM. Some of my suppliers already offer the "we'll just give you a pre-configured VM" product anyway. The ones that don't want to co-operate will be put onto an RDP session to a VM instead of a full machine, and I'll hope they don't notice until too late in the process.

      I VM'd all our servers when I arrived at my last workplace. With less actual hardware, we actually have much more capacity (twice as much as necessary, as I added a lot of redundancy etc.) and the ability to do all kinds of fancy stuff. And only the SQL and Exchange servers actually demand a decent amount of RAM from the system - everything else is tottering along at a couple of gig quite happily. I wouldn't want to deploy a physical machine with ONLY 2GB as a server, but as a server VM they're more than happy, releasing most of their RAM back after booting. Same for CPU (I allocated most things as quad-cores; they're lucky if they see 1% CPU on average). Same for disk storage.
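
      A rough sketch of how you might check that sort of thing before shrinking a guest - this assumes in-guest sampling with the Python psutil library, purely as an illustration rather than a proper capacity tool:

      # Rough sketch: sample what a guest actually uses before cutting its
      # vCPU/RAM allocation. Assumes the psutil library (pip install psutil).
      import psutil

      SAMPLES = 60      # how many samples to take
      INTERVAL = 5      # seconds between samples

      cpu_pct, ram_gib = [], []
      for _ in range(SAMPLES):
          cpu_pct.append(psutil.cpu_percent(interval=INTERVAL))  # averaged over the interval
          ram_gib.append(psutil.virtual_memory().used / 2**30)

      print(f"CPU: avg {sum(cpu_pct)/len(cpu_pct):.1f}%, peak {max(cpu_pct):.1f}%")
      print(f"RAM: avg {sum(ram_gib)/len(ram_gib):.1f} GiB, peak {max(ram_gib):.1f} GiB")

      As for the "don't all peak simultaneously" point, the number to compare is the peak of the combined load across all the guests against the sum of their individual peaks: the bigger the gap, the more safely you can overcommit.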

      1. SImon Hobson Silver badge

        Re: I agree

        > When they've finished poncing about installing product X that's so special it needs to go on a machine all its own, I'll just clone the machine to a VM

        Which is fine until the client whose system it is has a problem, and the vendor just plain refuses to support anything.

        Your client points at you, the software vendor points at you - and you have no answer to the fact that you did something the vendor explicitly said you must not do.

        Yes, I've seen that with a specialised software package. Absolutely no reason for it whatsoever, complete crock of sh1t, but that's what the software vendor insisted on.

        1. Lee D Silver badge

          Re: I agree

          It's virtualised. If you have hassle, you put it back on a real machine, or just stick it on any other machine and run the virtualised image from there.

          The beauty of a virtualised image is that you can do this. Going from physical to virtual, however, is more tricky.

  2. Lusty

    Cracking article, more of this type of stuff please Reg.

    When a vendor asks for 16 cores and 32GB memory, ask them to provide perfmon logs from another, similarly sized customer install. It's not the vendor's responsibility to correctly size your solution, and it's not their job to ask the right questions, but as a minimum they should be able to justify what they are asking for.
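
    As a sketch of what to do with those logs once they turn up - this assumes a perfmon CSV export with a made-up file name, and sizes on the 95th percentile rather than a one-off peak:

    # Rough sketch: pull one counter out of a perfmon CSV export and report
    # its mean and 95th percentile as a basis for arguing about sizing.
    # "vendor_baseline.csv" and the counter picked are hypothetical examples.
    import csv
    import statistics

    COUNTER = r"\Processor(_Total)\% Processor Time"

    values = []
    with open("vendor_baseline.csv", newline="") as f:
        for row in csv.DictReader(f):
            for header, cell in row.items():
                # perfmon prefixes columns with the machine name, so match the tail
                if header and header.endswith(COUNTER) and cell.strip():
                    values.append(float(cell))

    values.sort()
    p95 = values[int(0.95 * (len(values) - 1))]
    print(f"{len(values)} samples, mean {statistics.mean(values):.1f}%, p95 {p95:.1f}%")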

    Another issue not addressed in the article is that of IT staff generally not knowing enough about computers to make these decisions. The difference between free and available memory in Windows is something almost everyone seems to struggle with even though it's a very simple concept.
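
    For anyone fuzzy on it: free is memory that nothing is touching at all, while available also counts the standby/cached pages the OS can hand back instantly (Resource Monitor shows the split, with Available = Standby + Free). A quick way to eyeball both figures, assuming the Python psutil library:

    # Quick illustration: psutil's "free" is memory not used at all, while
    # "available" is what can be handed to applications without swapping
    # (i.e. it also counts reclaimable cache/standby pages).
    import psutil

    vm = psutil.virtual_memory()
    gib = 2**30
    print(f"total     {vm.total / gib:6.1f} GiB")
    print(f"free      {vm.free / gib:6.1f} GiB   # truly unused")
    print(f"available {vm.available / gib:6.1f} GiB   # unused + reclaimable")

    The "available" figure is the one that matters when somebody insists a box is out of memory.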

    Many also don't know about the extra memory requirements for 10GbE networking, despite it being stated clearly in hardware vendor documentation (HP Quickspecs certainly mention this more than once).

    1. Anonymous Coward
      Anonymous Coward

      Yeah, but...

      whenever you call for support, the vendor will ask you about it and will be delighted to suggest you reinstall the app according to their specification and then come back again. Irrespective of whether they're right or wrong, you will be in a bad spot. Your manager would prefer you to bring the app back from the dead before you sit down and talk to him about over-spec'ing.

      Remember, son: the moment your vendor has cashed the money, you're wasting his time unless you want to buy more from him.

  3. Anonymous Coward
    Anonymous Coward

    Don't forget

    Software vendors can refuse to support applications that aren't running on a VM that meets their minimum spec. It's easy to get around this, of course, but it's worth bearing in mind that you need to be sure you're not causing the issue before blaming their application.

    And when doing a P2V, a lot of the increase in CPU performance simply comes from moving off a physical server with CPUs that are a few years out of date. What used to take 4 cores at 2.5GHz often takes much less on a modern CPU.

    1. Lusty

      Re: Don't forget

      A few years out of date more often than not means a 4GHz processor core, which runs threads faster than all but two of the current Xeon range (when a single core is running under Turbo Boost). This is why many apps run slower when virtualised: most apps are single-threaded due to poor design.

  4. Aitor 1

    I agree, BUT

    My big BUT: if you have a lot of disk and file access, you might have problems.

    Also, the carrot-and-stick approach leaves users out in the cold.

    I have seen both happen... and as the budget owner decides to cut spending, some processes that used to take 15 minutes now take an hour or more. And, of course, that means timeouts, so the users can't do these operations and have to open a ticket with support, who then charge the company. Even when it works, each user loses about 2-3 hours a week; times 2,000 users, that's 4,000-6,000 hours lost every week.

    On the bright side, normal operations are faster.

    With small systems, I had very good results - no problem whatsoever.

  5. Peter Brooks 1

    Design, demand & capacity management

    Mainly the problem is poor design - something virtualisation encourages, along with poor demand and capacity management.

    These things are difficult, because they involve thought and careful analysis. So they just shove it all on virtual machines.

    What amazes me is how surprised they get when it costs a fortune and performs badly - precisely what you'd expect from that 'solution'.

    From the point of view of the massive waste of energy and money, it'd be interesting to know how much computing power is used to:

    - run Exchange servers exfiltrating sensitive data back to the mothership

    - mirror databases pointlessly because they share the same SPOF, the virtual environment

    - run multiple Linux consoles that nobody logs into

    - keep the Regin botnet expanding

    It must be many megaflops and petabytes.
