IBM runs OLTP benchmark atop KVM hypervisor

IBM has performed benchmark tests that provide some clarity on how transaction processing will perform in real-world virtualized environments – today's real world, that is. One of the things that El Reg complained about when the Transaction Processing Performance Council (TPC) trotted out its TPC-VMS server virtualization benchmark …


This topic is closed for new posts.
  1. boatsman


    The reason most of us employ virtualization is to run multiple workloads with a mix of characteristics on a single box. That puts strain on CPU cache efficiency, memory-to-CPU transfers, and so on.

    TPC does not reflect that situation very much... the IBM benchmark is IMHO interesting for that reason, in that it proves you do not *have* to go for bare metal to get proper performance.

    However, the list price tag of VMware (KVM being almost non-existent in enterprise use), and to a lesser extent Xen-from-Citrix, might still be a good reason not to virtualize such a TPC OLTP workload...

    I wonder (didn't check) what stops vendors from running the TPC VM benchmark, or cooking one up themselves that properly reflects mixed workloads in a slightly more realistic setting :-)

    a beer is in order, I think :-)


    1. Matt 21

      Re: alas,

      You've got a point. In my experience the basic difference is closer to 18%, but the VM option suffers even more if other workloads are active at the same time. In some cases I saw a performance hit of 30%.

      I think the bottom line is that if you care about database performance, you don't put your databases in a VM.

  2. This post has been deleted by its author

  3. Hard_Facts

    3x faster transaction response time in TPC over VM

    While an 8-10% lower transaction volume is understandable when benchmarked on a VM, how come this TPC run on a VM delivers almost 3x better transaction response time? With all the virtual I/O latency in a VM, I'd have thought transaction response time would be slower.

  4. Anonymous Coward

    Backplane network

    It all depends on the config, but if the database is on the same box and the connection between VMs is a virtual network running at backplane speeds of over 20 GB/sec, database access has very little latency, hence the slightly better transaction response. VM-to-VM performance is always pretty good, and if the VMs are running directly off SSDs in the box, as opposed to over the network from a SAN, that helps too. Everything is local and very high speed.
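    A back-of-the-envelope sketch of why the in-box path wins: the snippet below compares raw wire time for a single database page over the ~20 GB/sec backplane mentioned above against an assumed 10 Gb Ethernet hop to a SAN. The 8 KiB page size and the 10 GbE link speed are my assumptions for illustration, and real end-to-end latency is dominated by protocol and hypervisor overhead rather than wire time, so treat this only as a rough intuition aid.

    ```python
    # Rough wire-time comparison for moving one 8 KiB database page.
    # Assumptions (not from the article): 8 KiB page, 10 Gb Ethernet to the SAN.

    PAGE_BYTES = 8 * 1024            # typical DB page size (assumption)

    backplane_bps = 20e9 * 8         # 20 GB/sec in-box virtual network, in bits/sec
    san_bps = 10e9                   # 10 Gb Ethernet link to a SAN, in bits/sec

    t_backplane = PAGE_BYTES * 8 / backplane_bps   # seconds on the wire, in-box
    t_san = PAGE_BYTES * 8 / san_bps               # seconds on the wire, to the SAN

    print(f"in-box virtual network: {t_backplane * 1e6:.2f} us per page")
    print(f"10 GbE to SAN:          {t_san * 1e6:.2f} us per page")
    print(f"ratio: {t_san / t_backplane:.0f}x")
    ```

    Even ignoring everything else, the local path moves a page roughly 16x faster in these assumed conditions, which is consistent with the idea that keeping the database traffic inside the box shaves latency off every transaction.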

  5. Anonymous Coward

    has anyone done such a test

    >> It would be so much more fun if companies like IBM would just test a bare metal machine and then run the same workload on the same exact iron using VMware's ESXi, Red Hat's KVM, and Microsoft's Hyper-V hypervisors. But again, that would be too easy and might help companies make odious comparisons and purchasing decisions.

    Has anyone benchmarked this independently of the vendors? I'd love to see the comparison...

    1. asdf

      Re: has anyone done such a test

      My assumption with TPC and most other enterprise benchmarks is that even if you buy them, you're contractually barred by NDA from releasing the results (that way each company has to buy the benchmarks itself). I could be wrong, but that's my guess.

    2. Dave 107

      Re: has anyone done such a test

      I doubt the license agreements for VMware would allow such public comparisons.

