Researchers find not all EC2 instances are created equal

Researchers from Deutsche Telekom Laboratories and Finland's Aalto University have claimed it is possible to detect the CPUs of servers powering Amazon Web Services' (AWS') Elastic Compute Cloud (EC2), and that the fact the cloudy giant uses different kit in different places means users can select more powerful servers at the …


This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    What, in the age of cloudiness, is

    What is the current correct way to spell "pig in a poke"?

    Thing is, clouds aren't transparent. What did people expect?

    1. frank ly

      Re: What, in the age of cloudiness, is

      But, the cloud was transparent; the researchers saw through it to the CPU.

  2. stanimir

    Anyone actually expect the hardware is the same?

    ...across the entire datacenter? I am not sure how that deserves a paper on the subject.

    The hypervisors do not really meter CPU cycles, and even if they did, CPU cycles, caching, memory access latency, memory locality and stuff like cmpxchg all differ across different hardware.

    Of course, Amazon will sell older or newer machines under the same brand name. What you get is a matter of luck. If one really needs to know exactly what the hardware is, buy non-virtualized (dedicated) servers and do as you please, incl. virtualizing 'em yourself.

    1. g e

      Re: Anyone actually expect the hardware is the same?

      You'd hope performance would be equivalent regardless of the hardware.

      I have two micro instances; cpuinfo lists one as Xeon, one as AMD64. When you do an apt safe-upgrade, one is noticeably faster than the other, by maybe 50%. I'd previously assumed it was because one was running on the free tier, so you get a bit less bang for your buck, but apparently not...
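For anyone who wants to run the same check, the model string cpuinfo shows can be pulled out with a few lines. A minimal sketch; the sample text below is made up for illustration, not taken from a real instance:

```python
# Extract the CPU model from /proc/cpuinfo-style text, the same field
# "cat /proc/cpuinfo" shows inside an EC2 guest.
SAMPLE = (
    "processor\t: 0\n"
    "vendor_id\t: GenuineIntel\n"
    "model name\t: Intel(R) Xeon(R) CPU E5430 @ 2.66GHz\n"
    "cpu MHz\t\t: 2666.000\n"
)

def cpu_model(cpuinfo_text):
    """Return the first 'model name' value, or None if absent."""
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "model name":
            return value.strip()
    return None

print(cpu_model(SAMPLE))  # prints the Xeon model string from the sample
```

On a real box you would feed it `open("/proc/cpuinfo").read()` instead of the sample.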

      It does mean that if you're redeploying instances to other zones, you can't expect the same web-serving capacity as the machine you copied the instance from, which is less than good.

      Both my instances are in the same zone, too.

      1. richardcox13

        Re: Anyone actually expect the hardware is the same?

        > You'd hope performance would be equivalent regardless of the hardware.

        Not really, I would expect performance to be no worse (and often better) than the minimum specified for the type of VM.

        I.e. you get at least what you pay for, but you might get lucky.

        Of course those faster, more modern, host machines could be getting a higher density of VMs loaded on to them. So a faster CPU might just mean more users sharing it so – on average – you end up getting the same availability of CPU instructions executed per unit time per VM.
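That density argument can be put in numbers: pack proportionally more VMs onto the faster host and each VM's share stays flat. A toy illustration, with invented core counts, clocks and VM counts:

```python
# If VM density scales with host speed, per-VM CPU share is unchanged.
def per_vm_capacity(host_ghz_total, n_vms):
    """Aggregate host GHz divided evenly across the VMs on it."""
    return host_ghz_total / n_vms

old = per_vm_capacity(8 * 2.0, 16)  # 8 cores at 2.0 GHz, 16 VMs
new = per_vm_capacity(8 * 3.0, 24)  # 8 cores at 3.0 GHz, 24 VMs
print(old, new)  # prints 1.0 1.0 -- same share per VM on both hosts
```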

      2. Andy Barker

        Re: Anyone actually expect the hardware is the same?

        Micro instances are the worst ones to compare, as their performance is not guaranteed ("up to 2 ECUs"). We sometimes end up with Micro instances where the % Stolen time is huge, and the instance seems all but dead for a while.
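The "% Stolen" figure those tools show comes from the steal field of /proc/stat: sample the aggregate cpu line twice and compare the steal delta to the total delta. A minimal sketch, with two made-up sample lines standing in for real snapshots:

```python
# Two snapshots of the aggregate "cpu" line from /proc/stat (fields after
# the label: user nice system idle iowait irq softirq steal guest guest_nice).
T0 = "cpu  100 0 50 800 10 0 5 35 0 0"
T1 = "cpu  150 0 70 1100 15 0 10 155 0 0"

def steal_percent(before, after):
    """Percentage of elapsed CPU time the hypervisor stole between samples."""
    b = [int(x) for x in before.split()[1:]]
    a = [int(x) for x in after.split()[1:]]
    total = sum(a) - sum(b)
    steal = a[7] - b[7]  # the 8th field after the 'cpu' label is steal
    return 100.0 * steal / total

print(round(steal_percent(T0, T1), 1))  # prints 24.0
```

In practice you would read /proc/stat twice with a sleep in between rather than use canned strings.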

  3. Anonymous Coward
    Anonymous Coward


    They don't rip out and replace hardware as demand increases. Who'd a thunk it?

  4. Dr_Cynic

    Flaw in their logic

    Unless, in addition to detecting the processor type, the researchers also ran benchmarks on the different instances, their conclusions are based on probably flawed assumptions.

    It is generally unlikely that any individual instance has exclusive access to a CPU; the total demand on the service will be spread around, and higher-specified systems will host more individual instances than the older, lower-spec kit.

    There may be a slight advantage to being on the 'better' systems, but I doubt it justifies the cost of actively searching them out; you are probably better off just picking the geographically closest centre.

    1. stanimir

      Re: Flaw in their logic

      There are (usually) benefits to using newer/modern hardware, depending on the workload, even if the CPU handles more virtual cores. Improvements in branch prediction are a major one, plus better locality (i.e. L2 closer to the core), improved CAS (locked cmpxchg) latency, TSC register sync among the cores that the OS can actually use, etc. It's not just the frequency the CPU runs at.

      On some workloads the hardware may exhibit different behavior even when the avg. load is similar. However, all that is pretty standard, and I just can't see it as newsworthy or warranting a whole paper.

  5. Silverburn

    I propose a new operating model...

    ..One where you buy your own kit, and stick it in a DC (your own or 3rd party). That way you know what you're getting, and when it's running, and the costs are fixed. Use your own DC, and you even get added security thrown in.

    Surprised nobody has thought of this before....oh wait.

    1. M Gale

      Re: I propose a new operating model...

      Because we all have the money for co-location

      Anyway, so long as Amazon are charging per instruction or however you measure CPU time these days, who cares? Your instance is slower, so it costs less per hour, surely?

  6. sapidmolecules

    Stop comparing Amazon with your DC

    Amazon's SLA defines each of their instances in ECUs. Some time ago one ECU was equivalent to a 2 GHz Athlon core.

    Any performance variance is on top of what one has paid for.

    And, no: the processor has nothing to do with the speed of an instance. Get hosted in the same rack as some big-data fanatic and you are toast, regardless of the cool new CPU you might have obtained.

    So, the "study" can only affirm that Amazon uses different processors and that it keeps its machines somewhat updated. Whenever performance is a concern, Amazon sells Reserved Instances but, not surprisingly, the price is on the high side.

    Stop comparing Amazon with an on-premise infrastructure. The Cloud is a model that allows fast deployment and better scalability (then add your business reason here).

    - It doesn't cost less

    - It doesn't do everything

    - It will never solve the issue of data confidentiality but, eventually, it will get us used to not worrying about it.

    1. Silverburn

      Re: Stop comparing Amazon with your DC

      - It will never solve the issue of data confidentiality but, eventually, it will get us used to not worrying about it.

      Not if you work for a bank. If anything, worrying *more* is the current preferred approach.

      1. Michael Duke

        Re: Stop comparing Amazon with your DC

        Not that many banks will be using EC2 to host apps that hold customer or financial data I would be guessing.

  7. Anonymous Coward
    Anonymous Coward

    Fast instances and slow instances

    "In general, the variation between the fast instances and slow instances can reach 40%. In some applications, the variation can even approach up to 60%. By selecting fast instances within the same instance type, Amazon EC2 users can acquire up to 30% of cost saving".
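The quoted 30% roughly follows from the 40% gap: at the same hourly price, a 40%-faster instance does 1.4x the work, so the cost per unit of work falls to 1/1.4 of the slow instance's. A quick back-of-the-envelope check:

```python
# Cost saving per unit of work when a same-priced instance is faster.
def saving_from_speedup(speedup):
    """Fractional saving at equal hourly price for a given fractional speedup."""
    return 1.0 - 1.0 / (1.0 + speedup)

print(round(100 * saving_from_speedup(0.40), 1))  # prints 28.6, near the quoted 30%
```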

    Obviously if everyone tried to game the system then there would be no advantage; wouldn't a simpler solution be to acquire a second instance for more intensive jobs?


    1. Lordbrummie

      Re: Fast instances and slow instances

      Not all instances are the same, it didn't need a "paper" to highlight this.

      There are sites, like Cloud Harmony, that show the relative performance of cloud providers' instances; the difference for AWS across their own zones is interesting.

      The information has been there for a while, and no, I don't work for Cloud Harmony, but the price/performance of cloud instances is not always apparent; just because something is cheap doesn't mean it's value for money.


  8. Nate Amsden


    Yes, you have been able to cat /proc/cpuinfo for some time now (probably the whole time; at least for the past 2 years that I used *ugh* EC2). But my life is less stressful now that I haven't had to deal with EC2 in months.

    About a year ago I was rather amused that one of the VMs I had in EC2 was running on an Opteron CPU circa 2006, if I remember right.

    My favorite feature of EC2 with their CPUs is that when you hammer their VMs, more often than not you lose upwards of 30% of your CPU capacity (in my experience at least), and Linux is nice enough to report this as "CPU steal" in various tools.

  9. Anonymous Coward

    Compute Units

    Amazon are aware of this, which is why they charge you by the 'EC2 Compute Unit'. Seriously, what are they supposed to do? Ensure that *every* box uses the same CPU? Throw all the old boxes away and replace the entire cloud every time they need to upgrade one box? Intentionally cripple CPUs in case someone gets a little extra?

    "Amazon EC2 uses a variety of measures to provide each instance with a consistent and predictable amount of CPU capacity. In order to make it easy for developers to compare CPU capacity between different instance types, we have defined an Amazon EC2 Compute Unit. The amount of CPU that is allocated to a particular instance is expressed in terms of these EC2 Compute Units. We use several benchmarks and tests to manage the consistency and predictability of the performance of an EC2 Compute Unit. One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. This is also the equivalent to an early-2006 1.7 GHz Xeon processor referenced in our original documentation. Over time, we may add or substitute measures that go into the definition of an EC2 Compute Unit, if we find metrics that will give you a clearer picture of compute capacity."


  10. Anonymous Coward
    Anonymous Coward

    Been there, done that...

    Uncertainty Principle of Cloud Computing - or, you never really know what you are getting for your money.


Biting the hand that feeds IT © 1998–2020