Amazon veep: We tweak our cloud code every 16 seconds – and you?

A top Amazon bloke has scoffed at rivals who claim they can build Amazon Web Services-like systems in their customers' private data centres. Stephen Schmidt, Amazon Web Services security veep, bragged during the London leg of Amazon’s worldwide AWS Conference tour on Tuesday that these private clouds are years behind his …


This topic is closed for new posts.
  1. Ken Hagan Gold badge

    "We tweak our cloud code every 16 seconds – and you?"

    Am I the only person who reckons this is something they should be keeping quiet about? What's a tweak? Does it mean they are finding bugs every 16 seconds? Even if these are features we're talking about, doesn't it still mean the behaviour of the system is unreproducible/unpredictable on timescales longer than 16 seconds? Is that a good thing?

  2. sabroni Silver badge

    The Amazon cloud tweaks its software every 16 seconds

    And that's supposed to be good why exactly? Are they finding bugs at nearly 4 a minute or are they just faffing around with stuff that already works? Or does "tweaks" mean insignificant changes?

    Yet more MS style "Big list of stuff = best product" thinking.

    There were no posts here when I composed this, honest! I didn't consciously copy Ken!

    1. Tom 13

      Re: The Amazon cloud tweaks its software every 16 seconds

      I believe you. I'm late to the party but that was my very first thought on reading the headline. And the article did nothing to mitigate that thought. If you're upgrading that fast, your product isn't mature. Do big data players REALLY want to trust their data to systems that aren't mature?

  3. Christoph

    I wouldn't go anywhere near a 'service' that changed the code every 16 seconds!

    Besides anything else, how would they track back to what caused a fault?

    If a new fault appeared, how would they identify the change that caused it?

    How could they possibly check each version before live deployment?

    How could they ensure that two successive changes didn't clash with each other?

    Et cetera, for many, many problems.

    1. David Dawson

      These things can all be automated away. It really is possible to do deployments this often, including full regression testing.

      As already noted, if you have hundreds of components, which I'm sure they do, these deploy schedules aren't particularly heavy.

      Internally, Amazon and AWS use web services heavily. In this instance that means that there are hard contracts for using services, each service expects to be abused, and you can have multiple versions of an API in use at any one time.

      This gives a huge tolerance in the system for change.

      They have also obviously invested very heavily in serious amounts of automation. They certainly will be able to throw up environments simulating full data centres for regression testing.
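      [A minimal sketch of the versioning approach described above: two versions of one API served side by side behind hard contracts. All names here are hypothetical, purely for illustration, and nothing like Amazon's actual internals.]

      ```python
      def list_widgets_v1():
          # v1 contract: a bare list of names; existing callers rely on this shape
          return ["spanner", "sprocket"]

      def list_widgets_v2():
          # v2 contract: richer objects; v1 callers are unaffected by its rollout
          return [{"name": "spanner", "stock": 3}, {"name": "sprocket", "stock": 0}]

      ROUTES = {
          ("v1", "widgets"): list_widgets_v1,
          ("v2", "widgets"): list_widgets_v2,
      }

      def handle(version, resource):
          # "Expects to be abused": unknown versions fail loudly instead of guessing
          try:
              return ROUTES[(version, resource)]()
          except KeyError:
              raise ValueError(f"unsupported request: {version}/{resource}")
      ```

      Because each version is a separate, frozen contract, a new deployment only adds routes; it never changes what existing callers see, which is what gives the system its tolerance for constant change.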

  4. A Non e-mouse Silver badge

    I wonder if their change every 16 seconds is the average rate of commits to their version control system? (Which could mean anything....)

    1. The Mole

      It's also not clear whether that means they are producing new builds every 16 seconds, or just that they have so many servers that rolling out the latest build means a new server is updated from the previous version every 16 seconds.

      Of course there is also the fact that Amazon doesn't have 'a software': it is a very large infrastructure of probably hundreds of different applications and components developed by thousands of engineers. Each component probably has its own independent release cycle; even with 200 components, one change every 16 seconds still works out to a deployment per component roughly every hour. With continuous deployment and lots of resources for automated testing that's feasible - though still pretty scary in terms of reproducibility of issues.
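      [Running those numbers as a quick sanity check; the 200-component figure is purely illustrative, as in the comment above.]

      ```python
      # Back-of-envelope check of the claimed deploy rate, spread over components.
      SECONDS_PER_DAY = 24 * 60 * 60
      deploys_per_day = SECONDS_PER_DAY / 16            # one change every 16 seconds
      components = 200                                  # illustrative guess
      per_component_per_day = deploys_per_day / components
      minutes_between = 24 * 60 / per_component_per_day

      print(f"{deploys_per_day:.0f} deploys/day across the fleet")
      print(f"~{per_component_per_day:.0f} per component per day, "
            f"one every ~{minutes_between:.0f} minutes")
      ```

      So even generously divided up, each component would still see dozens of deployments a day - fast by any standard.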

  5. Phil Dalbeck

    I suspect it's a reference to average lines of code committed in changes or something equally skewed.

    AWS isn't a magic bullet - the costs of hosting infrastructure on the platform versus leveraging private cloud on an in-house hardware setup are significant; Amazon isn't giving it away for free.

    Also, there is a rising sense of concern that AWS is a closed-source platform - if you're not careful, it's very easy to paint yourself into a corner and make moving elsewhere a real challenge.

    I doubt Amazon would be waxing so lyrical on the topic if they didn't regard OpenStack, CloudStack etc as genuine fuel for competitors to their de facto dominance in the public and private cloud industries.

  6. NoneSuch Silver badge

    To make it even simpler.

    Amazon is a US company subject to the FISA court.

    What is the price point where you'd put confidential corporate information on AWS?

    1. Anonymous Coward
      Anonymous Coward

      Re: To make it even simpler.

      Do you think that the US will start (or has already started) discreetly sponsoring the likes of Amazon to help drive down prices and ensure data is available for inspection?

      I do.

    2. ARP2

      Re: To make it even simpler.

      Any company with even a small presence in the US must turn over ALL the data that the government asks for, even if that information is held by an affiliate. The same goes for every other country. So, unless you want to avoid the US market completely, even the most data-privacy-friendly companies are subject to the whims of government investigations.

      1. El Limerino

        Re: To make it even simpler.

        Any company that has even the slightest connection to the US is also subject to US laws, as European online gambling companies found out when their executives in transit via the US (not even entering the country) were arrested and jailed there for violating US gambling laws.

        Any company who wishes to sell in the US is subject to those same laws. Which is just about anyone of any size and scale, because the US market is so valuable.

        The net: you're kidding yourselves that EU-based companies are somehow magically immune from US laws if they sell there.

    3. rh587 Silver badge

      Re: To make it even simpler.

      Yup. Your home rolled system may be two years behind the curve but if it's stable, secure (relatively speaking) and in your own building, then the NSA either has to hack it or come and physically seize your hardware.

      If you're in the UK/France/Germany you're subject to British/French/German jurisdiction and only that jurisdiction. Not automatically subject to American jurisdiction via Amazon/Google/Microsoft/cloud of choice.

      The Americans can ask but will have to pitch up to a local court like everybody else (or break in and extraordinarily render your gear. And BOFHs have ways to deal with pesky people like that *kzzrt* ).

      If you're on AWS they'll get it, regardless of what a British/French/German court has to say about it.

  7. Destroy All Monsters Silver badge

    Demanding this story with an XBox-kidded Bezos

    "The great thing about Amazon tweaks isn't rolling them out every 16 seconds, it's showing everyone online that we did"

  8. This post has been deleted by its author

    1. Cliff

      Or check-ins across all product groups. If they were building and deploying four times a minute I'd be pretty freaked out, that seems out of control. If code was being checked in continuously then this sounds a lot less insane, even if the builds were daily.

  9. Martin Ryan

    how many?

    "He claimed Amazon added 280 new AWS features and services in 2013"

    This means one new feature and/or service every 112,706 seconds. Which is quite different to one change every 16 seconds. In fact it is over 7,000 times different. So the other 7,000-odd changes are what exactly?
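    [For the curious, the arithmetic above does check out - a quick Python check, assuming a 365.25-day year:]

    ```python
    # Checking the "280 features vs one tweak every 16 seconds" arithmetic.
    YEAR_SECONDS = 365.25 * 24 * 60 * 60     # 31,557,600 seconds in a year
    features = 280                           # claimed new features/services in 2013

    secs_per_feature = YEAR_SECONDS / features
    print(round(secs_per_feature))           # one feature every ~112,706 seconds
    print(round(secs_per_feature / 16))      # ~7,044 sixteen-second "tweaks" per feature
    ```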

    1. passportholder

      Re: how many?

      Bug fixes?

  10. Anonymous Coward

    Tweaks every 16 seconds....dead easy in a live, large datacentre.

    Drive fail, rebuild RAID; drive fail, rebuild RAID; drive fail, rebuild RAID.

    See a tweak every 16 seconds.

  11. Anonymous Coward
    Anonymous Coward


    Wow, we're partying like it's 1999 in here. Allow me to update your gramophones.

    1. Continuous deployment to production is *highly desirable*.

    Big bang deployments dump multiple changes into prod at once, vastly increasing your risk area. If Amazon are deploying every 16 seconds, given their admirable (though not perfect) uptime record, they clearly have a robust deployment pipeline, testing framework and rollback system.

    2. Frequent commits to Version Control are *highly desirable*.

    If you are using a modern DVCS such as Git or Hg, frequent atomic commits allow code changes to be accurately tracked and reported. Perhaps it's time to upgrade from your VCC instance?
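    [If the 16-second figure really is an average commit rate, it's measurable straight from the VCS. A hypothetical sketch, not anyone's actual tooling - feed it Unix commit timestamps, e.g. the output of `git log --format=%ct`:]

    ```python
    def mean_commit_interval(timestamps):
        """Average gap in seconds between consecutive commit timestamps."""
        ts = sorted(timestamps)
        if len(ts) < 2:
            raise ValueError("need at least two commits")
        gaps = [later - earlier for earlier, later in zip(ts, ts[1:])]
        return sum(gaps) / len(gaps)

    # Four commits, 16 seconds apart: an "Amazon-grade" commit rate
    print(mean_commit_interval([100, 116, 132, 148]))  # 16.0
    ```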

    DevOps (unlike Agile) is something IT professionals should be paying close attention to; it has tangible and immediate benefits.

    1. Destroy All Monsters Silver badge

      Re: 1999

      I was with you, but then...

      > unlike Agile

      gb2 waterfall horror

  12. Mark C Casey


    I guess that explains why their AWS and various sub-sites are such a convoluted mess then.


Biting the hand that feeds IT © 1998–2022