Amazon is saying nothing about the DDoS attack that took down AWS, but others are

Amazon has still not provided any useful information or insights into the DDoS attack that took down swathes of websites last week, so let’s turn to others that were watching. One such company is digital monitoring firm Catchpoint, which sent us its analysis of the attack in which it makes two broad conclusions: that Amazon …

  1. robidy

    The bigger they are...

    ...the further they fall.

    Surprised that, at this size and market share, they had to bring in a company to mitigate against a DDoS attack...

    1. Muscleguy

      Re: The bigger they are...

      Outsourcing is the current business zeitgeist and the Cloud is outsourcing on steroids. So why would you have it in house?

      1. robidy

        Re: The bigger they are...

        Generally outsourcing means you have something to outsource in the first place...

        1. Anonymous Coward
          Anonymous Coward

          Re: The bigger they are...

          Route 53 is WAY behind the competition

          compared to Google, who offer DNSSEC etc.

        2. Anonymous Coward
          Devil

          Re: The bigger they are...

          Today everything can be outsourced, but the CEO and the Board....

          1. katrinab Silver badge
            Coat

            Re: The bigger they are...

            Not true. You can outsource the CEO and the board, and there are many companies that do this, especially in offshore locations.

      2. <script>alert('the register');</script>

        Re: The bigger they are...

        Because they are the outsourcer who provides DNS services... Wouldn't be an issue if they didn't but...

    2. Charlie Clark Silver badge

      Re: The bigger they are...

      To be fair to both Amazon and Neustar, bringing in outside specialists is often exactly the right thing to do. Neustar does have experience in this area, and I suspect we'll see more of this kind of attack as more and more "endpoints" get added.

  2. Chris G

    The trouble with clouds

    Is they seem to lack substance.

    1. DreamEater

      Re: The trouble with clouds

      Well that’s going to rain on their parade...

    2. Tim99 Silver badge
      Boffin

      Re: The trouble with clouds

      Not so fast. If we assume that a typical white fluffy cumulus cloud weighs about 500 tons (it does) or ~120 KiloJubs (~1 LINQ Hotel Recycling unit), it is pretty massive, but the mass is in a volume of about a cubic mile (normal size cloud). A bigger cloud could be ~2 cubic miles (or 1000 tons).

      I don't think that the existing Reg Standard Units give a suitably intuitive measure of large volumes with relatively low densities - could I suggest the "FluffyCloud" for larger nebulous/amorphous masses, i.e. 1 FluffyCloud = 500 tons/cubic mile? There are ~1.4 million Olympic-sized swimming pools to a cubic mile, so the FluffyCloud comes out at roughly 0.088 Jubs/Olympic-sized swimming pool (m/v). (A quick sanity check of the numbers follows this thread.)

      1. Mike 125

        Re: The trouble with clouds

        >>I don't think that the existing Reg Standard Units give a suitably intuitive measure of large volumes with relatively low densities -

        And while we're at it, an RSU for DNSDDoS Density.

        (assuming frequency can be interpreted as density...)

        1. MrReynolds2U

          Re: The trouble with clouds

          How about a measure of reaction time in AZT (the mean Amazon response time to a fault).

          1 AZT = roughly equivalent to 5 hours of earth time or 36,000 NY minutes
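
For what it's worth, here is a back-of-the-envelope check of Tim99's FluffyCloud conversion, taking the figures quoted in the comment entirely at face value (one ~500-ton cloud is ~120 kiloJubs, and there are ~1.4 million Olympic-sized swimming pools to a cubic mile); nothing here is independently verified.

```python
# Back-of-the-envelope check of the FluffyCloud conversion above, using only the
# figures quoted in the comment (none of them independently verified here).
CLOUD_MASS_KILOJUBS = 120          # one ~500-ton cumulus cloud, per the comment
POOLS_PER_CUBIC_MILE = 1_400_000   # Olympic-sized swimming pools per cubic mile, per the comment

jubs_per_pool = CLOUD_MASS_KILOJUBS * 1_000 / POOLS_PER_CUBIC_MILE
print(f"1 FluffyCloud ~= {jubs_per_pool:.3f} Jubs per Olympic-sized swimming pool")
# prints ~0.086, in the same ballpark as the "roughly 0.088" quoted above
```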

  3. disk iops

    The S3 operations team monitors a zillion metrics, but as the article noted, it's almost entirely "inside" activity. No doubt customers started complaining about bucket availability, and there might have been metrics that showed a downward move in request rate that didn't jibe with historical levels.

    The answer will be not only to piggy-back off specialist anti-DDoS providers but also to stand up arm's-length availability metrics viewed from 'outside', covering geographically distributed name resolution (a rough sketch of that kind of outside-in probe follows this thread). Or, more likely, to write their own DDoS mitigation and embed it into the Route 53 infrastructure. The S3 front end itself has request rate-limiting already, but I couldn't say how hardened it might be against a flood of malicious payloads.

    1. fidodogbreath

      The S3 operations team monitors a zillion metrics

      'Is private personal data still pouring in from Alexas, Kindles, and Fire TVs?'

      "Yes."

      'And can we still take orders on Amazon.com?'

      "OK, we're good then. Let's go get lunch."

  4. sbt
    Trollface

    Just as well JEDI went elsewhere!

    "These are not the DNS servers you're looking for."

    In other news, a temporary cease-fire took effect in the latest Middle East conflict as US forces were unable to access their fire control systems due to a back-hoe operator's error in West Virginia. The enemy used the outage as an opportunity to re-occupy areas captured in recent days by U.S. forces and bury their dead.

    UPDATED: The government has denied reports that the back-hoe operator was recruited as an agent by the enemy; however, investigations into recent suspicious payments made into the operator's bank account are ongoing.

  5. CrazyOldCatMan Silver badge

    "from its perspective everything was working fine"

    It's a variation on the developer whinging that "it works fine on my machine"..

  6. Steve B

    What has happened to the monitors?

    I used to help with a lot of network development and testing, working with quite a few companies to shape their switches and monitors, including some RMON devices.

    They used to send me their latest hardware and software to install and beta test.

    I moved on from that role to become an international corporate's IT Manager, but installed "non corporate" firewalls and monitors so that I knew what was happening on the network.

    Many of the companies I "worked" with have been hoovered up by the bigger players or fallen by the wayside, but there is no excuse really.

    If you are responsible for the well-being of a major network which has an impact upon business viability, then you have to install proper monitoring mechanisms to let you know very quickly if a problem develops.

    Even then I got fed up with attacks, so I used reverse DNS and whois to go after the US-based ISP (a rough sketch of that kind of lookup follows this comment).

    After the initial "nowt to do wi' us", the support chap realised that he was actually being attacked as well - it had taken over one of his servers. A quick flurry of activity and the problem was solved. I later got an email to say that they had many susceptible servers which were in line for takeover, and they had now decided that, instead of sitting back on server patches, they were going to instigate a better maintenance schedule to minimise the chances of it happening again.

    I did not really appreciate the effect of my monitoring on our companies until the corporate Finance Controller left to freelance elsewhere. On a keep-in-touch visit, I was told that they had not realised that big companies still had major IT issues: there had not been any in four years with us, but they were a constant occurrence in the other companies they had been into.
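
For anyone curious what Steve B's reverse DNS and whois step looks like in practice, here is a rough sketch. It assumes a Unix-like machine with the standard whois command on the PATH, and the IP address is a documentation-range placeholder rather than a real attacker.

```python
# Rough sketch of the reverse DNS + whois lookup mentioned above, for working out
# which ISP sits behind a hostile IP address. Assumes the `whois` command is
# installed; the address below is from TEST-NET-1 and purely illustrative.
import socket
import subprocess

def trace_source(ip: str) -> None:
    # Reverse DNS: map the IP back to a hostname, if its owner publishes one
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        print(f"{ip} reverse-resolves to {hostname}")
    except OSError:
        print(f"{ip} has no reverse DNS entry")

    # whois: ask the registries who the address block is allocated to; the output
    # usually names the ISP and an abuse contact to complain to
    result = subprocess.run(["whois", ip], capture_output=True, text=True)
    for line in result.stdout.splitlines():
        if any(key in line.lower() for key in ("orgname", "netname", "abuse")):
            print(line.strip())

if __name__ == "__main__":
    trace_source("192.0.2.1")
```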

  7. Anonymous Coward
    Boffin

    Eggs and baskets

    Although this attack looks like a transient problem, I think it's pointing at something bigger.

    The whole purpose of cloud computing is that people will outsource some things they are not very interested in doing to people who are interested in doing them, and in particular can do them for less money than the people doing the outsourcing can. The way they achieve that is scale: the cloud people run very large environments, and do it in such a way that the effort needed to run them goes up more slowly than the effort involved in running lots of little environments. This means that there are many fewer large environments than there would otherwise be, costs go down and everyone is happy.

    So far so good. But there's another aspect to this. Any serious failure in the relatively small number of very large environments now has a much larger effect than a corresponding failure in one of the previous large number of small environments. In particular there can be nasty correlated risks, of the sort that had famously bad consequences in 2007-2008: multiple organisations which seem to be independent, and whose chances of failure are treated as if they were, are in fact not independent at all, and all fail at once.

    Well, this would not be a problem if the small number of very large environments were all partitioned in such a way that failures of one part can't cause other parts to fail. But the whole reason for these things existing is scale, and scale means that you use techniques which can control these very large environments with a very small number of people. Scale means that, if you want to make some change, you push it out across the whole huge environment, or significant chunks of it, in one go; in particular, you don't go around the tens of thousands of small per-customer chunks of the environment applying it to each one.

    And that's great, so long as you are very, very sure that the small number of central points of control for these huge environments are extremely safe: that it is not possible to push out some bad change by mistake, and that it is not possible for some bad actor to gain control of one of these central points and do so intentionally. And, of course, bad actors will be very, very interested in gaining access to these central points of control. And some of these bad actors are nation states, with the resources of nation states: they can run thousands of people, including people who might, for instance, get jobs in one of these places.

    And none of these platforms have existed for very long: AWS started in 2002, 17 years ago, and they've spent most of the time since dealing with huge rates of growth with all the problems that brings. How good are their security practices? Well, we know that in 2013 the NSA, who are meant to be good at the whole security thing, leaked a vast amount of information because of completely deficient security practices: let's hope that AWS are better than the NSA were in 2013.

    In practice this is all a time-bomb waiting to go off. The cloud people may be very good at security indeed, but their whole business model is based on scale and thus on central control of huge environments, and the terrible state of computing security in general combined with the huge prizes to be won by controlling such environments means that, in due course, there will be a compromise. And at that point everyone who shares the infrastructure behind the compromised environment is going to be in trouble. Let's hope that the people who try to work out when risks are correlated have actually done their job when that happens (hint: they haven't).
