Basecamp details 'obscene' $3.2 million bill that caused it to quit the cloud

David Heinemeier Hansson, CTO of 37Signals – which operates project management platform Basecamp and other products – has detailed the colossal cloud bills that saw the outfit quit the cloud in October 2022. The CTO and creator of Ruby on Rails did all the sums and came up with an eye-watering cloud bill of $3,201,564 for 2022 …

  1. Pascal Monett Silver badge

    "Most of that spend – $759,983 – went on compute"

    No it didn't.

    $760K out of $3+ million is definitely not most of that spend.

    Now, 8 petabytes of data is nothing to sneeze at, that's for sure. But if you have a budget of over $260K/month, you have enough money for servers, a local SAS implementation running over 100Gbps fiber, and money to spare for the aircon bill and the fire suppression installation.
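For reference, the arithmetic behind the two figures above (numbers taken from the article; a quick sketch, nothing more):

```python
# Figures from the article: total 2022 AWS bill and the compute line item.
annual_bill = 3_201_564
compute = 759_983

monthly_budget = annual_bill / 12      # roughly $267K/month to play with
compute_share = compute / annual_bill  # compute is about a quarter of the spend
```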

    Apparently, The Cloud™ does not give you any savings on the IT personnel budget, since you still need a bunch of skilled admins to oversee it properly – so you already have the people required to manage all that locally.

    I'm curious to see next year's local bill and find out just how damning it is for The Cloud™.

    1. Sora2566 Bronze badge

      Re: "Most of that spend – $759,983 – went on compute"

      When the Reg says "most of that spend", I suspect they meant "the biggest chunk of that spend".

      1. ssharwood

        Re: "Most of that spend – $759,983 – went on compute"

        Nah I just stuffed up. Storage is the largest single line item. Compute comes close. Edited. And strapped self to arse-kicking machine.

        1. PhoenixKebab
          Thumb Up

          Re: "Most of that spend – $759,983 – went on compute"

          "... arse-kicking machine."

          Another example where local hardware wins over the cloud.

          1. breakfast Silver badge

            Re: "Most of that spend – $759,983 – went on compute"

            The arse-cloud machine is a whole different thing.

            1. F. Frederick Skitty Silver badge

              Re: "Most of that spend – $759,983 – went on compute"

              Carbon dibaxide.

          2. Anonymous Coward
            Anonymous Coward

            Re: "Most of that spend – $759,983 – went on compute"

            But it's an American company - wouldn't they have an ass-kicking machine?

            1. werdsmith Silver badge

              Re: "Most of that spend – $759,983 – went on compute"

              "But it's an American company - wouldn't they have an ass-kicking machine?"

              The ASPCA would have something to say about that.

            2. Anonymous Coward
              Anonymous Coward

              Re: "Most of that spend – $759,983 – went on compute"

              Typically. Though some of us understand "arse" okay, and have learnt to spell "fibre".

              Monty Python might get some credit for the former, probably not the latter.

        2. Anonymous Coward
          Anonymous Coward

          Re: "Most of that spend – $759,983 – went on compute"

          "And strapped self to arse-kicking machine."

          Nice, because that normally costs $100 to $1000 an hour from a "practitioner".

          1. benderama

            Re: "Most of that spend – $759,983 – went on compute"

            How much for someone who knows what they're doing and doesn't need to practice any more?

        3. sketharaman

          Re: "Most of that spend – $759,983 – went on compute"

          LOL - "strapped self to arse-kicking machine" is the new "fell on my sword" or self-flagellation :)

    2. Mobster

      Re: "Most of that spend – $759,983 – went on compute"

      What about replication in a geographically distant location like they are doing in the cloud?

  2. DougMac

    And how much is the bill for the electricity, cooling, datacenter infrastructure, security, compliance, local sys admin staff that you also get in the cloud bill?

    1. T. F. M. Reader

      I asked myself the same question half way through the article, but as their alternative solution is a managed hosting service, just not on AWS, at least some of that is there, too. Would be nice to see a more detailed comparison. Any chance of seeing the cloud vs. local hosting provider margins? Yes, yes, too much to ask, got it.

    2. Tom 38

      Those are chunky machines he listed; for the compute I'm sure they'd do absolutely fine. However, 8 PB storage today - presumably growing, so let's allow an extra 25% for growth: 10 PB - and you're going to what, run Ceph for it? Ceph doesn't like to run at more than 80% capacity, so that's 12.5 PB.

      To replicate what they have in AWS, multi AZ and multi region, that means the data is at least in 4 places. Can you really build and maintain four 13 PB Ceph clusters for a million dollars a year? If you have 2 engineers looking after it, that's <$600k per year left of that budget.
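The sizing chain above, written out as a sketch using the comment's own assumptions (25% growth allowance, an ~80% Ceph fill ceiling, and four copies for multi-AZ plus multi-region):

```python
current_pb = 8.0
with_growth = current_pb * 1.25      # allow 25% for growth -> 10 PB
cluster_pb = with_growth / 0.80      # keep each Ceph cluster under 80% full -> 12.5 PB
copies = 4                           # multi-AZ and multi-region: data in at least 4 places
total_raw_pb = cluster_pb * copies   # ~50 PB of cluster capacity to build and run
```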

      I look forward to seeing his final numbers next year!

      1. Stu J

        Exactly - this very much reads like he's not comparing apples with apples, and that's the issue I see frequently with these cloud cost comparison arguments that end up in favour of on-prem.

        If you put the level of resilience in place that e.g. AWS have, and implement something akin to Multi-AZ and Multi-Region, then it's going to cost a lot of money, whether it's on-prem or in the cloud.

        And if you skimp on the resilience on-prem, it can really come back to bite you hard - ask Channel 4 and Red Bee Media.

        Some CTOs just like to make noise to build their profile, this guy's absolutely no different.

        1. thondwe

          24x7 support staff

          Skilled 24x7 support staff are another cost - is Basecamp as "global" as Amazon/Azure? Ye olde follow-the-sun thing.

          Hopefully the published figures will be complete and show what bells and whistles have been sacrificed to make this happen!


          1. Anonymous Coward
            Anonymous Coward

            Re: 24x7 support staff

            Some of this presumes that AWS/Azure 7x24 support staff are paying attention to your stuff as a priority, and knowledgeable enough to support it.

            I have no strong feelings either direction. But I will say that regardless of the cloud (or hosting, if you go that route) provider's staff, IME you still need your own staff to look after your operations. Obviously in the case of AWS et al your staff won't be swapping failed drives etc., so some savings on staff for that sort of activity.

        2. Ken G Silver badge

          You don't need multi-region unless you're running time sensitive apps world wide. Multi AZ, yes - you need more than one data centre, redundant networking providers and routes, power etc, and, of course, separate teams to run them. If you go to a hosting provider with that already set up then it doesn't cost much more than twice the hardware.

      2. Nate Amsden

        8PB is a lot, but it's not really for object storage. HPE Apollo 4510 is a 4U server that can have up to 960TB of raw storage (so ~10PB per rack, assuming your facility supports that much power per rack). Depending on performance needs, 60 x 16TB drives may not be enough speed for 960TB by itself - probably want some flash in front of that (handled by the object storage system). Of course you would not be running any RAID on this; data protection would be handled at the object layer. Large object storage probably starts in the 100s of PB, or exabytes.
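As a sanity check on the density claim above (assuming a standard 42U rack and the 60 x 16TB drive count from the comment; switches and PDUs ignored):

```python
rack_u, chassis_u = 42, 4
tb_per_chassis = 60 * 16                    # 960 TB raw per Apollo 4510
chassis_per_rack = rack_u // chassis_u      # 10 chassis fit, ignoring network gear
rack_raw_pb = chassis_per_rack * tb_per_chassis / 1000   # ~9.6 PB per rack
```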

        There's no real need to use Ceph, which is super complex (unless you like that). Probably better to be using something like Cohesity or Scality (no experience with either), both available for HPE Apollo (and other hardware platforms I'm sure). There are other options as well.

        I think I was told that Dropbox leveraged HPE Apollo or similar HPE gear plus object storage software when they moved out of Amazon years ago. As of 2015 Dropbox had 600PB of data according to one article I see here.

        I'm quite certain it would be easy to price a solution far less than S3 at 8PB scale, or even less scale. You also don't need as much "protection" if you choose proper facilities to host at. Big public cloud providers cut corners on data center quality for cost savings - it makes sense at their scale, but users of that infrastructure need to take extra care in protecting their stuff. Hosting it yourself you can use the same model if you want, but if you are talking about 8PB of data that can fit in a single rack (doing that would likely limit you to the providers that can support ~20kW/rack - otherwise split into more racks), I would opt for a quality facility with N+1 power/cooling. Sure, you can mirror that data to another site as well, but there's no need to go beyond that (unless geo latency is a concern for getting bulk data to customers).

        1. Tom 38

          "8PB is a lot, but it's not really for object storage."

          We've already established 8 PB is the size of their current data, not the size of the storage system they'd need to spec. So let's say you don't have multi-AZ multi-site; you just stick to 2 DCs, with each object stored twice for resilience (too low for my liking, but let's get a back-of-the-envelope estimate here).

          Let's assume also that you don't need any flash in front, just spinning disks. Lower estimates, right?

          So: 10 PB, don't want the disks at more than 80% capacity, 12.5 PB, objects stored twice - 25 PB. That's 1600 16TB disks per cluster. Two sites, so 3200 disks. I don't know the cost of an enterprise grade 16 TB disk, but it's going to be between $300 and $400?

          Just on disks, we're spending an up-front of between $0.96m and $1.28m. We've also got to have 54 of these HPE Apollo 4510s at $4-5k each? $0.22m - $0.27m.

          So for the budget solution, giving less availability than AWS, up-front we need $1.2m - $1.5m, plus support contracts, plus racks, plus power, plus hosting, plus switches.
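The whole estimate above can be reproduced in a few lines (disk and chassis prices are the commenter's guesses, not quotes):

```python
usable_pb = 12.5                      # 8 PB + growth, under the 80% Ceph ceiling
raw_pb = usable_pb * 2                # each object stored twice -> 25 PB per cluster
disks_per_cluster = 1600              # 25,000 TB / 16 TB = 1562.5, rounded up
sites = 2
disks = disks_per_cluster * sites                  # 3200 drives
disk_cost = (disks * 300, disks * 400)             # $0.96m - $1.28m
chassis = -(-disks // 60)                          # 60 drives per 4510 -> 54 chassis
chassis_cost = (chassis * 4_000, chassis * 5_000)  # $0.22m - $0.27m
total = (disk_cost[0] + chassis_cost[0],
         disk_cost[1] + chassis_cost[1])           # ~$1.2m - $1.5m up-front
```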

          And, as you say, to get performance, we're probably going to need additional SSDs, and people to install, test, manage, and maintain this system. Getting the system in-place is going to require people with real knowledge of setting up and configuring these systems to give the performance the business needs.

          Dropbox is a good example - they're a storage company. Storing things is what they do. It makes sense for Dropbox to do that in house; it's their business. Is getting good at storing files Basecamp's business? Looks like it will be!

          1. Nate Amsden

            Getting good at storing files doesn't have to be basecamp's business.

            People seem to jump to the end conclusion, either you build everything yourself, or you use a public cloud, and I just don't understand why. There is a massive chasm of space in between those two options in the form of packaged solutions from vendors like HPE, Dell, and others. Many different tiers of equipment hardware, software and support.

            1. A Non e-mouse Silver badge

              The argument being made is that the basic hardware (disks & disk chassis) alone comes close to the cost of the cloud solution. The cost of the staff to build & run a DIY solution is greater than zero; the extra cost of a packaged solution over the bare hardware is greater than zero. Either way, you're still spending more to do it on-prem than you are by paying a cloud vendor.

          2. Nuff Said

            Yes, but...

            Taking your estimates as they stand (no compression or de-dupe?), you say we're spending between $0.96m and $1.28m up-front. The key phrase is "up-front". Year 2 costs onwards are a fraction of that, so over a five-year period that averages out to well under $0.5m a year.
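That amortisation point, sketched (a five-year hardware life assumed; support, power and hosting deliberately left out, as above):

```python
upfront_low, upfront_high = 960_000, 1_280_000   # the disk-only estimate above
life_years = 5
annualised = (upfront_low / life_years, upfront_high / life_years)
# -> $192k - $256k a year for the disks, comfortably under $0.5m
```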

        2. hoola Silver badge

          8PB is perfect for object storage. There are plenty of solutions out there using vendor rolled appliances or reference architecture on the likes of HPE.

          The Apollo hardware is perfect for object store.

          Pretty much any cloud storage is going to be underpinned by some form of object storage, it is the only thing that works at these sorts of scales.

      3. Anonymous Coward
        Anonymous Coward

        That maths is way off

        It's exactly the AWS fanboys who trot out this completely wrong storage calculation and end up screwing their own company with false financial justifications.

        Direct attached storage in on-prem servers is not for storing blobs on a NAS-like server; it's for implementing software-defined storage like PowerFlex, which is vastly more efficient and resilient. You don't need multiple copies of the data in the way resilience zones work, and you're free to take advantage of de-dupe, compression, pointer-based snapshots, erasure coding etc. 1TB of NVMe could be worth 100 TB in those scenarios.

        Either the fanboys don't know this or they conveniently forgot to tell their superiors when doing a business case!

        1. Claptrap314 Silver badge

          Re: That maths is way off

          Resilience. You might want to look that term up.

          Certainly, deduping is (or can be) a major saving, but if Basecamp is even 1/4 as good as DHH says, they are already doing that in S3.

          1. Peter-Waterman1

            Re: That maths is way off

            If you want to keep it apples for apples, then run dedupe on AWS as well.


          2. Peter-Waterman1

            Re: That maths is way off

            Hold up, what about backups...?

    3. roynu

      More numbers

      I have some approximate numbers from a different case:

      I have a little more than 1000 CPU cores and 20 GPUs across two locations that total about $400k per annum excluding personnel cost (but including hardware, data center rent, cooling, power, communication).

      I also have about 200 vCPUs in public cloud that run around $300k per annum. No GPUs and much less storage.

      No matter how you spin this, I’m seeing less than half the cost in private cloud compared to public cloud for a comparable amount of resources with comparable resiliency.
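Normalising those figures per core makes the gap explicit (cores and vCPUs aren't strictly comparable units, so treat this as indicative only):

```python
private_cost, private_cores = 400_000, 1000   # excl. personnel; incl. HW, DC, power
public_cost, public_vcpus = 300_000, 200
private_per_core = private_cost / private_cores   # $400 per core per year
public_per_vcpu = public_cost / public_vcpus      # $1,500 per vCPU per year
ratio = public_per_vcpu / private_per_core        # public ~3.75x private, per unit
```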

      1. Claptrap314 Silver badge

        Re: More numbers

        2 locations is NOT resilient.

        Source: I learned SRE at Google.

        This was for our OWN products, not for GCP. Cloud haters SEVERELY underestimate the cost of actually delivering 4 nines. (Note: Azure does not appear to be capable of delivering 4 nines.)

        Of course, Google's systems were engineered for 5+, not just 4. But few businesses actually need 5.

        If your business does not need 4 nines, then yes, you might be able to save money by dropping down to only having your data and systems in two places.
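For concreteness, what those availability targets allow in annual downtime (straight arithmetic, nothing vendor-specific):

```python
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(nines: int) -> float:
    """Allowed downtime per year for an availability of `nines` nines (0.99...9)."""
    unavailability = 10 ** -nines
    return MINUTES_PER_YEAR * unavailability

four = downtime_minutes_per_year(4)   # ~52.6 minutes a year
five = downtime_minutes_per_year(5)   # ~5.3 minutes a year
```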

    4. Mr.Nobody

      Still peanuts compared to public cloud costs.

      I work at a holding company with several SaaS applications that have been acquired over the last decade or so. Some are traditional designs hosted in a colo (three tier web/app/db), some are shiny k8s/RDS implementations in the public cloud.

      The costs for the systems in Public Cloud are at a minimum 5x what the same systems cost in the colo. The colo costs are one of the smallest spends in the colo equation.

      Yes, we still need experts in storage, networking, computing, OS, etc. But these same individuals are put to work on the cloud stuff as well. No big surprise that the devs that put things in AWS don't think much about backups, DR, what's really required for HA, etc. Even if everything moved to the cloud, the same people would have jobs doing work in the cloud instead. Maybe a few would leave because it's not their skillset and they would do better elsewhere, and then you need to find new people that are cloud familiar to work on things, and they are not cheap.

      Last and most importantly from a cloud vs. colo cost perspective: cloud costs generally rise in a linear fashion. Add a new customer that needs more compute and storage, and the bill rises in step. There is a greater economy of scale to be had in the colo - there is generally more slack in the compute and storage systems there, so it could cost nothing. Even adding a few servers and another shelf isn't going to break the bank when cloud is 5x more overall.
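A toy model of that marginal-cost argument (all numbers hypothetical, purely to show the shape of the two curves):

```python
def cloud_cost(customers: int, per_customer: int = 500) -> int:
    # Public cloud: every new customer's compute and storage is billed
    return customers * per_customer

def colo_cost(customers: int, base: int = 20_000,
              customers_per_step: int = 100, step_cost: int = 15_000) -> int:
    # Colo: fixed base with slack; a hardware purchase only at each capacity step
    return base + (customers // customers_per_step) * step_cost

# Between steps, an extra colo customer costs nothing; in cloud it always costs.
```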

      We are looking at more ways to move things out of cloud and back into colo. It just costs far less, and we already have the people.

      As to the whole notion that recreating the AZs AWS/Azure have would cost a fortune to do in a colo: the colo model and the public cloud model are not the same. The colo is designed so that nothing is on hardware that isn't redundant, or in a cluster that cannot survive multiple host failures. There should not be site downtime at a colo. They all generally have blended internet connections as well.

      AWS will tell you straight up that an app should be designed for multi-AZ failover. #1, this sounds easier than it is. #2, if a whole AZ goes down and all these apps try to start up in another AZ, it will be a small disaster: storage in the other AZs will be under tremendous strain, and compute may be unavailable. This has already happened several times in the past decade with AWS, and everyone seems to forget about it. Heaven forfend one of these AZs is lost for several weeks, or forever.

      In a colo environment, we have a DR site hundreds of miles away from production sites. Yes, it would take a bit to get it up and running, but it's there if we need it. The compute and storage is all dedicated to our company. We take our old production hardware and set it up as DR. It costs nothing, and since it isn't powered on, we don't have a bill from the colo company for electricity or cooling.

      1. roynu

        My experience with a group of SaaS companies, with a somewhat similar sounding portfolio, is the same as yours. In some cases the same app is hosted on public cloud for some regions and private cloud for other regions. Public cloud is significantly more expensive in every case, although to be fair, personnel requirements are somewhat lower.

        In general SaaS companies should find that the strength of the private cloud business case closely follows the scale of operations.

      2. Claptrap314 Silver badge

        How does your "blended internet connection" do when the electrical substation feeding it dies? You don't get 4 nines (let alone 5) if you don't have automatic failover when a facility goes down.

        Look, if you don't need 5 nines (and most don't), then fine. If you've got a good enough team, and a large enough operation, you can save money by not doing the things that AWS & GCP are doing to deliver. And if you DO need 5 nines, then yes, you need someone who knows what it takes (even on AWS or GCP) to make it happen.

        Hint: there is NO WAY to confidently deliver 5 nines unless you are in three geographically dispersed data centers. Each data center being capable of carrying 130%-150% of a black swan event with 10-second failover. The magic of SRE means knowing how to deliver that WITHOUT tripling all of your costs.
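One way to read that sizing rule (an interpretation, not the commenter's exact model): with N sites, the surviving N-1 must still carry a black-swan peak, which is where the "without tripling" saving comes from:

```python
def per_site_capacity(sites: int, peak_multiplier: float) -> float:
    # Fraction of normal load each site must absorb after losing one site
    return peak_multiplier / (sites - 1)

each = per_site_capacity(3, 1.5)     # each of 3 DCs sized to 150% / 2 = 75% of peak
provisioned = 3 * each               # 225% total, vs 300% for naively tripling
```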

        For any operation, there is a crossover point where the size of the operation justifies bringing the whole thing in house. But you must properly account for staff and resilience to judge that. You cannot just eyeball hardware costs and say "that's too high".

    5. Anonymous Coward
      Anonymous Coward

      Sounds like they're already paying for (at least some) staff to look after the cloud stuff, so that might be a partial wash.

      On that specific note, I think too many people (or management) are too eager to buy into The Cloud, with the prospect of cutting IT staff. From what I've seen, that's a large mistake. Cloud or concrete, metal or virtual, you always need somebody to look after things.

      Anyway, fair point about the watts and BTUs etc. Does AWS et al break out those costs anywhere? My own (admittedly very limited) use of EC2 does not show them, but I'm not paying for any extended accounting or monitoring plans.

  3. Nate Amsden

    nice to see

    Nice to see them go public about this. Not many companies are open about this kind of stuff. Another one I like to point out to people (but with far less detail - mainly just a line item in their public budget at the time). They don't call it out in the article text, but there is a graphic there showing their budget breakdown, with cloud services taking between 21-30% of their REVENUE (cloud spend peaking at $7M), and you can see in the last year they were moving out, as they had a data center line item.

    I moved my last org out of cloud in early 2012, easily saved well over $12M for a small operation in the decade that followed. I had to go through justification again and again as the years went on and new management rolled in (and out) thinking cloud would save them money. They were always pretty shocked/sad to see the truth.

    At the previous org to that, I proposed moving out but the board wasn't interested (everyone else was, including the CEO and CTO, but not enough to fight for the project). They were spending upwards of $400-500k/mo at times on cloud (and having a terrible experience). I left soon after, and the company is long since dead.

    You can do "on prem" very expensive and very poorly but it's far easier to do cloud very expensive and very poorly.

    1. steviebuk Silver badge

      Re: nice to see

      We had someone like that in management who wanted to make it his pet project to go full cloud. Only one consultant has been honest that it's more expensive. Now the place is making people redundant to save costs, when they could just save it on his cloud idea bollocks.

    2. Code For Broke

      Re: nice to see

      At my last gig, I proposed and then oversaw implementation of a plan to actually take the business from on-prem to no-compute. We realized we were spending millions on computers just to make life easier for the lowest skilled employees. So, we sent pink slips to that lot, sold the servers, desktops and laptops on eBay, and these days the only budget line that has gone up is paper. It was a bit more of an increase than I expected, but well within the budget. All told, we pocketed about $11T US.

    3. A Non e-mouse Silver badge

      Re: nice to see

      "I had to go through justification again and again as the years went on and new management rolled in (and out) thinking cloud would save them money"

      And by the sounds of it they listened to you. Kudos to you for writing a sound business case and kudos to your management for listening to you & your evidence.

      Beers all round.

    4. Anonymous Coward
      Anonymous Coward

      Re: nice to see

      > You can do "on prem" very expensive and very poorly but it's far easier to do cloud very expensive and very poorly.

      I think that puts it rather well.

      I.e. the "cloud is easy" pitch can cut both ways; easy to just get something going, quickly, and more of them as needed. Equally easy to spin up lots of the wrong thing (just keep clicking, eh), or the less-than-efficient thing (e.g. more/bigger/faster than needed, because the developer didn't know or care) and run up a big bill.

      Outfits with more predictable / stable growth likely won't see the cost:benefit to clicking a new cloud datacenter into existence. They simply don't need to pay for that kind of flexibility (elasticity) you get from cloud.

      1. John H Woods Silver badge

        Re: nice to see

        Exactly - economies of scale are balanced by diminishing returns. A big enough (and almost certainly multi-site) company with a predictable compute load will find that outsourcing all that to $CLOUD_PROVIDER will probably not generate enough efficiency savings to pay $CLOUD_PROVIDER's margin.

        Naturally the cloud providers are keen to net the biggest organisations (especially governments) because they have the deepest pockets. Yet it is precisely these organisations who stand to benefit least from cloud, as they already have the budget, the real estate and the staff to do it nearly as cheaply as the cloud provider can.

  4. A Non e-mouse Silver badge

    Cloud Vs On-Prem

    Each has their use case:

    For bursty workloads, cloud is probably a more cost effective solution.

    For constant load tasks, on-prem is probably more cost effective.

    I think the "word on the street" is that companies need a mix of solutions in their pocket because there isn't one ring to rule them all.

    1. Nate Amsden

      Re: Cloud Vs On-Prem

      Should do the math for how bursty is bursty. At my last company I'd say they'd "burst" 10-30X sales on high events, but at the end of the day the difference between base load and max load was just a few physical servers(maybe 4).

      IMO a lot of pro-cloud folks like to cite burst numbers but probably are remembering the days of dual socket, single core servers as a point of reference. At one company back in 2004 we literally doubled our physical server capacity after a couple of different major software deployments - same level of traffic, the app just got slower with the new code. I remember ordering direct from HP and having stuff shipped overnight (DL360 G2 and G3 era). Not many systems; at most we ordered maybe 10 new servers or something.

      Obviously modern servers can push a whole lot in a small (and still reasonably priced) package.

      A lot also like to cite "burst to cloud", but again have to be careful, I expect most transactional applications to have problems with bursting to remote facilities simply due to latency (whether the remote facility is a traditional data center or a cloud provider). You could build your app to be more suited to that but that would probably be quite a bit of extra cost (and ongoing testing), not likely to be worth while for most orgs. Or you could position your data center assets very near to your cloud provider to work around the latency issue.

      Now if your apps are completely self contained, or at least fairly isolated subsystems, then it can probably work fine.

      One company I was at, their front end systems were entirely self contained (no external databases of any kind), so scalability was linear. When I left in 2010 (the company is long since dead now) costs for cloud were not worth it vs co-location. Their front end systems at the time consisted of at most maybe 3 dozen physical servers (Dell R610s back then; at their peak each server could process 3,000 requests a second in Tomcat) spread across 3-4 different geo regions (for reduced latency to customers as well as failover). A standard front end site deployment was just a single rack at a colo. There was only one backend for data processing, which was about 20 half populated racks of older gear.

    2. Gene Cash Silver badge

      Re: Cloud Vs On-Prem

      Don't forget "cloud" started out as Amazon's solution to the burst of activity during Christmas, and most of that sat around the rest of the year, so they started renting it out.

      1. Anonymous Coward
        Anonymous Coward

        Re: Cloud Vs On-Prem

        The story of AWS being formed to rent out spare capacity within Amazon is often quoted but not actually true, according to every article I've read about the start of AWS. And it's not very likely either, if you think about the degree of separation you'd want in place.

        Try for starters.

        1. dfxdeimos

          Re: Cloud Vs On-Prem

          Yeah, it is apocryphal. Fun story though.

  5. Anonymous Coward
    Anonymous Coward

    As some of us have been saying here for years

    Moving to someone else's cloud is not the answer to anything long term... which is naturally 42. (sic)

    There are plenty of ways to create your own cloud. Plus if you site it right, you can help heat your offices.

    1. Joe W Silver badge

      Re: As some of us have been saying here for years

      A friend has that problem: They use the datacentre to heat one of their office blocks. Unfortunately the cooling seems to rely on the heating being on - especially when they need to get rid of a lot of heat. Which is (as you guessed) in summer....

  6. Gunboat Diplomat

    Hiring impact

    I'm not sure moving away from the public cloud providers is feasible from a hiring perspective for companies that develop software internally even if the financials make sense.

    Most job adverts these days seem to consist of a title like Software/Data Engineer that lists a primary programming language (and may even include it in the title) and then lists all the services of a particular cloud as essential to the role. However, this seems to be changing to include the target cloud in the title (e.g. Azure Data Engineer, Software Engineer - AWS).

    With this focus on a particular cloud when hiring, it doesn't make sense to join a company that isn't using the cloud you prefer to work in, as it could cause you problems with either AI CV filtering or non-technical recruitment staff when applying for future roles. IT professionals have, on the whole, always been good at following trends, so may not want to work at a company that isn't on their preferred cloud.

    1. Potemkine! Silver badge

      Cyber-darwinism - Adaptation is the key

      Being locked in an environment is the best way to disappear. Ask dinosaurs.

      As always, any rule has its exception (Cobol comes to mind).

      1. Gunboat Diplomat

        Re: Cyber-darwinism - Adaptation is the key

        My argument is that as far as the market goes, on-prem is the new COBOL. Although COBOL still survives and there are plenty of systems out there that still use it, it isn't mainstream any more and certainly isn't a desirable skill for most engineers. On-prem is becoming the same regardless of whether or not it's fair.

      2. Paul Hovnanian Silver badge

        Re: Cyber-darwinism - Adaptation is the key

        "Being locked in an environment is the best way to disappear."

        Perhaps. But there are a lot of things that organizations do which are not part of their 'core competencies' (sorry for the buzzword). For these, it may be better to just purchase off the shelf products/services. But for things that differentiate you from your competitors, you may want to keep those close to home.

        Storage and processing might be better purchased as commodities. But applications more closely tied to your business processes are better held close to home if it is those processes that make you stand out from the crowd.

    2. Paul Crawford Silver badge

      Re: Hiring impact

      If you are doing on-prem solutions you absolutely need skilled staff, and really at least one extra person over base cover to allow for illness or folks leaving. That is not cheap or easy to find, but if you are looking at £100k+ per month you have the budget to make it happen, and if your business involves software development and maintenance you probably have many of the skills in-house to begin with.

      For a Mom and Pop shop that has minimal IT needs, cloud / hosted services make more sense for exactly the same reason - the cost of looking after hardware etc. is way too high, and they don't have/need that sort of staff anyway.

      1. Anonymous Coward
        Anonymous Coward

        Re: Hiring impact

        I work for a company with some government ties, doing lots of government work. We are based in Europe, and going to the cloud is quite obviously not an option for us (CLOUD Act, for example). So on-prem it is. Basically we are now a cloud provider for that line of work, or aspiring to be. This means we have to have a bunch of bright lads and lasses to wrangle the hard- and software. We also need people doing all the cloudy stuff. And of course we are troubled by e.g. Microsoft not really wanting their customers victims to have all of the nice toys on prem.

      2. Nate Amsden

        Re: Hiring impact

        I think you are incorrectly confusing SaaS and IaaS in your statement.

        Mom and Pop shops that have minimal IT needs likely will have almost zero IaaS, because they can't manage it. IaaS (done right) IMO requires more expertise than on prem, unless you have a fully managed IaaS provider. But the major players don't really help you with recovery in the event of failure; it's on the customer to figure that out. Vs on prem with VMware, for example: if a server fails the VMs move to another server; if the storage has a controller failure or a disk failure there is automatic redundancy. Doesn't protect against all situations of course, but far more than public cloud does out of the box. If a Mom & Pop shop just has a single server with no redundant storage etc., and that server has a failure, they can generally get it repaired/replaced with minimal to no data loss. Vs server failure in the major clouds is generally viewed as normal operations, and the recovery process is more complex.

        I've been calling this model "built to fail" since 2011, meaning you have to build your apps to handle failure better than they otherwise would need to be. Or at least be better prepared to recover from failure even if the apps can't do it automatically.
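
        In practice, "building apps to handle failure" often starts with wrapping calls in retry logic with exponential backoff and jitter. A minimal illustrative sketch of that pattern (the function and its defaults are hypothetical, not any particular cloud SDK):

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky operation with exponential backoff and jitter.

    `operation` is any zero-argument callable that may raise; the
    attempt and delay defaults here are illustrative, not vendor values.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure to the caller
            # exponential backoff with jitter, to avoid thundering herds
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# e.g. call_with_retries(lambda: fetch_from_object_store("key"))
```

        That is only the simplest piece, of course - real "built to fail" designs also need idempotent operations and replicated state, which is where the extra expertise (and cost) comes in.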

        SaaS is a totally different story, where the expertise of course is only required in the software being used, not any of the infrastructure that runs it. Hosted email, Office, Salesforce, etc etc..

        On prem certainly needs skilled staff to run things, but doing IaaS public cloud (as offered by the major players' standard offerings) right requires even more expertise (and more $$), as you can't leverage the native failover abilities of modern (as in past 20 years) IT infrastructure, nor can you rely on being able to get a broken physical server repaired (in a public cloud).

        1. Code For Broke

          Re: Hiring impact

          "SaaS is a totally different story, where the expertise of course is only required in the software being used, not any of the infrastructure that runs it. Hosted email, Office, Salesforce, etc etc.."

          I can assure you that for Oracle Fusion Cloud (SaaS) customers, you still very much need to keep several infrastructure boffins in-house. The Oracle Cloud SaaS "experience" is fraught with constant issues that can only be traced to the horrific Oracle infrastructure, and Oracle will not humor your SRs until you basically have your own infrastructure team isolate the exact issue for them.

          1. Nate Amsden

            Re: Hiring impact

            That is very interesting, and unfortunate for the customers. Sounds like that is not a real SaaS stack? Perhaps some hacked together stuff operated as a managed service?

            I would not expect that in a SaaS environment a customer would even be able to look at the underlying infrastructure metrics or availability; it's just not exposed to them. I know I got frustrated using IaaS years ago, because not enough infrastructure data was available to me.

          2. Anonymous Coward
            Anonymous Coward

            Re: Hiring impact

            I am confused, because apparently it was not me who wrote this comment...

            Just had an SR closed on me because they insisted that the several hour long outage where the Fusion environment couldn't reach our server had to have been the fault of our server, despite the fact that other systems could talk to it just fine, and only Fusion had the problem.

            In my experience, even after you isolate the issue for them, it's still very much a toss-up as to whether they take your SR seriously and you actually get any help. For anything less than a critical production issue you need to be prepared for a 4-6 week delay before you get to the part of the process where they actually understand the issue and work on fixing it.

            Mind you that's not an issue with their SaaS offerings specifically, that's just Oracle Support for you in general.

            We also had a longstanding (years long) issue where their SaaS webservices would just randomly fail every now and then (the same request resubmitted a minute or two later would succeed). They refused to investigate unless we could replicate the issue on demand, which of course we couldn't. This issue suddenly stopped happening several months ago and our error log got a lot quieter.

  7. tetsuoii

    A company I work with paid about $300 a month just to store around 4TB of data, and downloading that data (for local backup) cost nearly $4,000 IIRC. Whatever the exact numbers, the cost was eyewatering. People just don't realize they're being royally scammed.

    1. Paul Crawford Silver badge

      Cloud storage is only good value for small amounts.

      Big amounts cost, but equally big storage with genuine site-resilience is not cheap either as you have capital costs (buying storage stuff, budgeting for its inevitable replacement and thus data migration) and running costs (electricity mostly for big stuff and keeping the damn kit cool, staff costs, location/data centre property costs). Any sensible business should look at both options and really think through the pros and cons of either choice, not just the marketing hype from cloud vendors (or indeed storage vendors).

    2. runt row raggy

      the s3 pricing calculator says 93 bucks a month. a download at the most expensive internet rate is 370 bucks. what service did you use that is 3x to 10x more expensive?
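
      Those figures are roughly reproducible from AWS's published list prices (assuming S3 Standard at about $0.023/GB-month and the first internet-egress tier at about $0.09/GB; actual rates vary by region and change over time):

```python
# Back-of-envelope using assumed list prices (region-dependent, may be stale)
STORAGE_PER_GB_MONTH = 0.023   # S3 Standard, first 50 TB tier (assumed)
EGRESS_PER_GB = 0.09           # internet data transfer out, first 10 TB tier (assumed)

gigabytes = 4 * 1024           # ~4 TB of data, expressed in GiB

monthly_storage = gigabytes * STORAGE_PER_GB_MONTH   # ~ $94/month at rest
one_off_egress = gigabytes * EGRESS_PER_GB           # ~ $369 to pull it all out once

print(f"storage ~ ${monthly_storage:.0f}/month, egress ~ ${one_off_egress:.0f}")
```

      So "$300/month to store and $4,000 to download" implies either a much pricier provider, additional request/retrieval fees, or several full downloads - list-price S3 is in the ~$93 / ~$370 range the calculator gives.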

  8. chivo243 Silver badge

    managed hosting? Kicking the can to the other side of the road?

  9. steviebuk Silver badge

    Told you....

    ...that the "infrastructure free" idea was bollocks & would be more expensive.

    Always fun when you're right, not fun when not listened to.

  10. MikeLivingstone

    And our idiot Government want to join the cloud journey

    Yes, cloud is now revealed as a giant ripoff.

    So of course the UK Government now has idiot departmental CTOs parroting that they are moving to the cloud without knowing what that really means.

    1. Joe W Silver badge

      Re: And our idiot Government want to join the cloud journey

      Especially looking at the laws under which the parent companies of the cloud providers (or the providers themselves) operate.

      Can't wait for Schrems (3? or are we at 4 already?), can't wait for somebody doing the bloody legal battle over what is (and is not) allowed for PI data and gub'mint work and schools and stuff.

  11. spireite Silver badge

    Said for years - don't do it...

    Once you're in it is difficult to get out.

    Time and time again, I run into the mindset that

    1. Cloud is cheaper

    2. Cloud looks after itself

    3. You don't need to employ techies to admin it

    In my 10 years being exposed to it, the above three have proven false every time.

    In fact it gets worse. The number of platforms that are now 'in the cloud' - e.g. looking at the Azure-native ones - is mind-boggling now.

    In the early days, you possibly could get away with one man supporting it.

    Now, there is no chance.

    I'd argue that you need more techies to support it, than you would on-prem.

  12. tiggity Silver badge

    Cloud pricing can be nasty

    Have previously worked at smaller companies (so not enough clout to negotiate special deals, stuck with the "usual" commercial offerings). An awful lot of effort (some automated, some manual) spent monitoring & when necessary tweaking cloud configuration or applications / services running in the cloud to keep costs down.

    If you don't keep a close eye on things, costs can easily skyrocket if you mistakenly spend too long in the next (or a higher!) "tier" compared to your contracted "tier", and you will then get hit by prices relating to higher-tier usage. Generally the cloud providers give you a bit of leeway if you exceed agreed usage - be it in compute, storage etc. - as obviously usage spikes can occur, but you will find "sustained" exceeding of your tier limits causes grief... and "sustained" is often defined in an interesting way!
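
    The kind of automated "sustained excess" check described above can be sketched in a few lines (the window and thresholds here are made up for illustration; real providers each define "sustained" their own way):

```python
def sustained_tier_breach(daily_usage, tier_limit, window=3):
    """Return True if usage exceeded the contracted tier limit for
    `window` consecutive days - the kind of 'sustained' excess that
    can trigger higher-tier billing. All thresholds are illustrative."""
    streak = 0
    for usage in daily_usage:
        streak = streak + 1 if usage > tier_limit else 0
        if streak >= window:
            return True
    return False

# A one-day spike is tolerated; three days in a row is not:
# sustained_tier_breach([80, 120, 90, 95], 100)        -> False
# sustained_tier_breach([80, 120, 130, 125, 90], 100)  -> True
```

    Running something like this against daily billing exports is exactly the sort of "some automated, some manual" monitoring effort that quietly becomes a standing cost of being in the cloud.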

    Obv if you are already running at the highest cost tiers, then this is not an issue, but it can be an expensive gotcha for smaller outfits.

    Though there are upsides for a smaller company in using other people's hardware & contractually guaranteed* uptime / availability etc.

    * though not always as guaranteed & you may be surprised how feeble the compensation is when not achieved.

  13. elsergiovolador Silver badge

    Open source

    or Open Exploitation?

    I wonder what % of that money the developers of the open source software the cloud is using to make billions have ever seen.

    Open Source is one of the biggest scams in the industry aimed at getting free labour and not paying taxes on it.

    1. Lee D Silver badge

      Re: Open source

      Nobody forced people to write that software.

      They wrote it because they wanted it, they contributed it to others knowingly and consciously, and they were fully aware (more than anyone else) of the licensing arrangements under which they submitted it.

      Open Source programmers don't work on programs because they're exploited. They work on them because they want a system that's outside corporate control, works, is free, or even just exists.

      Nobody is sitting there prodding OS programmers into coding teams and forcing them to slave away in cubicles, with a list of criteria dictated by big business because they want to exploit it. The programmers made something. They gave it to the world. The world - including large corporates - decided to use it.

      And, yes, I'm an open-source programmer (I refuse the capital letters, and links to GNU, FSF etc. personally as I disagree with the way they do things). Most of my stuff is MIT-licenced. If a piece of it was picked up tomorrow and made someone a billion dollars... good luck to them. Sure, it'd be annoying but at the same time: I never did it for the money, or any expectation whatsoever.

      If you don't want companies to pick up your code and use it to make money - licence it appropriately.

      1. elsergiovolador Silver badge

        Re: Open source

        "They wrote it because they wanted it, they contributed it to others knowingly and consciously, and they were fully aware (more than anyone else) of the licensing arrangements under which they submitted it."

        It's the same as saying nobody forced people to gamble or do drugs. Just because people do these things of their own volition does not necessarily mean they have not been manipulated into doing something against their own interest.

        "Open Source programmers don't work on programs because they're exploited. They work on them because they want a system that's outside corporate control, works, is free, or even just exists."

        That's what corporations want open source programmers to think. The reality is that those programmers provide free R&D for these corporations, so it is in their interest to make developers think this way.

        "If you don't want companies to pick up your code and use it to make money - licence it appropriately."

        As in, if you don't want to gamble or do drugs, then just don't... That's quite an ignorant take.

        There is a lot of peer pressure to get people into open source, because of the big corporations' propaganda. You know, if you don't have OS contributions in your CV then you are a less worthy hire, and so on.

        And let's not get into the social aspect of this, where mostly people from privileged backgrounds work on those Open Source projects.

        1. spireite Silver badge

          Re: Open source

          I always look at open-source as being philanthropic..

        2. Lee D Silver badge

          Re: Open source

          If you are peer-pressured into slaving away on an open-source software project for any return - whether reputational, monetary, or otherwise - then you are particularly mentally vulnerable AND absolutely in the wrong part of software development.

          "You know if you don't have OS contributions in your CV then you are a less worthy hire and so on."

          Nonsense. Don't work for those companies, ever.

          "And let's not get into the social aspect of this where mostly people from privileged background work on those Open Source projects."

          Puh-lease... I was hacking on Slackware 3.9 and kernel 2.0.38 for years as part of floppy-based distro Freesco - precisely because I couldn't afford a real router or a hard drive, after I had spent most of my youth programming and giving away the results of that programming to friends, family and the Internet at large (my state-secondary in a deprived area that I grew up in LITERALLY hosted an assembly to show off that I'd created software, my brother had used his college account to upload it to a Usenet newsgroup (we didn't have internet!) and then I (my brother) later got an email from a woman in Canada saying how she loved it and thanking me for it... part of the assembly was literally the "Wow, look at this amazing new tech, a MESSAGE FROM CANADA sent over THE COMPUTER!").

        3. Code For Broke

          Re: Open source

          As someone who regularly gambles and does drugs, I seriously resent this pernicious association with open source developers.

    2. Nate Amsden

      Re: Open source

      You got a bunch of downvotes but you are right for the most part. A lot of the early open source models were: release the source for free, then have a business around supporting it. Not everyone would sign up as customers, but the goodwill from releasing the source would attract users. It worked well for several companies, and of course public cloud is taking that away from a lot of these orgs, which is unfortunate. And as El Reg has reported, several such companies have been very vocal about this situation.

      Obviously in many (maybe all?) cases the license permits this usage (at the time, anyway; some have introduced licensing tweaks since to prevent it), but I'm quite sure if you went back in time ~15ish years and asked the people making the products if they anticipated this happening, they would probably say no in almost all cases (perhaps they would have adjusted their licenses if they viewed that possibility as a credible threat). At the end of the day, the big difference between these cloud companies and earlier generations of "mom & pop" ISPs that were using Apache or whatever to host their sites is just massive scale.

      Those licensing their code under the BSD license or similarly completely open licensing probably wouldn't/shouldn't care anyway.

      Similarly for the GPL: a trigger of sorts for making the GPLv3 was TiVo "exploiting" a loophole in the GPLv2. So GPLv3 was made to close that hole (and perhaps others). There's even a term coined for it, "Tivoization":

      "In 2006, the Free Software Foundation (FSF) decided to combat TiVo's technical system of blocking users from running modified software. The FSF subsequently developed a new version of the GNU General Public License (Version 3) which was designed to include language which prohibited this activity."

  14. Lee D Silver badge

    So paying a for-profit company to run your IT costs more than running your IT yourself? Amazing. Whodathunk?

    1. Anonymous Coward
      Anonymous Coward

      OPEX vs CAPEX, innit?

      Most of this stuff is hiding monies from beancounters.

  15. Grunchy Silver badge

    Hella good mp3 collection

    8 petabytes is what, 8000 terabytes?

    What could that be, either a photo collection or a mp3 collection or what, a bluray movie collection?

    If it were ebooks I don’t think you’re going to read them all in 1 lifetime, is all I’m suggesting.

    (“Somewhere in my 8 petabyte data hoard is a single fact worth $1 million. How many centuries will it take to find it?”)

    1. Anonymous Coward
      Anonymous Coward

      Re: Hella good mp3 collection

      8k pron ?

    2. Code For Broke

      Re: Hella good mp3 collection

      8192 terabytes.

      1. Crypto Monad Silver badge

        Re: Hella good mp3 collection

        Not these days:

        * 8 Pebibytes = 8192 Tebibytes = 9,007,199,254,740,992 bytes

        * 8 Petabytes = 8000 Terabytes = 8,000,000,000,000,000 bytes

        I know it sticks in the throat (especially when talking about RAM), but that's where we are. Blame the hard drive manufacturers, who wanted to sell drives with 500,000,000,000 bytes as "500GB" and not "465GB".
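
        The two conventions are easy to sanity-check (a quick arithmetic check, nothing more):

```python
# Binary (IEC) vs decimal (SI) interpretations of "8 petabytes"
PEBI = 1024 ** 5   # bytes in one pebibyte (PiB)
PETA = 1000 ** 5   # bytes in one petabyte (PB)

assert 8 * PEBI == 9_007_199_254_740_992   # 8 PiB = 8192 TiB
assert 8 * PETA == 8_000_000_000_000_000   # 8 PB  = 8000 TB

# The gap the drive makers exploit: a decimal "500 GB" drive,
# reported in binary units, truncates to the 465GB the buyer actually sees
assert 500 * 10**9 // 2**30 == 465
```

        The ~11% gap between the two readings of "8 petabytes" is nearly a whole pebibyte, which matters when you're sizing a storage buy.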

        1. Code For Broke

          Re: Hella good mp3 collection

          I upvote you and tip my hat. Thank you for the gracious correction.

  16. werdsmith Silver badge

    We went to cloud, major migration project, closed down the on prem data centre replaced with a comms cabinet.

    Then when the real costs happened another major migration project, moved everything prod to co-lo. Kept cloud for dev/test environment, which is switched off for no cost at night and weekend.

    Savings compared to the old cloud infrastructure are significant.

    1. Anonymous Coward
      Anonymous Coward

      Where a mate works, they moved to Office 365/OneDrive, supposedly to allow easier working from home etc. In reality most people understood it was about saving money and nothing else. Well, one day the working day starts and there's no Internet connection at the head office - it stops working totally. Urgent calls to their telecoms/internet provider indicate the problem is that a fibre line was ripped in half by a JCB during road works. My mate asks if he can go home for the day, as it won't be fixed by close of play. He points out that they moved to the cloud to allow Working From Home. He's told no he can't - why can't he get on with something not using the Internet instead? He pointed out at some length that he couldn't do a fat lot, as everything was in the cloud. He said he could compose emails in WordPad, using his phone to read the new ones in the inbox. Bosses were not impressed. Eventually a 4G modem/router was rigged on each floor around lunchtime. That allowed work to continue, but he said it still wasn't ideal by any means.

  17. Alex_A

    Wishful thinking

    They are not even running their own DB but using managed RDS and Elasticsearch, and they are thinking about going bare metal?

    Boy, are they in for a surprise...

  18. mynciboi

    Are we seriously

    I work at a major vendor, won't say which, but I am also doing cloud certs etc. - basically hedging my bets :)

    The first thing is that it is nearly impossible to control spend in cloud. Almost all services which are reliant on each other cost money, but even if they say what they cost at the point of ordering, it can be hard to quantify, hard to design for, and your devs and admins need to be super aware of this to minimise things like CloudFront, inter-AZ, and inter-region traffic. In fact many of the most useful features involve traversing the AZs and regions to get the benefit of being there in the first place. Imagine what the Data Transfer line item looks like on the example above's bill! Also, why would you choose to pay for inter-site traffic? That's not a metric a lot of people in my experience measure, so they don't even consider it.

    Secondly there is the bespoke stuff - AWS / Azure etc. can and do remove services regularly. This means you could have a business service reliant on a service or function which won't be there any longer. We all know how easy it is to get the buy-in, start a project to rewrite an application, and get that completed, tested and deployed before the timer runs out, right? No overruns on that project.

    Thirdly, the training for the AWS Solutions Architect cert talks about almost all of the applications as scaling, growing app types which are suited to running in cloud. There is no talk about the average run-of-the-mill business application which works just as well (or better, nearer to the users) off cloud than it does in public cloud.

    I could go on, but for now, lastly, there is the resilience. None of the supposed resilience means anything when this exists in their terms and conditions:

    Microsoft: "The following responsibilities are always retained by you: Data, Endpoints"

    Google: "Warranty - We provide our services using reasonable skill and care. We don't make any other commitments about our services ... (including the content in the services)."

    AWS: “Because AWS customers retain ownership and control over their content within the AWS environment, they also retain responsibilities relating to the security of that content". “customers retain control of what security they choose to implement to protect their own content, applications, systems and networks – no differently than they would for applications in an on-site data center.”

    To summarise: any part of any cloud service could fail and they will not take responsibility - your backups, site links, a whole AZ, a core service, links between their own services controlling which parts of your app are active, etc. At least if you design the service in an on-prem or hosted DC, you design the service you need, create the resilience you need, and don't pay through the nose for a load of stuff you don't really need.

    1. Steve Davies 3 Silver badge

      Re: Are we seriously

      Cloud Salesmen (they are usually men) are just modern day Snake Oil Salesmen.

      i.e. Not to be trusted one micron. If they can't guarantee the savings then walk away.

      Don't forget about the costs you will incur to get your data out of their cloud. The vendors will want to charge you three arms and four legs for it.

  19. tracker1

    I'd go split usage strategy

    I'd definitely bring compute, DB and search in house. And I would replace S3 with Cloudflare R2 for the usage that's public facing.

    Search services seem to be the most singularly overpriced SaaS out there compared to self-hosting Elastic or similar.

    I would probably replace Dynamo with CockroachDB, which will increase usage options and be slightly easier to manage than Cassandra.

    Depending on the total usage, a couple of racks at 2 hosted facilities should be a cost saving and likely better performing.

    It's funny: VPS makes the most sense at small scale... cloud at mid-scale, so you can save on personnel... but eventually you hit the point where self-hosting in a rented data center takes hold.

  20. RobertWaters

    Deja vu

    This is just a copypasta of the 1990's "client server is cheaper than mainframe" religious wars. You can't compare apples with orangutans.

    1. Anonymous Coward
      Anonymous Coward

      Re: Deja vu


  21. Anonymous Coward
    Anonymous Coward

    Different Outcomes

    At the end of the day, the cloud providers provide services and make a profit doing so. Then factor in that the cloud engineers implementing those services need to do it at a scale of operation so insanely much bigger than anything you would need to implement on premise that they really do need to factor in code to handle all of the 1 in 1,000,000 chance things, because those are actually occurring about every 5 minutes.
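
    That scale point is easy to make concrete (the operation rate below is a made-up illustration, not any provider's real numbers):

```python
# If a one-in-a-million fault hits a platform doing a billion ops a day...
ops_per_day = 1_000_000_000          # hypothetical fleet-wide operation rate
fault_probability = 1 / 1_000_000    # the "1 in 1,000,000 chance" event

expected_faults_per_day = ops_per_day * fault_probability   # 1000 per day
seconds_between_faults = 86_400 / expected_faults_per_day   # ~86 seconds

# At that scale the "rare" event is routine: roughly one every minute and a half.
```

    An on-premise shop doing a few orders of magnitude fewer operations might genuinely never see that fault, which is why the provider's engineering overhead isn't something most customers need to buy.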

    That's why, for a medium or large sized organisation, you can almost certainly do it cheaper in house, and deliver a solution that solves the problems you are actually facing.

    However, there is the management factor to take into account. Looking after infrastructure takes management, and senior managers are responsible for the outcomes. As a senior manager, it's much more comfortable to be able to say that fixing the issue is a hardware supplier's problem and we have engaged their support team to get it resolved, rather than "my hardware team is on it and here's the latest 2-hourly update".

    For smaller businesses, the ability to scale up without burning through your capital expense budget means that for most workloads, cloud makes sense.

  22. sketharaman

    Only $3.2M?

    The difference between cloud and onprem is of course significant but, in itself, an annual spend of $3.2M on cloud infra seems to be chump change for a company of this size? At least, it doesn't sound very "obscene".

  23. Anonymous Coward
    Anonymous Coward

    Fun with Economics

    In sum, CTO whines "I'm paying $9.6M/3Y for AWS! Waaaa! .... Ridiculous when I can buy 3 servers with 15TB for $46,332 (on sale)!"

    Since it's hard to know what exactly his strategy is here, such as it might be, let's assume his plan is to implement 8PB of replicated data using these servers, and that they're something other than RAID0... that's $9.6M for an AWS "data center" vs {math} USD$50M just for server hardware with 100% of storage used. Oh? You wanted to use software + network on those servers, with electricity, space to store them, etc.? Ahem. Dell doesn't sell all of that - not all of it, anyway - but they will take a stab at IT-ing for you at a hefty contract rate.
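
    For what it's worth, a figure in that ~$50M ballpark is reproducible under stated assumptions (the $46,332 / 15 TB price is the quoted deal; the 2x replication factor is a guess for site resilience):

```python
# Back-of-envelope: hardware cost to hold 8 PB with simple 2x replication
price_per_tb = 46_332 / 15          # the quoted 3-server / 15 TB deal, per TB
usable_tb = 8_000                   # 8 PB in decimal terabytes
replication = 2                     # assumed number of copies for resilience

hardware_cost = usable_tb * replication * price_per_tb
print(f"${hardware_cost:,.0f}")     # ~ $49.4M -- in the ~$50M ballpark
```

    Change the replication factor or the per-TB price and the total moves a lot, which is rather the point: the tweet-sized comparison omits every assumption that matters.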

    Errr... seems like another CTO who doesn't understand the simplest aspects of his business, can't bother to understand the details of how things work or all of the above.

    1. roynu

      Re: Fun with Economics

      Please look at the Dropbox hybrid project for a large-scale storage cost comparison. Spoiler: their data center and storage hardware investments were completely offset by S3 savings in the first year.

  24. spuck

    If this is the first time the CTO has looked at his AWS bill...

    He ought to be fired. The _CTO_ is just now discovering, 12 months after the fact, what his cloud spend is?

    Thought #2: This comparison of "I spent $3MM+ on cloud hosting, but just discovered I can buy a Dell server for less than $50K" is ridiculous. I get it; it's a tweet so we shouldn't expect all the hairy details in 140 characters.

  25. dfxdeimos

    Lots of other costs

    There sure are a lot of other costs that I think will end up making this a lot less attractive. Facilities, power, cooling, and expertise in creating fault-tolerant architectures (then running those systems) are not free. This seems like a knee-jerk reaction to "BILL BIG! CFO SAD".

    Good luck to them and hopefully they are able to find the right balance. I will be anticipating next year's article, "Basecamp details 'obscene' $3.3 million bill that caused it to return to the cloud."

  26. Anonymous Coward
    Anonymous Coward

    People get very emotional about this topic on here, but the reality in terms of what the best model is falls squarely in the "it depends" category. Some workloads will stay on premises for a long time, maybe forever. Some workloads will almost always be deployed in the cloud going forward.

    Cloud costs can run out of hand if you let them. On-premises environments have a stop-loss in place in terms of cost, but also a limit in terms of scale and flexibility. Models like HPE GreenLake might go some way to bridging the gap, but they often take months to agree the contract (which will be 3 years minimum), have a significant minimum commit (80%) which isn't great when you're trying to demonstrate the value of a new workload, and they have relatively small buffers, which limits the ability to scale rapidly. And with supply chains as they are now, expanding the buffer could take months.

    Great for a stable workload, not great for experimentation when you want to start small. But for stable workloads, on premises is most likely the cheaper option.

  27. Anonymous Coward
    Anonymous Coward

    Basecamp went over budget

    Maybe they should have used project management.

    1. Anonymous Coward
      Anonymous Coward

      Re: Basecamp went over budget

      Yes, why just go over budget when you can *really* outspend yourself and enable some overpaid PM consultants to buy new cars besides.
