I can see the cloud/on-prem thing becoming a cycle, as outsourcing/insourcing has: each time the CIO changes, the new one works out they can "save" the company £ks by switching to whichever one they currently aren't doing; by the time the numbers come home to roost they'll have left for another position elsewhere, and the merry-go-round continues. Each change inevitably incurs costs, so nobody at the company ever really knows which might be cheaper.
Bill shock? The red ink of web services doesn’t come out of the blue
Simple sums can pack a punch. When the CTO of 37Signals got his $3 million cloud bill for 2022, once the red mist had cleared he sharpened his pencil to see if that was kosher. You can see more details here, but to give one part of the breakdown, he calculated that the AWS monthly compute component of the giant invoice …
COMMENTS
-
Monday 23rd January 2023 11:22 GMT Ken G
I've done the numbers, and when you plot cloud cost against infrastructure cost it looks like a rollercoaster; there isn't one stable point.
If you're a greenfield startup then cloud is best, you can fail fast or scale infinitely so long as your revenue exceeds your costs. Once your load stabilises, you can probably do better on-prem if your workload is portable (because of course you planned your cloud exit strategy before committing to a vendor) but you might not want to take on the risk.
For a large existing business, usually you already have some datacentres and IT staff to let you estimate how much physical vs cloud will cost and you can have the accountants battle the OPEX/CAPEX decision in the boardroom.
My gut feel is that physical is cheaper if you already have a large enough IT staff and good retention, but I wouldn't recommend anyone start there. Risk is the other variable. If your data centre goes down, it's all on you. If AWS or Azure goes down then it's for them to fix. You probably won't have as many failures, but each will last longer and be more damaging. Security is also something I think is better in the cloud, not inherently but because you have to follow good design practices and plan what you're doing and because your platforms will be kept current.
-
Monday 23rd January 2023 17:08 GMT Mr.Nobody
The second part of that last sentence is awfully critical
"Security is also something I think is better in the cloud, not inherently but because you have to follow good design practices and plan what you're doing and because your platforms will be kept current."
I have seen loads of setups where this is not even remotely the case, especially ones that started in the cloud.
-
Monday 23rd January 2023 17:50 GMT Nate Amsden
The last company I was at was a greenfield cloud thing. They had no app stacks, everything was brand new. Their existing technology was outsourced and that company did everything from software dev to hosting and support etc. At one point before I started the company felt they had outgrown that outsourced provider and wanted their own tech team to build their own app stack. So they hired a CTO and he built a team, and they started building the new software stack.
He hired a former manager of mine, who had hired me at the previous company; I worked with him only a couple of months but that was enough, I guess. That previous company was hosted in Amazon cloud (also greenfield). This manager saw the pitfalls of that and wanted me at the new company mainly to move them OUT of the cloud (they had yet to actually launch for production).
They launched production in Sept 2011 (I joined May 2011), after doing many weeks of their best efforts at performance/scale testing (I was not involved in any of that part). All of those results were thrown in the trash after a couple of weeks and the knobs got turned to 11 to keep up with the massive traffic. Costs skyrocketed as well, as did annoying problems with the cloud. We started ordering equipment for our small colo (2 racks, each roughly half populated initially) in early Nov 2011, installed it in mid Dec 2011, and then moved out of Amazon to those two racks in early Feb 2012 (I was a bit worried as there was a manufacturing flaw in our 10Gig QLogic NICs that was yet to be solved; it ended up not causing any impacting issues though). I repeated a similar process for their EU business, which had to be hosted in the Netherlands; I moved them out in July 2012 to an even smaller infrastructure, probably about half a rack at the time. In both cases, equipment was at a proper co-location facility, not in a server room at an office.
The project was pitched by my manager as having a 7-8 month ROI, the CTO got on board. It wasn't easy convincing the board but they went with it. Project was a huge success. I dug up the email the CTO sent back in 2012, and sent it to the company chat on the 10th anniversary last year. He said in part "[..] In day 1, it reduced the slowest (3+ sec) Commerce requests by 30%. In addition, it reduces costs by 50% and will pay for itself within the year."
I believe we saved in excess of $12M in that decade of not being hosted in cloud (especially considering the growth during those years). Meanwhile we had better performance, scalability, reliability, and security. The last/only data center failure I've experienced was in 2006 or 2007, at Fisher Plaza in Seattle. I moved the company I was at out of there quite quickly after that (they were already there when I started). Remember that cloud data centers are built to fail (a term I started using in 2011), meaning they are lower-tier facilities, which is cheaper for the provider and is a fine model at high scale; you have to have more resilient apps or be better prepared for failure vs the typical enterprise on-prem situation.
So count me as someone who disagrees, greenfield cloud is rarely the best option.
-
Monday 23rd January 2023 19:02 GMT AVee
There are middle grounds; you can still rent physical machines, for example. In that case hardware and infrastructure are not your problem. A lot also depends on what you are building, as not all software is equal.
Personally I've decided that whatever we deploy needs to run on plain Debian. For now that's on cloud virtual machines (there's still room to scale those up), but it could easily be moved to either rented or owned physical machines if needed. I don't need a huge number of nines; if disaster recovery takes an hour or two that's fine. I also don't have a big variation in load, so there is no gain from dynamic scaling. If you do need high availability, and/or you have peak loads over short periods, you might be better off with a cloud solution. But my guess would be that a lot of people overestimate the benefits of cloud solutions (and underestimate the overhead they bring).
-
Monday 23rd January 2023 21:11 GMT Nate Amsden
It's amazing to me how some folks try to justify cloud. Not long ago I saw a post saying they suggest cloud because otherwise you need several 24/7 people to take care of your facility. Even things like roof repairs and stuff. I reminded them colo has been a thing since before cloud and solves that aspect fine. All big and probably most small cloud providers leverage colo in at least some of their markets. I've been in colo for 20 years across 5 companies.
Equally amazing: I remember back in 2010 a Terremark cloud rep tried to challenge me on managing my own gear. I told them my solution cost about $800k at the time. Their solution was either $272k/mo OR about $120k/mo with a $3 million install fee. They didn't think I'd be able to manage $800k of gear. It was 2 racks of equipment. So easy. But the sales guy was confused that I could do it and not need to outsource to them or another provider.
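The payback arithmetic on those quotes is stark. A quick sketch, using only the figures quoted above (the commenter's numbers; amortisation, staff, and colo fees ignored):

```python
# Rough payback sketch using only the figures quoted above
# (the commenter's numbers, not real vendor price lists).
own_gear = 800_000            # one-off cost of 2 racks of kit
option_a_monthly = 272_000    # cloud quote, no install fee
option_b_monthly = 120_000    # cheaper monthly cloud quote...
option_b_install = 3_000_000  # ...but with a huge install fee

# Months until buying the gear outright beats option A:
months_to_beat_a = own_gear / option_a_monthly
print(round(months_to_beat_a, 1))  # ~2.9 months

# Option B's install fee alone exceeds the cost of owning the gear,
# before a single monthly payment is made:
print(option_b_install > own_gear)  # True
```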
-
Wednesday 25th January 2023 02:22 GMT Anonymous Coward
> [cloud rep] didn't think I'd be able to manage $800k of gear.
Maybe because [cloud rep] had never actually seen gear. "It's cloudy all the way down".
That's also, in part, why some big wheel execs fall for the cloud hype so readily, without considering whether it actually makes sense for the task at hand (it might). That is, some of them have no frame of reference for what a given pile of gear can do, nor what said pile's equivalent in the cloud might cost on a periodic basis.
If they come from an MBA/finance sort of background rather than engineering, they might never have really seen gear either. They're more likely to be primarily interested in shifting the dollars from capex to opex, regardless of whether the solution is fit for purpose.
-
Tuesday 24th January 2023 18:04 GMT doublelayer
The same is true of people who don't pay sufficient attention to backups. Nothing is immune to something catastrophically collapsing, and it is the job of the administrators to have a method of recovering from a situation like that. Whether that's using multiple datacenters in a cloud array or having physical servers in multiple places, it involves extra expense for the benefit of resiliency. There are many actions OVH customers could have taken which would have survived the loss of the datacenter, many of which wouldn't even require downtime. OVH's example demonstrates that the cloud goes wrong, but not that on prem is better.
-
Monday 23rd January 2023 15:04 GMT cjcox
While it varies... primary goal is reducing expensive headcount costs.
While it varies... primary goal is reducing expensive headcount costs. I think any company going into "cloud" thinking otherwise is probably not thinking very well. I'm always amazed at the number of people that struggle doing very basic systems engineering and operations, though. If that's you, then cloud may also help because, well, you stink at doing the basics. Also, as I've found, when we run "our own shop" we get an extra "9" vs. what we get out of cloud... if that's important. Is running your own IT harder? Yes and no. Why? You ultimately have more flexibility and predictability (if you know the basics) if you run your own store. Cloud can also be problematic in that to really get the most out of it, you do have to "choose" your cloud provider. Otherwise, things get so generic that you may not cost-justify the move nearly as much. Again, YMMV.
-
Monday 23rd January 2023 15:15 GMT Version 1.0
Modern workers rarely think about the pre-70's world; it's worth remembering that back then almost everyone created open-source software code to help other users get things working. And everyone could read the source and update their own work without having to panic about what might happen. CP/M and MSDOS were created and, while they earned a few people a decent payroll, neither operating system generated millions ... OK, yes, they did have millions of users, and almost everyone (not just the users) was happy.
But now Twitter is worth 41 billion dollars - where did all that money come from, and where is it going? Is everyone happy or just in debt?
-
Monday 23rd January 2023 20:30 GMT doublelayer
A lot of things were different in the 1960s, not universally for the better and most of which is not coming back.
"it's worth remembering that back then almost everyone created open-source software code to help other users get things working. And everyone could read the source and update their own work without having to panic about what might happen."
That's a bit charitable. Since people were quite frequently writing in assembly, the programs people received could be more easily edited. You can theoretically modify the binaries we receive today, but it's harder because they're compiled from more complex languages and tend to be much bigger. A lot of software wasn't open source simply in the sense that it wasn't published for anyone's use, but was kept restricted to the organization that wrote it or to people who bought it. Those people would have access to the code, but it wasn't like modern open source, where anyone who wants a copy can get one for free in a few minutes, with clearly defined rights to copy, modify, and distribute.
"CP/M and MSDOS were created and while they made a few people get a decent payroll, neither operating system generated millions"
Millions of what? Of dollars or pounds, yes they did. Digital Research had revenue of $45M in 1983. We all know how profitable Microsoft was. Some of that came from other software like compilers for DR or productivity software from MS, but a lot of that relied on the operating systems that software ran on. Millions of computers? I'm not sure, but I think DOS did have millions of installs eventually. CP/M came along too early to get mass adoption because computers weren't mainstream in the 1974-1983 period.
"But now Twitter is worth 41 billion dollars"
No, somebody paid $44B for it. That doesn't mean it was worth that much, and now that that person has spent a few months smashing it, it's worth less than it used to be.
"where did all that money come from,"
From the people who thought that Musk-owned companies are a lot better than anyone else making similar products and thus valued them very highly, from cryptocurrency speculation, and banks who are now regretting that they put up cash when a billionaire decided to make an impulse buy. But if you're meaning companies other than Twitter, such as modern operating systems, it came from the fact that billions of people are using computers today who had never thought of doing so in the 1980s.
"and where is it going? Is everyone happy or just in debt?"
Everyone isn't happy, but I'm not sure that's ever an option. If we're specifically speaking of Twitter, then some people don't use it (myself included), so my happiness level hasn't changed as it's been damaged. If we're talking large corporations in general, we had large corporations before; they were just different names. It causes problems now for the same reason that it did before, and we'll have to deal with that, but it's not always on the top of my list of problems I need to solve.
-
Tuesday 24th January 2023 09:50 GMT Paul Cooper
In the early 1980s I remember attending a User group meeting at a well-known University in the Fens. At this meeting, one of the issues was that one of the colleges wanted to install a bought-in accounting package on the mainframe - the mainframe (an IBM370 with a custom front end to the OS at that time) was pretty much the only game in town. A representative of the company gave a presentation. But I remember (I was part of it!) that the overwhelming attitude of those present was "Why on earth buy software when you can write it yourself?" As I'd just come from an industrial company, where I'd been tasked with writing an accounts package (it never happened; I left before it got serious, but not for that reason), the whole attitude at the time was "You want the computer to do XYZ? OK, start coding!". And I wrote a suite of data handling software to handle geographic data, and thought nothing of it - it was all in a day's work.
-
-
-
Tuesday 24th January 2023 06:50 GMT deadlockvictim
Cost of Personnel
El Reg» he calculated that the AWS monthly compute component of the giant invoice was £63k a month. Buying equivalent hardware from Dell worked out at just $1.3k a month.
This is just hardware though and the costs of the sysadmins, DBAs, site-reliability engineers were not added in.
To take a convenient yearly salary of $96K, that translates to $8K per month. If the company needed 8 more people (on that salary) to maintain on-prem servers, they would be at roughly the same figure as the AWS bill.
This also has to be taken into consideration. I haven't even mentioned electricity, spare parts, replacement drives, training courses and other additional costs that go into maintaining on-premises servers.
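That break-even sum can be sketched quickly; the monthly bills are the ones quoted from the article, and the salary is this comment's convenient round number, not a real payroll figure:

```python
# Break-even sketch for the comment above. The bills are the article's
# quoted figures; the salary is an illustrative round number.
aws_monthly = 63_000        # quoted cloud compute bill per month
hardware_monthly = 1_300    # quoted equivalent Dell hardware per month
salary_yearly = 96_000      # convenient per-head salary

salary_monthly = salary_yearly / 12           # 8_000 per month
gap = aws_monthly - hardware_monthly          # what on-prem has to beat
extra_heads_at_break_even = gap / salary_monthly

print(round(extra_heads_at_break_even, 1))  # ~7.7 extra staff
```

In other words, roughly the eight extra people mentioned above, before electricity, spares, and training are counted.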
Personally, I prefer the on-premises model on the grounds that more people have more varied jobs and companies have more control over their data, I just felt this needed pointing out.
-
Tuesday 24th January 2023 08:23 GMT Kauppe
> 37Signals insists; a large team spent a lot of time doing the smartest deals they could.
I hope this large team was doing other work as well... as they aren't cost-free are they.
Often these problems with cloud come down to "lifting and shifting" an on-prem (or colo) solution directly into apparently corresponding/equivalent AWS services. But that's not taking advantage of the modern cloud (although it's all you could do 10 to 15 years ago) in all its automated, serverless (sic) glory. For me, a bit of a warning sign is when someone's cloud revolves around some unwieldy k8s setup, for example.
Cloud isn't going to be for everyone I suppose, but when it's done right it's amazing value for money. You've got to have quite a different mindset about it. Not "what if we put this server on EC2, can we save some £?", but rather, "what is the business need that this server meets, is the process what we need today, and can we model a better process around microservices?"
A great advantage of cloud-done-well is that you can reshape your systems around changing business needs (including unexpected ones) much faster than investing in tin and buildings. That is, so long as you didn't just build a copy of your racks in k8s and stick that on AWS.
To take a slight issue with this article's main analogy, you don't really need to be a brilliant forecaster, because taking advantage of cloud means you can (or should be able to) respond to changing weather very rapidly.
-
Tuesday 24th January 2023 09:50 GMT Anonymous Coward
Even buying kit for your own datacentre isn't straightforward these days. Vendors want to take you down the aaS route which means waiting months for a quote, then debating T's and C's for months, signing up to a huge minimum commit up front, having to take their highest levels of maintenance and agreeing to do all that for years. You can see why new companies don't tend to be "born in the datacentre". The desire from some company bean counters to push things opex at all costs has a lot to answer for.
-
Tuesday 24th January 2023 10:44 GMT ColinPa
Depends how you cost it..
I got involved with a customer who had two vendors offering solutions. The competition said of our company's offering: you'll need this huge box and all this (expensive) software, and look what it will cost once power etc. is included, whereas our software only needs a small box - look at the price. We replied: most of our software they listed is not needed for what the customer asked for; you only need a box a quarter of that size to meet the requirements; and have they included power and aircon in their bid? (No, they hadn't.)
We were each asked to demo their scenario on our hardware.
Our lead person had meetings with the customer to understand the requirements.
"1000 transactions a second" turned out to be "1000 business transactions a second". Each business transaction is composed of 10 transactions.
"A database of this size" turned into "ah, we had not realised we wanted to keep history as well" - so the database could be 10 times the size.
In the end the customer went with us because we helped them understand their requirements (we wanted the best solution for them - preferably our solution), whereas with the opposition you had to purchase an unknown number of "optional" things, such as support, with tiered costs.
-
Tuesday 24th January 2023 10:50 GMT gitignore
Dynamic load
Where cloud excels is if you have a dynamic or cyclic load - so you can spin up 500 nodes to process a sudden influx of work, then throw all but two of them away for the remainder of the day/month/year. Think the Wimbledon or Superbowl web servers, for example. Where it sucks (financially, at least) is when you have a constant baseload of work, where you end up paying a flexibility premium for a service that is running at constant load 24/7.
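A toy cost model makes the point; every price here is a made-up assumption purely for illustration, not a real cloud tariff:

```python
# Toy model of the trade-off above: cloud wins on spiky load,
# owned kit wins on constant baseload. All prices are invented.
ON_DEMAND_HOURLY = 0.40   # assumed cloud price per node-hour
OWNED_MONTHLY = 120.0     # assumed amortised cost per owned node per month
HOURS_PER_MONTH = 730

def cloud_cost(node_hours):
    # Cloud: pay only for the node-hours actually consumed.
    return node_hours * ON_DEMAND_HOURLY

def owned_cost(peak_nodes):
    # Owned kit: you must provision for the peak, used or not.
    return peak_nodes * OWNED_MONTHLY

# Spiky: 500 nodes for one 8-hour event, 2 nodes the rest of the month.
spiky_hours = 500 * 8 + 2 * (HOURS_PER_MONTH - 8)
# Baseload: 50 nodes, flat, all month.
base_hours = 50 * HOURS_PER_MONTH

print(cloud_cost(spiky_hours), owned_cost(500))  # cloud is far cheaper
print(cloud_cost(base_hours), owned_cost(50))    # owned kit is far cheaper
```

Under these assumptions the spiky workload costs a fraction in the cloud of what provisioning 500 owned nodes would, while the flat 50-node baseload costs more than double in the cloud - the "flexibility premium" above.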
-
Tuesday 24th January 2023 12:02 GMT Anonymous Coward
Re: Dynamic load
Agreed. Cloud really comes into its own where you have a new workload / are experimenting with a new offering and don't have a handle on the sizing upfront (or even certainty that it will become permanent), or when demand is spiky / uneven. I know all the traditional vendor aaS models offer a buffer, but it's typically sized to accommodate a bit of gradual growth rather than something more lumpy. If something lumpy does consume the buffer then you're left running close to capacity until they can replenish it (good luck with that). And if it's something new, then the big upfront commit with aaS is a big blocker.
-
-
Tuesday 24th January 2023 17:06 GMT Anonymous Coward
This is why you have firms like the Duckbill Group. You really need to know what you are doing to avoid unnecessary costs. Also the more cloud native you are the more cost efficient you are. Companies that do lift and shift and never rearchitect will always waste more money in the long run.
Not knowing the details here they should engage their account SAs to help improve the efficiency of their cloud spend.
-
Tuesday 24th January 2023 17:12 GMT Jadith
Like just about everything else in IT...
YMMV
Honestly, I suspect most businesses will end up in hybrid models, having a bit of both. The difficulty is not really figuring out which one is best for the business, but rather what works best where.
The best example is test environments. These often do not need to be up 24/7 and are quite ephemeral in nature, meaning keeping unused on-prem kit makes less sense than doing testing in the cloud.
Alternatively, maybe you want your website hosted in the cloud for flexibility/scalability/availability, but you still host your database on prem 'cause the price of cloud hosted databases can be outrageous and difficult to plan for.
Truly, any IT professional professing all-or-nothing on either option has something in mind other than the cost/efficiency/effectiveness that comes from using all available tools for the best result.