There is a reason ...
... that I've been making very good money pulling companies back out of the quagmire called "the cloud" for these last ten years or so.
IMO, "the cloud" is a marketing meme that is long past its sell-by date.
Silicon Valley-based venture capital firm Andreessen Horowitz has posted a paper suggesting that "If you're operating at scale, the cost of cloud can at least double your infrastructure bill." Partners Sarah Wang and Martin Casado researched the matter, and their argument rests on an uncomfortable truth about cloud computing: that buying …
As with everything, "the cloud" has its use cases. And as with everything that becomes the new shiny, marketing and PHBs with no clue see it shoved into every location whether it makes sense or not.
For other examples, see also Composites, 3D Printing, IoT, etc, etc...
I think the study sums it up nicely: "you’re crazy if you don’t start in the cloud; you're crazy if you stay on it." The use case depends upon the age and size of the company in most cases.
At least until you get to be a large corporation and the beancounters take over.
... marketing meme that is long past its sell-by date.
Quite so.
+1
-------
cloud
/klaʊd/
noun: cloud; plural noun: clouds
1. a visible grey or white mass of condensed watery vapour floating in the atmosphere, typically high above the general level of the ground.
-------
The day IT lost sight of what the word really meant was the day common sense started to slide down the hill and beancounters took over.
Being beancounters, they ignore the fact that the ownership* of their company's data is inversely proportional to the amount of cloud services they buy in order to save money.
* i.e. the actual cost of getting your data back home once you realise that cloud services were more expensive than you originally thought.
Because no one ever made a case for estimating just how much you'd have to pay to get back out of the cloud.
O.
Because there aren't enough turtles to keep going all the way down?
More seriously, it's a question of scale. If building and managing a single Server Room costs $X, building and managing a Bit Barn containing the equivalent of 100,000 Server Rooms does not cost 100,000x $X. Probably closer to 10,000x $X.
Therefore, it can be cheaper to hire one Server Room from the Bit Barn than to build your own. BUT, as with all things, the Bit Barn is there to make money, so you won't hire your server room for 10% of what it would cost to build your own; the price will normally be set so that it is just cheaper than building your own, but not so much cheaper that it costs the Bit Barn profit. Hell, often the price is about the same or even more expensive, but not so much so that you're tempted to go and build your own...
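A back-of-the-envelope sketch of that pricing logic, with every figure invented purely for illustration:

```python
# Back-of-the-envelope sketch of the Bit Barn pricing argument above.
# Every figure here is invented for illustration, not real market data.

DIY_COST_PER_ROOM = 100_000                    # $X: build and run one Server Room yourself
ROOMS_IN_BIT_BARN = 100_000                    # rooms' worth of capacity in one Bit Barn
BARN_TOTAL_COST = 10_000 * DIY_COST_PER_ROOM   # ~10,000x $X, not 100,000x, thanks to scale

barn_cost_per_room = BARN_TOTAL_COST // ROOMS_IN_BIT_BARN   # 10% of your DIY cost

# The barn doesn't pass that saving on: it prices just under your DIY cost.
list_price_per_room = DIY_COST_PER_ROOM * 9 // 10

margin = list_price_per_room - barn_cost_per_room
print(f"Barn's own cost per room: ${barn_cost_per_room:,}")
print(f"Price charged to you:     ${list_price_per_room:,}")
print(f"Barn's margin per room:   ${margin:,}")
```

The point the arithmetic makes: the barn's economy of scale is real, but the price is anchored to *your* alternative, not to the barn's costs.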
The cloud company secret: Google buys cloud computing from Microsoft. Microsoft buys cloud computing from Amazon. Amazon buys cloud computing from Google. (Oracle buys cloud computing from the legal department.)
It is a perfect circle. A unity of forces. Everything else is a marketing ploy.
It comes down to scale
If you are a small operator, then you will find it expensive to employ the sorts of skills that are needed to provide a reliable and secure setup. In principle, the big cloud operators can buy all the skills needed - and top notch skills - in all areas and share those skills across all of their customers.
So as a small operator, it makes a lot of sense to use cloud - you will be using systems backed up by engineering and technical skills you can only dream about having in house. If you are a large operator, then you get the economies of scale that allow you to employ your own skilled people. Somewhere in between, I guess you're stuffed in that zone where cloud is expensive, but then so is getting in the right skills needed.
Sure, but then they are my engineers, tasked to take care of my systems and not thousands of others'...
Now, an SMB may not have the resources to pay for good engineers, but a company large enough should hire them while reducing the number of beancounters.
Yes that's the point.
If you have consistent needs for a fixed amount of infrastructure you build your own.
If you are growing quickly or have spiky demand, you pay proportionally more for flexible cloud services.
It's the reason your company probably doesn't run its own airline or fleet of delivery trucks.
The fact that workloads can be much cheaper on-premises or with smaller hosting companies is not new, though, and the continuing remarkable growth of the biggest public cloud providers implies that this prediction is not a safe one.
And once they have your data, it's rather expensive to get it back out again. One of the biggest vendor lock-ins of all time!
Vendor lock-in icon -->
Another interesting analysis of the cost.
"The exact savings obviously varies company, but several experts we spoke to converged on this “formula”: Repatriation results in one-third to one-half the cost of running equivalent workloads in the cloud. Furthermore, a director of engineering at a large consumer internet company found that public cloud list prices can be 10 to 12x the cost of running one’s own data centers."
https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap-cloud-lifecycle-scale-growth-repatriation-optimization/
:-|
If a company is NOT annually re-evaluating its networking costs, then it is losing money. There are aspects that are better left in-house and others better moved to the cloud. When loads swing wildly, such as on one's customer interface, the argument for running it in the cloud makes sense.
If it's corporate finances or engineering work, as examples, with fairly well-known loads and relatively small footprints compared to the customer interface, that should be in-house. Also, from a security point of view, having one's family jewels not under one's control has always fallen into my "Bad" box.
The "cloud" has its use cases - the main plus, as others have pointed out, is if you have wildly fluctuating demands for processing / storage. The archetypical example is a public-facing web site that usually gets a low level of traffic, but for one or two days a year, this goes up by orders of magnitude. Being able to scale this on demand, without having to invest in a lot of redundant infrastructure to cope with peak demand, means that most of the time you'd just be paying for the low usage. Of course, the cloud providers will have a formula for how much they can get away with charging you before this is no longer cost-effective, and will probably charge you a high percentage of that level, way above their true costs, because $capitalism.
The other common use case I can think of is software testing. When you are going through a dev cycle, it is often very useful to be able to spin up a number of VMs in parallel when you need them, and turn them off the rest of the time. Dedicating your own hardware to this would basically mean having a server room full of unpowered racks a lot of the time.
For anything where you have no reason to put your data on "someone else's computer", the simple answer is "don't". Things with a predictable, steady load, such as accounting, POS, or HR systems - and to be honest, most business software - have no business being under someone else's control. After all, even data centres burn down sometimes.
Just make sure that your in-house solution has appropriate redundancy and backups, and you employ experienced staff who know what they are doing to maintain it. If you're on a cost-savings spree, short-sightedness can cost you dearly. "Yes, we do need that DR site".
>The other common use case I can think of is software testing. When you are going through a dev cycle, it is often very useful to be able to spin up a number of VMs in parallel when you need them, and turn them off the rest of the time.
Sadly, my experience of these things is that there are usually so many layers of company bureaucracy in deploying / getting access to VMs on a cloud provider that the "when you need them" and the "when you actually get them" can be significantly distant times.
If you're lucky you might get half a CPU with 1GB of RAM a week on Sunday at 10am - well, 10am in the time zone of the cloud administrator. But only if there's a full moon, and only for 30 mins before you need to raise another ticket reminding them that your request to have it for 12 hours was because you specifically stated it was going to run a 12-hour test, and no, giving me 2 machines won't make it run in 6 hours.
"The archetypical example is a public-facing web site that usually gets a low level of traffic, but for one or two days a year, this goes up by orders of magnitude."
I hope that isn't the same one or two days a year every other public-facing website gets a traffic spike. Because if it is, your cloud is more likely to fall over than to scale.
> The archetypical example is a public-facing web site that usually gets a low level of traffic, but for one or two days a year, this goes up by orders of magnitude.
The sensible option in that case would be to have a combined hybrid (private/public) setup.
That 'baseload' level of capacity can be provided by your own equipment (on-prem or in a 3rd-party datacenter where you lease space for your own equipment), with spike/overflow being load-balanced to a commercial cloud provider. The best of both worlds - cheaper 'own' infrastructure for baseload, but spike capacity from cloud providers so you don't waste money on your own infrastructure that is idle most of the time.
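A minimal sketch of that overflow-routing decision; the capacity and demand figures are hypothetical:

```python
# Minimal sketch of baseload-plus-overflow ("cloud bursting") routing.
# The on-prem capacity and demand figures are hypothetical.

ON_PREM_CAPACITY = 1_000   # requests/sec your own kit can serve

def split_load(demand: int) -> tuple[int, int]:
    """Serve baseload on your own kit; burst anything above capacity to the cloud."""
    on_prem = min(demand, ON_PREM_CAPACITY)
    cloud = max(0, demand - ON_PREM_CAPACITY)
    return on_prem, cloud

# A couple of quiet days vs. the one-or-two-days-a-year spike:
for demand in (400, 950, 8_000):
    on_prem, cloud = split_load(demand)
    print(f"demand={demand:>5}/s: on-prem={on_prem}, cloud burst={cloud}")
```

On quiet days the cloud share is zero, so you pay the providers only for the spike capacity you actually use.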
You have to do the cost/benefit analysis on that. A hybrid solution brings with it complexity. Complexity brings points of failure and potentially additional vulnerabilities which need to be analysed and managed. Sometimes the simpler solution, although more costly, is more resilient (and sometimes it isn't). I wouldn't like to say, in all honesty, which is the better solution, and having this as a simple example of a use case for "cloud" is necessarily simplifying things somewhat.
I don't think so. It's not private, as it's the thing all your customers download and run on their equipment. It doesn't contain any sensitive information, and if it does, that's the biggest of your problems. More than that, you have the code for that elsewhere, so it doesn't matter much where you run that code because you can stand it up elsewhere in a disaster.
> ones family jewels not under ones control has always fallen into my "Bad" box.
A bad comparison from my point of view. I have kept key valuables and documents in a bank safety deposit box for decades. In that time I have had two house break-ins but the bank's vault has remained secure.
One of the big benefits of clouds is the ability to bypass processes and approvals you need with physical tin.
Large companies require any kind of hardware spend to be capex, which, even if you're spending a penny, usually requires approval and sign-off from finance and a host of other departments. With the cloud, most managers can just increase their opex footprint without any of the hassle associated with capex. This becomes really handy in companies which have reached the capacity of their data centre - you don't want to fight for, and be billed for, an entire new hosting location just because you want to add one more server.
So yes, on-prem is cheaper but only attractive to managers in a large company if that company is excellent at capacity planning and has streamlined approval processes that are reasonable.
> So yes, on-prem is cheaper but only attractive to managers in a large company if that company is excellent at capacity planning and has streamlined approval processes that are reasonable.
There are, though, those companies that choose to have the worst of both worlds.
You get the "flexibility" of cloud, but the purse strings are being held extremely tight, and you need to go through rounds of approval for any new instance/cost.
First, as mentioned, renting servers is a lot like renting a car. There are a lot of maintenance expenses that the owning company has to pay, and unless you factor those in, you're not comparing like with like.
Early on, the expense of even one "IT guy" to manage a single server is exorbitant. Even for a book seller. You pay someone else to worry about those things.
But you have success and the company grows. You're no longer a fly-by-night organization; people expect your website to be up. All the time. That's the next thing. Reliability isn't magic, but it isn't cheap (at all) until you get significant scale. If your company needs 4 nines, Google & Amazon have engineers dedicated to making that happen for you. You cannot do that yourself without spreading your servers across a bunch of physical locations. Do you think you can hire the engineers to make it happen yourself? Are you ready to pay the $160k salary + serious bonuses + serious benefits + serious stock it will cost?
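A quick downtime-budget calculation shows why 4 nines is such a high bar - it leaves you less than an hour of total outage per year:

```python
# What "N nines" of availability allows in downtime per year.
# Plain arithmetic, no assumptions beyond a 365-day year.

MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

for label, availability in [("two nines", 0.99), ("three nines", 0.999),
                            ("four nines", 0.9999), ("five nines", 0.99999)]:
    budget = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label:>11} ({availability:.5f}): {budget:>8,.1f} min/yr of downtime")
```

Four nines works out to roughly 52 minutes a year, including every patch window, hardware failure, and power blip - across every site you run.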
That 30% profit margin reflects the fact that the barrier to entry for reliability is intrinsically quite high. If you don't have the money, and you need the reliability, then make sure your business can pay the bill.
As some have mentioned, hybrid solutions are going to make sense for many companies. Especially if you have a steady stream of work that is not time-critical.
--
But the responses to the paper are just as bad. "Engineers are expensive" - are they cheaper for Amazon & Google? The customer always pays the costs, or the supplier goes out of business.
The engineering cost is a key factor, but if your business has those folks for other reasons then getting them to spend a small amount of their time on the feeding and watering of your servers makes sense.
But as you say, for small non-tech businesses, or non-core stuff, it can be well worth the cost for a managed service (e.g. non-classified email, accounting package, etc).
The article is dead on about the cost differences at scale for private deployments. Keep in mind that systems like OpenStack have continued to improve to the point where companies already sell them "on demand" as well.
What that article looks like to me is a call to arms for all of the middle-tier hosting, data centre, and private cloud providers.
They don't need the massive margins of the mega-clouds, but many can get to a scale that maximises their engineering investment in their cloud systems. Maybe $30M/yr in revenue from a set of companies would more than pay for a talented set of cloud engineers plus hardware/DC/etc.
"Over the last few years, alternatives to public cloud infrastructures have evolved significantly and can be built, deployed, and managed entirely via operating expenses (OpEx) instead of capital expenditures."
You can build your own data centre without CapEx? OK, I've missed something somewhere.
The analysis also doesn't seem to look at the huge pain of tech refresh. And every time we've done one you finally come to a server that nobody remembers but everybody is too scared to turn off.
The policies in Cloud are a powerful tool.
Beancounter: This Cloud Subscription. They've hiked the price. Do we need it?
Manager: Not sure. Better carry on with it.
Fast forward 12 months.
Beancounter: This Cloud Subscription. They've hiked the price. Do we need it?
Manager: Not sure. Better carry on with it.
Cloud: It's the new server-that-nobody-remembers-but-everybody-is-too-scared-to-turn-off. The server-that-nobody-remembers-but-everybody-is-too-scared-to-turn-off just needs to be fed electricity, whereas that Cloud subscription could be burning serious £££.
Casado and Wang are talking about startups. A particular sort of startup -- an Andreessen Horowitz client, with a lot of venture capital cash and with lots of compute and storage at the core of what they do. In that case, the $1k/sq.ft to pay for a datacentre build can be funded from opex savings: they are already paying a cloud vendor that much.
Quinn's counterpoint is really about venture capital too. His argument is that first-to-market is everything for these types of startups -- that promise of future monopoly profits is *why* Andreessen Horowitz is giving the startup those truckloads of cash. Everything is secondary to building market share, even if that's paying too much for compute, because stopping to re-engineer compute is worse. Quinn's other argument is also very Silicon Valley -- the shortage of "good" people, which really means engineering staff across the current Silicon Valley toolset who live on the US west coast.
This is such a peculiar environment that lessons for those of us outside of the hothouse aren't that applicable.
For a start a more typical business might be more interested in cloud applications rather than cloud compute and storage. There's certainly cost savings in cloud email and specialist applications such as accounting, HR systems, payroll, and client relations.
Secondly, us mortals have a range of choices between the extremes of a DIY datacenter build and cloud. There's a lot to be said for the midpoint of hiring racks in a datacenter, and then arranging two diverse dark fibre services to them. There's also a lot to be said for taking the lessons of the cloud, such as easy-to-provision VMs and containers, and making that available via corporate infrastructure -- in other words, OpenStack.
The reality is that it takes fewer people to run cloud infrastructure than on-prem, and that alone will blow away any on-prem cost savings. I've heard this argument a lot in the last 10 years from a variety of Series B startup people, but it never includes the TCO, just capex vs cloud opex.
My guess is that A16Z will shortly announce a 9 figure investment in an 'on prem cloud' startup.....
Depending on the definition of 'on prem cloud', (if for example it is supplied by a cloud company) then there could be a few big downsides to it (1) if there's a problem with it you can't simply pull it apart to recover the data on it (2) presumably you are responsible for anything that might happen to the kit whilst it is in your jurisdiction (e.g., a fire, flood or theft) (3) if your connection to the outside world dies, do you have access to your on-prem cloud data? Or does it need to phone home to check you are entitled to prod it for data?
(Not my downvote btw)