I won't be buying one
until someone makes one that doesn't require a fondleslab in the dashboard to turn on the heated seats, or anything else for that matter.
86 publicly visible posts • joined 4 May 2015
This is distressing on so many levels.
I feel for the people still at IBM who have to suffer through these shenanigans so that C-suite people can justify their existence. Which of course is only about making money, and if squeezing 1% more out of payroll costs can be achieved through a questionable scheme, they are all over it.
I also feel for all the people who work at companies with equally suspect leadership, who will see IBM doing this and fall over themselves saying, "Us too!" It creates a wonderful excuse when such a large tech employer sets an example for them. A vicious cycle.
Hear, hear. Our colo has passed on double-digit power increases in the last year, but it doesn't change the reality that we save oodles of money running our own environments vs public cloud.
I recently got a quote from our used hardware reseller, and the same server we bought last year is about 50% less costly this year. The gap just keeps getting wider.
I'd love to see a push for public companies to disclose their technology budgets, if only to show the wildly varying degrees of spend between different industries and between competitors within industries.
I am sure we'd see all sorts of places where the bean counters blew a ton of money so they could make idiotic statements like "cloud-only enterprise", as if that somehow makes them better than those that aren't? WTF? I'd be more inclined not to invest in companies that make this sort of nonsense statement.
This is a well-reasoned answer to a very simple question.
If a company can use the ability to grow 10x overnight, then public cloud is a great option. Most companies in the world don't fit this description. To the point of the article, a mid-sized SaaS provider (I work at one) with a relatively stable workload (we have one) would pay far more using public cloud than on-prem gear. We are a fine example: our total cost per workload is about 1/5 on prem. We have some offerings that are in the cloud, and after seeing our eye-watering cloud bills, we are looking at moving the most obvious ones on prem.
We already have an on-prem environment we have to maintain, so all of the firmware upgrades etc. mentioned are already going to be managed. I can say the same about needing devops/SRE people who understand and have experience with k8s, Docker, or all the esoteric services offered by AWS/Azure/Google that are not simple to understand; those employees are paid very well, and when they leave they are hard to replace.
One thing mid-sized and large companies are noticing is that the ability to just spin up machines in public cloud without any sort of budget oversight blows up budgets. There is no one minding the till like there is with on-prem gear. And while yes, getting through a requisition and the budget process can be slow and painful at some companies, if we have a customer contract that requires more hardware to fulfil, rest assured the bean counters will move heaven and earth to get the gear to get the beans. Good planning would also allow for slack in storage and compute clusters to pick up an unexpected load.
And yes, systems can be turned off in the public cloud and the bill for them stops, but it doesn't stop for storage, and that's where all the money is in most companies' AWS bills.
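To make that concrete, here's a toy model of a monthly bill — every rate in it is an invented placeholder, not a real AWS price — showing how the storage line item keeps accruing after the instances are switched off:

```python
# Toy cost model: all rates here are invented placeholders, not real cloud prices.
def monthly_bill(instances_running, instance_hourly, storage_gb, storage_gb_month):
    """Return (compute, storage) cost for one ~730-hour month."""
    compute = instances_running * instance_hourly * 730
    storage = storage_gb * storage_gb_month  # accrues whether anything is running or not
    return compute, storage

# Turning everything off zeroes the compute line, but storage keeps billing.
print(monthly_bill(20, 0.10, 50_000, 0.08))  # (1460.0, 4000.0)
print(monthly_bill(0, 0.10, 50_000, 0.08))   # (0.0, 4000.0)
```

The second call is the "we turned it all off" scenario: the compute charge vanishes, the storage charge doesn't move.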
We have all the things in your list, with one exception: we use colo environments, which take care of the HVAC, power, physical security, and compliance for all of those things.
We are not a mom-and-pop shop, but we are also not a large business by any means. Redundant networking, VMware software, storage, compute, etc, etc, etc is about 1/5 the cost of doing it at AWS. It's not even close.
We have some stuff running in AWS, and its instance-to-employee ratio is far smaller than that of the team that handles the traditional stuff, and the devops/SRE employees are paid very, very well.
Yes, some of the hardware vendors we use on prem keep telling us about their lease options because all the bean counters want opex, but we don't get far into the costing exercise before it's clear that leasing the hardware would cost far more than owning it.
We are pulling stuff back out of the cloud and putting it on prem after finally doing a real apples-to-apples cost analysis of our systems. It's not even close.
And yes, we keep slack in our compute and storage environments, and it's enough that if we see very fast growth we can buy more servers and storage and get them up and running in less than a month.
Yes, but the profit on selling diapers vs the profit on selling software is pretty significant. The revenue number just means they sell a lot of volume in a low margin business.
AWS, though, has an enormous profit margin, because all these PHBs and devs-turned-PHBs are convinced it's the only option going forward. I mean, all their friends are doing it, so it must be right.
"Security is also something I think is better in the cloud, not inherently but because you have to follow good design practices and plan what you're doing and because your platforms will be kept current."
I have seen loads of setups where this is not even remotely the case, especially ones that started in the cloud.
Ten years ago the CEO/owner kept telling our VP of operations that we had to be in the cloud, even though we offer nothing but SaaS applications to customers (we were cloud before it was cloud).
We were marched into creating a DR solution in the cloud that everyone in the know said wouldn't work and would cost more. It doesn't work, and it has cost 5x what the functional on-prem solutions on other continents cost.
The CEO/owner and the VP are both gone. We are still paying for a non-functioning solution because... ?
Still peanuts compared to public cloud costs.
I work at a holding company with several SaaS applications that have been acquired over the last decade or so. Some are of traditional design hosted in a colo (three-tier web/app/db), some are shiny k8s/RDS implementations in the public cloud.
The costs for the systems in Public Cloud are at a minimum 5x what the same systems cost in the colo. The colo costs are one of the smallest spends in the colo equation.
Yes, we still need experts in storage, networking, compute, OS, etc. But these same individuals are put to work on the cloud stuff as well. No big surprise that the devs who put things in AWS don't think much about backups, DR, or what's really required for HA. Even if everything moved to the cloud, the same people would have jobs doing work in the cloud instead. Maybe a few would leave because it's not their skillset and they would do better elsewhere, and then you need to find new, cloud-familiar people to work on things, and they are not cheap.
Last and most importantly, from a cloud-vs-colo cost perspective, cloud costs generally rise in a linear fashion: add a new customer that needs more compute and storage, and the bill rises proportionally. There is a greater economy of scale to be had in the colo. There is generally more slack in the compute and storage systems there, so the addition could cost nothing. Even adding a few servers and another shelf isn't going to break the bank when cloud is 5x more overall.
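A minimal sketch of that cost curve, with invented numbers: cloud spend climbs with every workload, while colo spend only steps up once a server's worth of slack is exhausted.

```python
import math

# All figures are illustrative assumptions, not quotes from any vendor.
def cloud_cost(workloads, per_workload=500):
    """Cloud: roughly linear -- each workload adds its own monthly bill."""
    return workloads * per_workload

def colo_cost(workloads, per_server=50, server_monthly=1200, base=4000):
    """Colo: step function -- a fixed base plus however many servers you need."""
    servers = math.ceil(workloads / per_server)
    return base + servers * server_monthly

for n in (50, 200, 1000):
    print(n, cloud_cost(n), colo_cost(n))
# 50   25000   5200
# 200  100000  8800
# 1000 500000  28000  -- the gap keeps widening with scale
```

The exact multiple depends entirely on the assumed rates, but the shape of the two curves is the point: one is linear, the other is a step function over mostly already-paid-for capacity.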
We are looking at more ways to move things out of cloud and back into colo. It just costs far less, and we already have the people.
As to the notion that recreating the AWS/Azure AZ model in a colo would cost a fortune: the colo model and the public cloud model are not the same. The colo is designed so that nothing runs on hardware that isn't redundant, or on a cluster that cannot survive multiple host failures. There should not be site downtime at a colo. They all generally have blended internet connections as well.
AWS will tell you straight up that an app should be designed for multi-AZ failover. #1, this sounds easier than it is. #2, if a whole AZ goes down and all these apps try to start up in another AZ, it will be a small disaster: storage in the other AZs will be under tremendous strain, and compute may be unavailable. This has already happened several times in the past decade with AWS, and everyone seems to forget about it. Heaven forfend one of these AZs is lost for several weeks, or forever.
In a colo environment, we have a DR site hundreds of miles away from the production sites. Yes, it would take a bit to get it up and running, but it's there if we need it. The compute and storage are all dedicated to our company. We take our old production hardware and set it up as DR. It costs nothing, and since it isn't powered on, we don't have a bill from the colo company for electricity.
We have clusters of systems with headroom in both cpu and memory. If memory hits 80% across the cluster, we start the server acquisition process. It does not take long to get a server to the colo, racked and added to the cluster. The longest part of the process is the approval from the bean counters.
The hypervisor we use has memory compression and deduplication built in. Our storage array also has deduplication and compression built in. Some app servers deduplicate at rates of 90% when it comes to storage. If we turn them off, we pay nothing. Our storage environment also provides SSD-level performance to every VM. Go look at the cost of a single AWS EBS volume with 20k IOPS. Now multiply that by 10s or 100s.
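As a back-of-envelope version of that exercise — the per-GB and per-IOPS rates below are assumptions roughly in the ballpark of provisioned-IOPS EBS billing, so check AWS's current price list before relying on them:

```python
# Assumed placeholder rates, loosely modelled on provisioned-IOPS EBS billing
# (io1/io2-style: a per GB-month charge plus a per provisioned IOPS-month charge).
def ebs_monthly(size_gb, iops, gb_rate=0.125, iops_rate=0.065):
    """Estimated monthly cost of one provisioned-IOPS volume."""
    return size_gb * gb_rate + iops * iops_rate

one = ebs_monthly(size_gb=1000, iops=20_000)  # 125 + 1300 = 1425 a month for ONE volume
print(one, 100 * one)  # the fleet-of-100 figure gets eye-watering fast
```

Note that with this billing shape the IOPS charge, not the capacity charge, dominates: the 20k IOPS cost roughly ten times the terabyte they sit on.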
And yes, we have some systems that are busier at different times of the year than others, but we are always constrained by memory, not CPU or storage resources. We keep 20% free at all times, so even if we need to clone a bunch of VMs to keep up with demand, the headroom is already built in and no additional cost is incurred.
All of this is available at 1/5 the cost of doing same in public cloud. The only hidden costs not included in that figure are the staff to support it. We also have cloud only apps in our company, and while they don't have infrastructure staff, they have SREs which is just the new term for systems engineering, and they don't work for less money.
As to security, your statement is just plain wrong. The security at public cloud vendors is only as good as their staff, and they are also a much bigger target. I also don't believe that if one of the major cloud vendors had a serious breach, the public would find out about it. Of course they spend more money than we do on security, but that does not translate into our environment being less secure.
I get that cloud is a good choice for some apps and some environments, but the nut of this article is that the magic-bean notion that cloud would be cheaper in any way is just absurd. The absurdity is that it isn't even close. Devs may like it better, but for some reason that never translated into asking why devs can't have an in-house environment that would be as easy to deploy to. The bean counters where I work are asking that question now, as we spend 5x more on cloud instances than we do on prem.
My other favourite answer to the DR question for cloud apps is "If AWS is down, our customers won't care because so much else will be down."
Or if you ask the smoking hole question, the answer is, "If northern Virginia is a smoking hole, our customers will have much bigger concerns than their data being completely gone."
I agree with all of your points. We are a long time VMware customer, but only pay for support to get the ability to upgrade seeing as support is almost worthless. I say almost because I had issues moving to VDS this year and opened a ticket (hadn't opened one in three years as it was always a waste of time) and I got someone with a clue. He didn't fix my issue, but he pointed me in the right direction.
I am waiting for the day the PHBs say they won't pay for support anymore, and frankly I don't blame them. We never use the new features. I looked at the vSphere 8 features list, and quickly concluded we would gain nothing by upgrading.
What platform did you migrate to?
...that still uses an honest to God answering machine for one of their depots in the Greater Toronto Area.
I had a package stuck in customs for ages, and then it was stuck at the depot for ages, because the driver couldn't figure out which unit in an industrial building to deliver it to.
The depot would call me to tell me this every time they tried to deliver it, and I swear on the FSM that it was an answering machine every time I called them back. I wouldn't have been surprised if it recorded to a micro-cassette.
I already had a very low opinion of FedEx prior to that incident, and today I have an even lower one. I didn't think that was possible.
I think you missed the math here. We pay 10x the ANNUAL support cost EVERY MONTH to AWS for less than 1/3 of the instances.
Support for servers, FC, storage etc is still far, far less than that per year. I could show you all the spreadsheets. There is a reason why Bezos is one of the richest people on the planet, and that reason is AWS.
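The arithmetic behind that comparison is worth spelling out. With a hypothetical support figure (the 100k below is invented purely to make the multiple concrete), "10x the annual cost every month" compounds like this:

```python
# Hypothetical numbers purely to make the multiple concrete.
annual_support = 100_000                 # on-prem support contract, per year
monthly_cloud = 10 * annual_support      # "10x the ANNUAL support cost EVERY MONTH"
annual_cloud = 12 * monthly_cloud
print(annual_cloud // annual_support)    # 120 -- the yearly cloud bill is 120x the support line
```

And that's before normalizing per instance: with under 1/3 the instance count on the cloud side, the per-instance multiple is bigger still (though support is of course only one line of the on-prem budget, not the whole of it).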
Every time we put up another VM in our environment, our cost stays the same. Granted, this only holds true until we need to purchase another VMware host or bit of storage, but with deduplication, and hosts just getting larger and larger, we keep doing more with a smaller physical footprint. Our on-prem costs just keep going down.
Every time we put up a new instance in AWS, the bill goes up.
I like that you mention DR. There is this thinking that since one has deployed AWS in multiple AZs that they have DR in that region. People don't get that if one of those AZs goes TITSUP, there is not enough compute or storage IO in the other regional AZs to run that same load. It will probably make the other AZs keel over and die. This has actually already happened. Unless you are running instances in another region or zone waiting to take over for DR, AWS is not going to provide DR for free.
So no, once you reach a certain scale doing on prem in colo with real equipment, AWS is never close to the same cost. It's at best five times higher.
To some degree their idea that they can just retain the bigger customers and reap the rewards makes sense. We have over 3000 VMs that make money for our business. We just barely pay six figures (USD) in support each year, which in the grand scheme of things is a bargain considering what we receive. We pay 10x the annual support EVERY MONTH for AWS, for about 1/4 the number of systems (granted, there is more that comes with that $1M a month, but it's still absurdly more expensive than on prem).
While there are alternatives, trying to move 3000 VMs of legacy applications to Proxmox or Nutanix or some other hypervisor sounds great until you look into the reality of how much work that would take to save a small portion of what we pay each year.
That said, I was not happy to see that Broadcom is buying VMware. I think it's all downhill from here. They are right though, some of us are locked in regardless.
As to the person interviewed who thinks VMware support is "OK" and that VMware thrived under Dell: they apparently were not a customer before the Dell ownership. Prior to Dell, VMware had one of the best support groups in the business; it is now the worst. I haven't opened a ticket with support in more than three years because it's a waste of time.
Trying to discredit a concept with a "this one time" example of why it's a bad idea is generally a poor argument.
That is, of course, unless one of those times is a nuclear explosion, or a massacre that could have easily been prevented (oh wait, those are happening every day now).
People who do repetitive, easily quantifiable, laborious jobs like warehousing, postal delivery, and manufacturing greatly benefit from having unions, for all the reasons discussed here.
There will always be the "I saw someone at a gas station using their food stamp card to buy beer" example; that doesn't mean people shouldn't get food assistance because of one asshole (who probably didn't really buy the beer with his food stamp card).
AWS is a ruthless corporation, and they are not cheap. Even the "cheap as chips" storage people tell me about isn't that cheap vs rolling your own. They also pretty much invented this cloudy world because they had all this spare hardware, bought for the holiday ordering season, that was freed up the rest of the year. They still have free resources outside of Nov-Dec each year, I am sure.
MS makes tons of money on Azure by forcing people into agreements to use the stuff. They have lots of agreements where people pay for Azure but never launch an instance there. Talk about free money. Oracle does the same thing with its cloud. I am sure IBM also forces some mainframe users to purchase IBM cloud credits that are never used.
Google has none of these built in advantages, and as people often say around here, cloud computing is just your stuff on someone else's computer. Computers still pretty much cost the same to obtain and maintain, and AWS needs to make money on top of that (those rockets aren't cheap!), so it's no surprise that Google is going to lose money on this for a while until they get more people hooked on it.
Agreed on Control Panel. There must be some secret UI school that tells designers to gather research on how users navigate products during their daily use, and then break all the functionality for change's sake.
If one wants to see another fine example, look no further than NetApp's botched release of 9.8 - so many things just don't work, it's downright fraudulent. Then notice how they have worked hard to take an incredible amount of information out of the display altogether.
Our lives have been forever ruined by the HTML 5 client. We held on to esxi 6 as long as we could stand it.
Writing "a justifiably unloved C# client" makes me wonder who the author spoke to that didn't like the C# client. I haven't met a person yet that doesn't long to go back to it.
While what you say is true, it eliminates many of the cost savings cloud does have to offer, namely using services instead of a server with an app on it (or many servers with many apps on them).
Being cloud vendor agnostic is extremely expensive. If you have complicated products, you now need experts in two or three cloud providers, and you have to have all the infrastructure pieces for them to work together.
There has been an unwritten rule for a long time to never run anything in AWS us-east-1 if one wants it to work without issues. I have heard this for well over five years now.
But the underlying issue here with the Kinesis service is like many of the other outages that occur at AWS: no one else in the world has systems like this, because they are proprietary to AWS.
Even if someone did have a similar system, with no issues around scalability, possible failures, and how to fix them, no other company is operating at the scale AWS does. Things will break, and they will continue to cause outages like this one for years to come. There is just so much they don't know about their products and services, because they haven't had a failure with them yet.
It brings me right back to the EBS failures they had a few years ago.
1) EBS volumes for a whole region went TITSUP.
2) Smart AWS customers had their EC2 systems set to reboot in another region when they went down just like AWS taught them to.
3) EBS storage systems in the other regions struggled mightily to boot up all these newly starting systems, and those regions suffered tremendous performance problems that essentially blew up everyone's day.
That these people had not planned for the possibility of a whole region failing and everything booting up in all of the other regions was enough to understand that this sort of outage will just continue to happen at AWS and the other cloud vendors.
This is the real issue. Devs who come up with some grand idea and get PHB approval and run off and build something with no involvement with security or IT teams.
They then stick credentials and PII in unsecured S3 buckets because they had to open up all the perms to get their app to work.
Security/Compliance/IT teams have no opportunity to help, because they aren't involved.
Several years ago, we had a bunch of devs that wanted to update their resumes and get into cloud.
One strategy they came up with to justify the move was to send our Director all the sev1 tickets from the last two years, implying they were colo/systems/networking issues.
Said Director asked us about the tickets, and after we showed him the details, every single outage turned out to be due to their shitty code. Not a single hardware failure or systems/networking/colo engineering cock-up.
All of them work elsewhere now, and we are still happily on our own kit at 1/10th (or less) the cost of public cloud.
While your argument is sound, the ability to quickly and easily provision environments at aws/azure/google is often so developers and project leaders can get around those pesky business processes that slow down "innovation".
We have lots of business processes in place, but that didn't stop developers from going out on their own and setting up business-critical systems at AWS without going through any of them. Then they throw it all over the fence to the ops team when there are operational problems they didn't think through and the business tells them to hand it over (and it's a steaming pile of poo, and comes with no documentation). That's if most of the team even sticks around (the smart ones left because they did all this resume-driven architecture to advance their careers).
I have not worked at an org that didn't have good visibility into its own on-prem environments. You know what subnets the network and security engineers have provisioned, and from there it's easy to scan the network and find any rogue systems. Any systems that go unclaimed can either be shut off or have their ports disabled. Not so easy to do in the cloud.
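A rough sketch of that rogue-host sweep, using only the standard library. The subnet and port list are made-up examples; a real sweep would iterate your actual provisioned CIDRs and run with the network team's blessing:

```python
import ipaddress
import socket

def hosts_in(cidr):
    """All usable host addresses in a provisioned subnet."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]

def port_open(host, port, timeout=0.5):
    """Cheap liveness check: can we open a TCP connection to this port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(cidr, ports=(22, 80, 443, 3389)):
    """Map each host in the subnet to the example ports it answers on."""
    return {h: [p for p in ports if port_open(h, p)] for h in hosts_in(cidr)}

# e.g. sweep("10.20.30.0/24") -- anything answering that no one claims
# gets shut off, or has its switch port disabled.
```

It's crude (a proper tool like nmap does this far better), but it illustrates why the on-prem case is tractable: the address space to check is finite and known up front.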
Sadly, this is the same story everyone is dealing with in the cloudy world and developers.
Anyone who has let devs run free with a credit card or an account in the cloud winds up with stories like this. I am not suggesting that security and systems engineers don't make similar mistakes, but developers just don't think about, or have much experience with, locking down these environments. Yet the bean counters and CIOs that want to be "cloud enabled" and "flexible" and want to "innovate" just keep allowing it to happen.
We had a bunch of open S3 buckets with data in them we didn't want in the wild. We got the email from AWS telling us about it, and it took days not only to find out who was responsible for it, but who had the creds to do anything about it. It was a sole developer.
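For anyone in the same spot, a sketch of a first-pass audit. The boto3 calls shown (`list_buckets`, `get_bucket_acl`) are real S3 APIs, but treat this as a starting point: bucket policies and the account's Block Public Access settings also need checking, which this does not do:

```python
# Canonical URI for the S3 "everyone" group in ACL grants.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_public_acl(acl):
    """True if any ACL grant targets the world-accessible AllUsers group."""
    return any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS
        for grant in acl.get("Grants", [])
    )

def public_buckets():
    """Names of buckets whose ACLs grant access to everyone (needs AWS creds)."""
    import boto3  # third-party; imported lazily so the pure ACL check runs without it
    s3 = boto3.client("s3")
    names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
    return [n for n in names if is_public_acl(s3.get_bucket_acl(Bucket=n))]
```

Run with read-only credentials; the point is to produce a list of bucket names someone then has to claim, which is exactly the days-long hunt described above.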
EBS is a very secret system that no one is allowed to understand. The only details AWS will provide about it are general service levels and uptime/redundancy figures; you aren't allowed to know how it works or how it's redundant.
I can't understand why anyone in technology wouldn't want to understand how it works, but all these developers and PHBs seem to be fine with not knowing.
I had a days-long discussion with a developer at one point, explaining that I have never had a storage failure on a RAID system in the more than 20 years I have been doing this at many different orgs, some of them Fortune 500. When it finally dawned on him that it's not normal for businesses to incur data loss due to disk failure, he was shocked (because he was a developer and just thought about things related to what he knew, like his desktop computer).
No offence to any devs out there...
Yes, I get this sentiment - it does make sense when you imagine it, but there is still maintenance and operational work to be done with a cloud environment, albeit different maintenance.
And while the meatbags are expensive, so are the meatbags one would need to hire to have an effective cloud deployment. Letting developers with 20 years of coding experience and zero experience in operations or systems run things leads to badly designed cloud deployments.
Now the push is all about Hybrid deployments, so you still need the meatbags that run all the on premises gear as well as the meatbags that are dealing with the cloud deployments.
As to innovation - if a company has business needs that require innovation - problems that need solving, cloud is rarely the answer to the problem. Nothing that can be done by any of these cloud providers changes business needs all that drastically. The only case I can see it being a game changer is the ability to grow exponentially on demand to deal with slashdot effect as we used to call it.
Any other innovations can be done on premises just as well as in the cloud.
Hell of a lot cheaper. A good guideline I've heard is that once you hit $5K a month in cloud costs, doing it in a colo is cheaper. I can attest to that.
We have colocation sites with 8 racks using loads of power, and the service is less than $10K a month. None of these sites has had an outage in the 9 years I have been around.
Storage arrays are way cheaper than they used to be, even flash ones.
Servers are cheap, especially if you buy used ones.
Hypervisors are a commodity these days.
Internet bandwidth is cheap - certainly when you don't pay by the bit sent.
Lots of people are leaving the cloud as the notion of it being cheaper is just laughable.
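That $5K-a-month guideline can be sanity-checked with a toy breakeven calculation — every figure below is an invented example, not a quote from anyone:

```python
# Invented example figures: capex for owned kit vs two flat monthly bills.
def breakeven_months(hardware_capex, colo_monthly, cloud_monthly):
    """Months until buying your own kit beats the cloud bill, or None if never."""
    saving = cloud_monthly - colo_monthly
    if saving <= 0:
        return None  # in this toy model the cloud is never the more expensive option
    return hardware_capex / saving

print(breakeven_months(hardware_capex=60_000, colo_monthly=2_000, cloud_monthly=7_000))  # 12.0
```

With those made-up numbers the kit pays for itself in a year, and every month after that is pure saving; shrink the cloud bill below the colo bill and the model correctly says buying never pays off.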