Don't choose between two evils at all.
Sam Newman, a consultant and author specialising in microservices, told a virtual crowd at dev conference GOTOpia Europe that serverless, not Kubernetes, is the best abstraction for deploying software. Newman is an advocate for cloud. "We are so much in love with the idea of owning our own stuff," he told attendees. "We end up …
I work with a large group of bog-standard 'enterprise' developers who write mainly Java and deploy to on-premises Linux VMs. They already barely have a clue between them about the operating environment: basics like file descriptors, network sockets and JVM settings.
The idea of abstracting away all the difficult bits into the cloud can sound attractive, but it seems to almost invariably end up working out more expensive, and it's still highly complex - just a different type of complexity.
If there's a major problem with your code you've got naff-all chance of diagnosing many types of problem in the cloud, and then you're at the mercy of whichever provider and SLA you've paid for. In my limited experience with Azure and paid support, even a major issue can end up bouncing around for days between support teams.
Meanwhile, with a very small but experienced group of DevOps engineers, most on-premises issues are diagnosed and fully resolved very quickly.
In fact we keep bailing out projects that were headed for the cloud but failed to get there in the end.
I keep hearing people rave on about Kubernetes and sure, it looks cool. But it's also seriously complex and really needs a whole stack of other tools to become useful.
On the flip side, "serverless" and Functions as a service might be applicable in a limited number of scenarios, and maybe they are more developer-friendly, but they're still hiding a massive nest of complexity behind them.
I totally agree with the previous comments about using the right tool for the job. In this case I think it's still just an ideology which will never work in practice - not because of a technical limitation but because of the need for people to understand it and completely reset their way of designing and writing software.
I think this is exactly the point: Devs should not care about where their code gets deployed. If you do not have a decent architecture then you will screw up regardless of the environment. And if you have Devs doing the architecture, then you are probably screwed big time.
Devs should not care about where their code gets deployed
And this is how we got code that assumes >100Mbps network speed.
And this is how we got code that assumes <5ms RTTs.
And this is how we got code that assumes >32GB of memory available.
And this is how we got code that assumes >32 CPUs available.
It's a great theory, but it doesn't work in practice.
You absolutely DO need to know, at the very least, the constraints that will apply where your code gets deployed. Or it won't work. You can't abstract your way out of reality (unless you call it Brexit, of course).
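One defensive habit that follows from this: verify the environment's constraints at startup instead of discovering them in production. A minimal sketch, with an illustrative threshold and a hypothetical `check_environment` helper (neither is from any real codebase):

```python
import os

# Sketch: make environment assumptions explicit by checking them at
# startup rather than silently assuming the deployment box matches
# the dev box. The threshold below is illustrative.
MIN_CPUS = 4

def check_environment(min_cpus=MIN_CPUS):
    """Return a list of mismatches between assumptions and reality."""
    problems = []
    cpus = os.cpu_count() or 1
    if cpus < min_cpus:
        problems.append(f"only {cpus} CPUs, code tuned for >= {min_cpus}")
    return problems

# Fail fast (or at least warn loudly) rather than degrade mysteriously.
for p in check_environment():
    print("WARNING:", p)
```

The same pattern extends to memory, disk and network checks; the point is that the assumption is written down and tested, not baked invisibly into thread-pool sizes and timeouts.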
But he conceded that serverless has restrictions. With functions as a service (FaaS), there are limits to what programming languages developers can use and what version, especially in Google's Cloud Functions, which has "very few languages supported".
I think there are 4: Go, JS, Python and Java.
Nowhere near as many as Azure's... 5. And one of them is PowerShell.
I get the need for simplicity, but have you actually done a large serverless project? With AWS, just managing the CI config for tons of functions is a complete nightmare. And if you have functions calling other functions, which is a pattern in serverless, how do you even begin to debug that when something goes wrong? I think it's mostly about the tradeoffs you want to make. We decided to go with K8s because we can solve a much larger set of problems with it than with serverless (stateful services, for example). Give it a year or two more and all the cloud providers will have a serverless K8s option. AWS is going down that road with Fargate. Then for me the question becomes why even use serverless functions at that point, when container workloads can do all that serverless can and much more.
You start at the function that errored and work your way back through the logs.
Having an HTTP log of every function call actually makes this easier than debugging functions calling functions in a monolithic app. Calls inside a monolith don't all emit logs.
New stuff is hard but so was the old stuff. You spent time learning the old stuff. Maybe it's time to specialise if keeping up is becoming tiresome.
Logging is only as good as you make it. No free lunch regardless of where the app is running. If logs are spread out, you now need an aggregator to correlate all of that. Just one more dependency.
You make serverless sound so easy. But when you build something totally serverless and you're deploying 70+ functions, in practice it just doesn't work. I've seen it. Eventually they just switched to a simple monolithic app that was super easy to monitor and debug and hosted it in a container (or something with pretty low maintenance). It wasn't the cheapest option, but it was the easiest to reason about and maintain. Right tool for the right job, which was the right fit for the size of team maintaining it. AWS, for example, provides X-Ray as a poor man's Dynatrace, but it is woefully inadequate. Just "looking at logs" is the old way; there have to be ways to easily trace entire business transactions through all dependencies, and serverless makes that hard right now because unless you have something in your code that can plug into the runtime, it's really hard to monitor. We pay for Dynatrace and run the OneAgent on K8s and anything else that is not serverless. It figures out our dependencies and tells us exactly where the issues are. You can't get that with serverless. I get the serverless concept and why it is cool, but I don't get how a spaghetti-bowl diagram of 20+ interwoven AWS services (all "serverless") is better than a much simpler, maybe more monolithic design. I'm for microservices as much as the next person, but microservices being good doesn't make monoliths bad or less maintainable. Nor does serverless solve all the things.
For 90%+ of the businesses out there...
How about two nice big hardware boxes (and an additional spare) located wherever you like? One with OpenBSD and relayd/httpd (and Python?), connected to the other running PostgreSQL on OpenBSD. Before I retired, my consultancy rate for something like that would have given you a nice report with lots of pretty graphs, and sufficient management-speak to look good, for $10,000.00. I would even have worn a jacket and tie for some customers. Training would have been at my "bargain rate" (say another $25,000.00). A couple of jobs a year, plus ongoing consultancy; a few odds and ends; and plenty of time on the beach, or in the coffee shop - Sorted.
Cheap at twice the price :-) OK, an exaggeration, but not that much of one for the 90%+...
All these issues with public cloud being more expensive (in the long run) than keeping your own iron boil down to one simple measurement: how well they are managed. And I'm starting to suspect that there is a bias in the comments. Look, the reality is that Reg commentards are likely a notch or two above the "normal". Their "normal" is an understaffed enterprise team trying to support an overwhelming number of legacy systems on a tiny budget, and coming up favorably in a cost comparison because they are motivated enough to keep learning and applying common sense to complex problems.
Whereas the "normal" "normal" is a bunch of old-timers who are still trying to get basic technology right (TCP, DNS, AD, etc.), whose training has not been updated in the last decade, who fear change like the plague and who spend their days firefighting monsters of their own creation. And in these contexts, public cloud is a blessing: cheaper, works almost every day and provides technologies you can only dream of in your current environment.
Of the monoliths I've worked on, I think most of them looked like this: a static front end that talks to an API which wraps business logic around a database. None of them used features of the OS. They could be hosted just about anywhere (framework permitting). However, having used Lambda, I felt that the debug toolchain wasn't there, but I'm watching it closely as it could be a good fit for my current role in the future.
Cloud has many advantages.
The biggest advantage is that it is not a headcount item, so the management of the servers can't be downsized away. Servers do get patched; no FUD can delay that. Bad code will break, but you have to adapt. Inside company walls, on your own servers, you would be forced to roll back.
But it is expensive and comes with very little support. It usually runs better than self-maintained systems, though.
I like cloud for many reasons, but there are also risks associated with it. You have to compare internal operations risks vs cloud operations + cloud risks.
Many companies outsource or go cloud because they can not get qualified staff to run things on-site. DevOps people (the new word for the classic sysadmin) are a rarity.
"The key characteristics of a serverless offering is no server management. I'm not worried about the operating systems or how much memory these things have got; I am abstracted away from all of that".
Technically true but massively misses the point. AWS Lambda requires you to decide how to allocate "memory to your function between 128MB and 3008MB, in 64MB increments" (https://aws.amazon.com/lambda/pricing/). So now you have to capacity manage memory at function level rather than server/cluster level.
There are lots of good things about serverless, but this ain't one.
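To see why this is still capacity management: Lambda-style billing charges roughly per GB-second, so the memory you allocate to each function is a direct cost (and performance) knob. A rough back-of-envelope sketch; the price constant is illustrative, not a current quote:

```python
# Why per-function memory allocation is still capacity management:
# cost scales with the memory you allocate, per function, not with
# what the function actually uses.
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative rate, not a quote

def monthly_cost(memory_mb, avg_duration_s, invocations_per_month):
    """Rough GB-second cost for one function, ignoring free tiers
    and per-request charges."""
    gb = memory_mb / 1024
    return gb * avg_duration_s * invocations_per_month * PRICE_PER_GB_SECOND

# Same workload at the two ends of the 128MB-3008MB range the
# article quotes:
low = monthly_cost(128, 0.5, 1_000_000)
high = monthly_cost(3008, 0.5, 1_000_000)
print(round(low, 2), round(high, 2))
```

Multiply that decision by 70+ functions (as a commenter above describes) and you are doing the same sizing exercise you did at server level, just with more line items. (In reality more memory also buys proportionally more CPU, so duration usually drops too; the tradeoff is real but it is still yours to manage.)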
Kubernetes is a lot like java. It was cool when it was young but then became a lawyer. Glassy eyes, fat, full of pedantic details and insanely boring.
Also, similarly to Java EE and the like, you could create a black hole by amassing all the resources and books written about them in one place. Some believe this has already happened.
Seriously now. Kube was/is first and foremost the final attempt of Google et al. to curb AWS. I don't think it has much to do with productivity for any small/medium company out there that is dragged into its complexity. But it's very hard not to drink the Kool-Aid.