
Surprise
Deployment of workloads and dependency management are fundamentally hard. Adding more layers of abstraction doesn't make that go away.
> Google has recognised that users struggle to configure Kubernetes correctly and introduced a new Autopilot service in an attempt to simplify deployment and management. Two things everyone knows about Kubernetes are: first, that it has won in the critically important container orchestration space, and second, that its …
AWS allows free usage indefinitely if you keep your usage low, with the API fully available for the free-tier services.
GCP allows free usage for three months if you keep your usage low, and as for finding the API? I ended up giving up.
Google KNOWS that human error causes 80% of their own OMGs. Using the API is the only acceptable way to manage prod resources. And yet, they don't seem to want you to play at that level.
> Using the API is the only acceptable way to manage prod resources
That's just Google and APIs though.
It's not just GCP: Google's APIs, and the management of access to those APIs in general, are a complete mess, often relying on you knowing the exact link to get to something.
Fairly recently I was adding a Nest thermostat to Home Assistant. Just look at the setup instructions - https://www.home-assistant.io/integrations/nest/#device-access-registration - and then consider that there are steps/bits missing.
Google's problem with APIs has always been one of discoverability. The APIs themselves are (generally) solid, but actually finding the API you need to enable is a nightmare. There's a similar issue with their documentation - the APIs are well documented, but to find that documentation you often need to already know the API you need.
It's a shame, because they've got some incredible minds behind those APIs, but it really is hard to recommend building any dependency on GCP if there's an option to use AWS (or even Azure) to achieve similar ends.
That read like an advert to me.
Does sound like it adds some assurance, but deliberately removing SSH sounds like it's appealing to those security types who have no clue how anything works or how people do stuff, but have heard about engineers SSHing into things and reckon everything would be safer if they couldn't do that.
"removing ssh sounds like it’s appealing to those security types who have no clue as to how anything works or people do stuff but have heard about engineers ssh’ing into stuff and consider things would be safer if they couldn’t do that."
Yep, it also blocks mutating admission webhooks, so it's useless for anything more than simple stateless service deployment.
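For anyone who hasn't run into one: a mutating admission webhook gets registered with the cluster via a manifest roughly like the minimal sketch below - the service name, namespace and path here are made up for illustration. This registration object is the sort of thing being blocked.

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-sidecar-injector          # hypothetical name
webhooks:
  - name: inject.example.com              # hypothetical webhook
    clientConfig:
      service:
        name: sidecar-injector            # in-cluster service that serves the webhook
        namespace: infra
        path: /mutate
    rules:                                # fire on pod creation
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore                 # don't block pod creation if the webhook is down

Sidecar injectors, policy agents and the like all hang off exactly this mechanism, which is why losing it hurts.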
I'm torn: going without SSH means giving up in-place debugging.
But if your remediation is to replace the image, then I'm not against it.
You do end up spinning up a box with SSH, doing the build and saving a snapshot post-config.
So it's not so much running without SSH as running a previously configured image without SSH.
Remediation for these things IS to replace the running container. Using SSH to troubleshoot a running container in production is really missing the point.
Making a container a simple virtual server replacement is missing the point and asking for trouble. Fix the problem in dev and roll it to production. You shouldn't need SSH in production. It's a crutch.
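In Kubernetes terms, "fix it in dev and roll it to production" just means bumping the image tag on a Deployment and letting a rolling update swap the containers out. A minimal sketch, with hypothetical names throughout:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                       # hypothetical app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0                   # keep full capacity during the roll
      maxSurge: 1
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/example-app:1.0.1   # bump this tag to remediate

Re-apply with a new tag (or run: kubectl set image deployment/example-app app=registry.example.com/example-app:1.0.2) and every running container gets replaced with the fixed image - no SSH anywhere in the loop.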
And you buy into a service, then notice a few months later they've changed it. I was about to do a video on how to get unlimited drive space for just over £100, compared to the cost of Amazon Drive, Google Drive etc. It involved a little bit of knowledge: paying for a domain name, paying for a small bit of hosting, then paying for the business package of G Suite. Set it up and you have unlimited drive space for one user for only £11 a month, plus you can use the email, Google Docs etc. Altogether it works out cheaper per year than Amazon Drive etc.
I was on the basic package originally, as my last company used it, so I wanted to keep the knowledge up. I realised the above was cheaper than the drive-only packages and that I could use it as offline backup for all my home stuff. Then I noticed the name had changed on my bill, so I looked it up. It turns out they'd scrapped my package (although I've not been forced off it yet), and if you want unlimited storage you now have to pay for the enterprise package.
Bastards.
It sounds like "enterprises" ought to be doing a little more research and not simply following a trend. Having an application broken down into a lot of microservices raises issues of authorisation and data consistency as well as deployment and resource management. All that comes at a cost; you need to identify the compensating benefit before you start "embracing" anything.
How many layers of bugs are we supposed to tolerate in this sodding baklava? Code is evil, strive to use less.
Personally I'm trying to learn Elixir/OTP. Basically a genuinely lightweight services architecture. (Nanoservices?) I'll see how that goes.
So, what's not to love? The management of complex server architecture as a series of, on the surface, simple text files.
Hoorah for devops!
The problem here is that understanding the complex architecture of networking and the underlying hardware resources is... quite an advanced skill set.
Sure, it can now be expressed as code, but that doesn't mean the underlying complexity is any less - it just means a n00b with hardly any knowledge of the complex architecture behind what they are doing can construct a text file and fling it at, say, AWS, supposedly safe in the knowledge that "it will just work" (there's a sketch of such a file below).
More often than not, sure, it does. Layers of redundancy and hopefully some decent peer reviews will usually prevent serious P1 downtime.
But, as we all know, it's that 1% of cases that fling the shit at the fan - and the n00b coding infrastructure in a YAML file has zero understanding, really, of what has just transpired.
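To put something concrete behind that: below is the sort of deceptively simple text file in question, a minimal CloudFormation sketch (the bucket name is hypothetical). One declared resource; all the storage, replication and access-control machinery it triggers stays invisible to whoever flings it.

AWSTemplateFormatVersion: "2010-09-09"
Description: Hypothetical example - one line of intent, a lot of hidden machinery
Resources:
  BackupBucket:
    Type: AWS::S3::Bucket                 # provisions storage, endpoints, replication targets...
    Properties:
      BucketName: example-team-backups    # hypothetical name
      VersioningConfiguration:
        Status: Enabled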
Devops is in its infancy - and we absolutely need seasoned IT people who were around before these developments to provide the understanding required to ensure the most uptime possible.
Out.