> And the community still has less than 30 days notice to work around all of this.
They should demand their money back.
Container biz Docker on Thursday scrambled to bandage a self-inflicted marketing mishap, its ham-handed discontinuation of Docker Free Teams accounts. "We apologize," Tim Anglade, chief marketing officer at Docker, proclaimed in a mea culpa. "We did a terrible job announcing the end of Docker Free Teams."
I couldn't decide whether I should give you a thumbs up or a thumbs down. You are onto something, but your story is not complete. Docker uses user and download numbers to attract VC money, and Docker's business model is still extended womb gestation. Neither the provided link "Docker Raises $105 Million to Accelerate Investments in Developer Productivity, Trusted Content, and Ecosystem Partnerships" nor the TechCrunch piece "Docker makes comeback with over $50M in ARR two years into restructuring" mentions a word about profit/loss - only ARR (annual recurring revenue), used as a proxy for "success". So $105 million in funding is just kicking the can down the road - no clear path to life after cutting the umbilical cord.
Software service is not software - open source software service is a misnomer.
"Docker open source software" should probably be limited to the open source software and some scripting to turn into a docker container - the whole thing to be downloaded from a git server. That would do a lot for software security as well.
In the end, I gave you a thumbs up, with the understanding that Docker's financial model is still bogus because they are using their free and lower-end plans to juice their numbers so that they may continue to suckle VC money.
Well you've missed the point spectacularly there.
We're not talking about Docker's business. We're talking about the organisations that are relying on a free service as part of their infrastructure that haven't even considered what to do if they lost that free tier support.
Is that a business you're running? Then fucking act like one.
> We're talking about the organisations that are relying on a free service as part of their infrastructure that haven't even considered what to do if they lost that free tier support.
Well you're not getting it either.
Docker Inc has two businesses - the desktop client that wraps and extends the OSS docker engine to improve the developer experience, and the hosting of the primary docker image registry. These projects are not (educated guess) users of the desktop client; they put images into the registry that are used by other people.
These images are what give the registry business value! Saying they should ask for their money back is nonsense, Docker Inc needs these organisations to be attractive, which is why they are falling over themselves to fix this situation.
A classic example of Silicon Valley's bait and switch techniques:
Get people hooked on the free product.
Remove free product and replace with a lesser free product, but give you an enhanced* version for a fee.
*The same as the old free version, but with a few extra features no one wants, to make it seem different.
We have some software IBM packaged up into Docker containers (instead of just giving us a normal Windows .exe install package).
Really pleased for them that it reduced their costs in developing and maintaining their software, while they told us to piss off when we had frequent problems with Docker in the early days.
Server 2019 gave us native support for Docker containers, which was less painful, BUT Microsoft aren’t going to do this any longer. If we want to use the “Mirantis runtime” (hint: we don’t) we need to pay them 900 bucks a year an instance.
IBM have told us to pay this trivial sum…
…for software we don’t want
…to run software we do want
…which IBM chose to package into wanky containers instead of just developing a native windows version
…to try to flog RHEL and Red Shift
…or their cloud
Absolute bait and switch…IBM fell for it…and have now passed the cost on to us.
Podman to the rescue!
At work, this is where we are heading as the Docker license fee for Docker desktop becomes substantial.
You can use an existing Dockerfile to build an image with Podman, or you should be able to just run the existing Docker image directly with Podman (it's daemonless).
https://developers.redhat.com/blog/2020/11/19/transitioning-from-docker-to-podman
https://www.redhat.com/sysadmin/run-podman-windows
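In practice the commands map pretty much one-to-one. A rough sketch (image and container names here are just placeholders):

    # Build from an existing Dockerfile, same syntax as docker build
    podman build -t myapp:latest .

    # Run an image straight from Docker Hub (the fully qualified name avoids registry ambiguity)
    podman run -d --name web docker.io/library/nginx:alpine

    # Many people simply alias the CLI so existing scripts keep working
    alias docker=podman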
On Windows, Podman runs on top of WSL2, which is not without its foibles, so I'm not sure I'd recommend it for a production workload.
> It just seems like yet another layer of abstraction that was never required in the first place.
Actually, it's less abstraction.
In a VM I have: Host-Machine -> Host-OS -> Virtual-Machine -> Guest OS -> Application, or 5 layers.
In a docker setup I have Host-Machine -> Host-OS -> Docker -> Application, or 4 layers.
And this is just counting layers. I am not even taking into account the difference in complexity between the Guest-OS and Docker here. I don't have to simulate an entire computer's hardware inside another computer's hardware just to run an application. I don't need to provide resources to that simulation that may or may not be used. And the entire setup is based on instructions in plain-text files, instead of gigabyte-wasting templates or OVFs.
And how do you update a stack component that runs directly on a machine?
You have to update the software, which requires downtime. With docker, I can build the new image while the stack is still running, pull up the new container, and kill the old one, with a downtime measured in seconds. If my components function as microservices, I can even spin up the new container into the network of the old one, hook it in, and then kill the old container, resulting in an update with zero downtime. Sure, this can be done with baremetal servers or VMs as well...if I have multiple servers, which again makes things more complex than they have to be.
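In rough terms, and with made-up image and container names, that swap looks something like:

    # Build the new image while the old container keeps serving traffic
    docker build -t myservice:v2 .

    # Start the replacement on the same user-defined network as the old one
    docker run -d --name myservice-v2 --network stack-net myservice:v2

    # Once it's healthy, repoint consumers (or let the proxy pick it up) and retire the old one
    docker stop myservice-v1 && docker rm myservice-v1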
Also, running on metal / in a VM means the host for the stack has to provide all requirements, and has to be updated if these requirements change. And what if stack components have conflicting requirements? Oh dependency hell, how I don't miss it for a femtosecond. Containers solve that problem elegantly, by providing for their requirements themselves. All that is expected from the host system is the docker daemon.
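As a trivial illustration (runtime versions picked arbitrarily), two services with incompatible runtimes can happily share one host:

    # Each container brings its own runtime; the host only needs the docker daemon
    docker run -d --name legacy-app python:3.8-slim  python -m http.server 8000
    docker run -d --name modern-app python:3.12-slim python -m http.server 8001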
Speaking of requirements, how do we specify these so they are easy to reproduce in an automated pipeline, and can be checked into a VCS? How do we make sure these specifications work independently of the platform? With docker, this is an elegantly solved problem: a collection of Dockerfiles and docker-compose files.
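A minimal, hypothetical example of what actually gets checked in (service names, ports and versions invented for illustration):

    # Dockerfile
    FROM node:20-alpine
    WORKDIR /app
    COPY . .
    RUN npm ci
    CMD ["node", "server.js"]

    # docker-compose.yml
    services:
      api:
        build: .
        ports:
          - "8080:8080"
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example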
What about resource usage? Say I need several isolated stacks, e.g. for different customers. Every single one of them requires only a fraction of the system's resources. If I have to spin up new servers / VMs for each, everything beyond the stack's needs goes to waste. With docker I just pull them up on one server; every container is essentially just the processes running in it, plus some overhead from the docker daemon.
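With compose project names (the customer names here are made up), the same stack definition can be stamped out per tenant on a single box:

    # Separate containers, networks and volumes per project name, all on one host
    docker compose -p customer-a up -d
    docker compose -p customer-b up -d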
I could continue for a while like this, but I think this conveys nicely some of the reasons why docker caught on in the first place.
> Docker and containers in general just make everything more of a pain
How so?
I can pull up our entire stack, which includes several databases, 2 separate networks, and over 2 dozen microservices, webservers, load balancers and message brokers, from docker images in a few seconds. And that's on my laptop. Because I can do that, I can try a change in a hurry, destroy the containers, rinse and repeat. The software uses barely more resources than it would running natively on my machine, because that's actually what it does in the background.
How does that workflow compare to, say, VMs, which I have to clone from templates, which eat up 2 orders of magnitude more system resources, are slower to start/stop/destroy, and whose network configuration is way more painful? How easy is distribution of VMs to production systems? With docker I check in the Dockerfiles and compose files and run the pipeline.
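Concretely, that local try-it/bin-it loop is little more than this (assuming the compose files sit in the repo root):

    docker compose up -d     # whole stack up from cached images in seconds
    # ...test the change...
    docker compose down -v   # tear it all down, named volumes included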