Docker seems pretty desperate for income
And it's not doing the software any favors. Are we seeing a variation of NPM's story here? (Everyone uses the technology, but no one wants to pay for it)
Docker will restrict use of the free version of its Docker Desktop utility to individuals or small businesses, and has introduced a new more expensive subscription, as it searches for a sustainable business model. The company has renamed its Free plan to "Personal" and now requires that businesses with 250 or more employees, …
Yes, they've chosen a strange revenue model. The image hosting must cost a fortune and that's difficult to replace. The Desktop Client will be gone and forgotten in a few months.
To make matters worse, the commercial Enterprise container software doesn't support ZFS because nobody has bothered to optimize Docker's interface to it. The ZFS driver is exactly what you'd want in commercial use. It can easily boost the storage performance/cost ratio 10x to 50x in a very large system... if a few issues were fixed.
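For reference, the free engine can already be pointed at ZFS; a minimal sketch of the daemon config, assuming /var/lib/docker already lives on a ZFS dataset (drop this in /etc/docker/daemon.json and restart the daemon):

```json
{
  "storage-driver": "zfs"
}
```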
Docker's innovation was in combining the underlying Linux fundamentals (namespaces, cgroups et al.) in a user-friendly package. The concept of containers was nothing new (heck, Vserver was around in the early 2000s)
The other valuable bit is Docker Hub. The idea of being able to get any one of 8+ million images with a single command is attractive.
The problem is that Docker Hub must be *really* expensive to run; you only have to look at the pull numbers for the images to imagine their cloud bill.
If Docker and Docker Hub went away suddenly, the number of CI/CD pipelines that would clang shut would be ... large.
Docker hub must be *really* expensive to run
Being a bit behind the container curve, I struggle to see why it exists, at least in the form it does.
Given that we have this Internet thing and hyperlinks and search engines, it shouldn't be necessary to bring everything into one place - a bit of agreement on metadata and Google* could point you at the source of your desired image.
In fact it's quite the antithesis of the concepts containers notionally deliver - scaling and robustness.
Of course it isn't just Docker (npm, GitHub, ...) but the idea that you can create some sort of centralised quasi-monopolistic dependency in the hope you can turn a profit seems fundamentally misconceived.
*Other search engines are temporarily available.
Docker Hub is (IMO) the reason that Docker was successful in the first place. Docker's tech was hardly new; there have been Linux container solutions since the early 2000s.
Having a universally recognized app store was the piece that unlocked quick adoption. Also, to be fair to Docker, I don't think they expected to need to charge for it when they started; that was before Kubernetes won the orchestrator wars, when Docker still expected to make enterprise sales.
Now that they've been left as a developer-tools company, Docker Hub is one of their main assets.
Other registries do exist and they work fine with Docker: you can use ECR, GHCR, GCR et al., but you need to change all your automation to handle them :)
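Switching registries is mostly a retag-and-push exercise; a sketch of moving an image to GHCR (the ghcr.io/myorg names are made up, and your CI would need matching changes):

```shell
# Retag an existing local image with a fully qualified registry name, then push it.
docker tag myapp:1.0 ghcr.io/myorg/myapp:1.0
docker login ghcr.io                     # authenticate (GHCR uses a personal access token)
docker push ghcr.io/myorg/myapp:1.0

# Consumers then pull using the fully qualified name:
docker pull ghcr.io/myorg/myapp:1.0
```

The pain is that every Dockerfile FROM line, compose file and pipeline script carrying the old registry name has to be updated too.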
If you do some searching you can find that there are lots of images in Docker Hub that contain pr0n and warez, etc. I'm surprised more hasn't been done to clean that up. I wonder if the storage and bandwidth used by such images is simply cheaper than policing Docker Hub to prevent them would be?
I can't see why big corporations shouldn't be paying. They have been leeching off the entire industry; they pay poor wages, avoid taxes and exploit open source projects.
It is time their gravy train ends.
Big up for Docker!
I hope, though, that the sales will turn into better pay packages instead of execs squandering the profits in tax havens.
... trying to get them to actually buy something is such a nightmare battling the internal bureaucracy (at a cost in worker time that's way higher than just paying up already), that most developers feel under pressure to just find a free solution instead of having to face all the form-filling associated with trying to get the coffers to crack open. That's going to be Docker's problem I'm afraid.
Or worse, they expect you to just code whatever you need in a couple of hours; you have all that time now that you're working from home! This then leads to fudges, kludges and botches, or untested chunks of code that return to bite the company in the arse a few months or years later. Still, who cares? That's someone else's problem, right?
Two things that I hate are per-core pricing and per-user pricing. In most cases the real costs don't scale linearly, nor do the benefits. So if hosting 200 users costs 95% of the price of hosting 2000 users, why should I be happy when asked to pay 1,800 users' worth of pure profit to the hosting company? Per-employee pricing is even more galling. I get that most of these companies are subsidizing smaller accounts this way, but charge me 50% more or even double, not 900% more, or someone with 200,000 or 2,000,000 users will just write their own and stuff it down your throat.
This is the only reason I still run my own mail server: if you need more than 5 hosted mailboxes, it starts costing more than the hardware it runs on. Email hosting is one of the great examples of something that has gotten cheaper since the 90s yet still costs much more for no good reason. College textbooks being another.
Textbooks cost a lot because the paper version has such a small print run. But I get the point.
I pay for my own O'Reilly ebook subscription to get hold of the books I need. I'm spending less than I used to on paper books that sit on a shelf gathering dust.
Personally I'd like a textbook-sized colour ebook reader.
It might be easier to show compliance when licensing per employee. Some licensing regimes are so difficult that I wonder if simply proving compliance costs more than the bare license fees. Per-employee licensing also scales with company size, so a large company will always pay more than a small one, rather than giving 10 developers a single VM with a single license and telling them to timeshare.
I agree, the really large companies not only will balk at the sudden expense, but they also have more ability to find or create an alternative.
Only just started playing with Docker and I'm surprised at how long it takes to start a container and how large some of them end up being.
If I had more time I'd probably build multiple VMs and use snapshots.
There only appear to be three advantages:
1. that it's simple to use someone else's base docker and add to it quickly. But that advantage disappears if you rarely make a subsequent change to what you are using.
2. Your containers contents are well defined in a docker file.
3. I have access to GCC using an Ubuntu container on a Windows machine so locked down that I cannot pin items to the start menu / task bar / create desktop shortcuts. Sometimes I need to write small utility programs and I refuse to use the dreadful Intersystems tools that the company standardised on. But now that I think about it: surprisingly, I managed to install the Linux Shell, and that actually works better.
Very easy to end up with massive containers, and a lot of it is very sensitive to the order in which you do things, but also to choosing a starter image that is too big or unnecessary.
For example, it's common to see people go for the standard Ubuntu image without actually thinking.
when I started with it, I did exactly that.
Until a colleague (and websites) pointed out I didn't need it.
We do a lot of APIs and whatnot.
Switched from the Ubuntu image to the Alpine image. Same functionality, nowhere near the size (depending on your requirements; c-libs are the issue here, as Alpine uses musl rather than glibc)
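The base-image swap is usually a one-line change in the Dockerfile; a hedged sketch (the tags and package names are illustrative, and anything that needs glibc may not run against Alpine's musl):

```dockerfile
# Before: Ubuntu base (tens of MB before you add anything)
# FROM ubuntu:20.04
# RUN apt-get update && apt-get install -y --no-install-recommends python3

# After: Alpine base (a few MB), using apk instead of apt
FROM alpine:3.14
RUN apk add --no-cache python3
```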
A lot of the large image sizes are down to badly written Dockerfiles - most examples show the basics which become the "standard" & you get bad images.
However, with some simple care and the use of stages you can keep the image size down to a minimum.
I do that with my builds, e.g. use Debian as a base, then a stage adding gcc or other compilers, then a stage to do the build and install into a custom directory, then a final stage extending the base (no gcc/sources) and copying the compiled version into the destination.
I prefer alpine when needed as it's far smaller but sometimes have to use debian as a base depending on the project.
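The staged build described above can be sketched like this (hello.c, the paths and the tags are placeholders):

```dockerfile
# Build stage: has the compiler and the sources
FROM debian:bullseye AS build
RUN apt-get update && apt-get install -y --no-install-recommends gcc libc6-dev
WORKDIR /src
COPY hello.c .
RUN gcc -O2 -o /usr/local/bin/hello hello.c

# Final stage: extend the bare base (no gcc, no sources)
# and copy in only the compiled artifact
FROM debian:bullseye-slim
COPY --from=build /usr/local/bin/hello /usr/local/bin/hello
CMD ["hello"]
```

Only the final stage ends up in the shipped image, so the compiler toolchain and sources never inflate it.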
Can't argue on the sizes, the local images registry gets pretty fat after just a few images are pulled.
Startup speed? You're talking BS, or the Docker host you have is way under-spec'd. I have a 32GB Windows box running 2 Ubuntu VMs in VirtualBox concurrently with 12GB apiece. The VMs host Docker CE; I can spin up almost any Docker image I already have locally inside 5-10 secs tops, and that includes fat images like Jenkins or an RDBMS like Postgres that can easily demand 2-3GB memory footprints per container. That's the reason I like Docker: once you have the images on the box from the Hub, spinning up and shutting down containers takes mere seconds, way faster than trying to spin up VMs.
I assume those saying how hard or awful Docker is, are referring to building images. Docker took me a while to comprehend, because it's so different from how I normally think of software or use a computer. Once it "clicked," it seems easy.
A Docker container is an application run-time environment, a "bubble" within which the environment is controlled, where software and environment variables are independent from the host. Part of that bubble is a temporary disk that contains only what was specified in that image. (Just like in a traditional VM, you can also map folders from the host to exist inside the container.)
This is useful to run software that must behave consistently across different computers. For example, if the program requires a certain version of Java or a particular database client, or if it needs certain tools that you do not want to install on every computer. In one case, I used a Docker image of an older Linux release to host a certain obsolete software product on current Linux, so that we could retire the old server. It also makes it very easy to "reset" to defaults: just kill the container and start it anew.
The other Docker concept that broke my head is the structure of an image. It's not just a snapshot of whatever-you-got-working. It's built up in layers. Your base "CentOS" image is the first layer. Then you installed packages A, B, C and saved the image: another layer. Then you took the previous image and added D, E, F, G and deleted B: another layer. Finally, your new coworker used your latest image and deleted everything except package E. Layer 1 = 1GB. Layer 2 = ABC = 200MB. Layer 3 = DEFG = 300MB, plus (plus, not minus!) 400MB of changes from removing B = 700MB. Finally, Layer 4 = changes from removing ACDFG = 400MB. When you deploy your ABC image (layers 1+2), it's 1.2GB in size. When your coworker deploys their "only E" image (layers 1 through 4), it's 2.3GB in size. (Then you fix it by basing a new image on Layer 1, and your deployment is 1.1GB.)
In other words, each layer is a delta of the previous one. If you select the appropriate base layer for your image, it's an efficient way to store many images or versions of an image. It's also easy to roll back to previous layers, which is a joy while prototyping.
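The "plus, not minus" behaviour is easy to demonstrate in a Dockerfile (the package names here are placeholders):

```dockerfile
FROM centos:7              # layer 1: the base
RUN yum install -y A B C   # layer 2: grows the image by the packages' size
RUN yum remove -y B        # layer 3: the image still GROWS slightly;
                           # B's files are masked by a whiteout entry
                           # but remain stored in layer 2
```

Running "docker history <image>" lists each layer with its size, which makes this kind of bloat easy to spot.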
Following my earlier high-level explanation, here's an example of how I build an image.
"docker run -it -d -v /home/me/mariatest:/mnt/data -p 8080:8080 --name test1 project/nginx116" (run a Docker container starting with my "nginx 1.16" image)
Inside that container, "yum install MariaDB" then test those things I put in ~/mariatest. Tests fail. I discover that I need MariaDB 10 or higher, and the base repo has only 5.5.
Exit the container, "docker stop <id>" and "docker rm <id>" (so the --name can be reused), then that long "docker run" command again. Clean environment.
Add the EPEL repo and install MariaDB 10. Test again, success! Continue until I'm happy with the result.
Open my Dockerfile and literally add the steps I needed, "wget <EPEL>" "yum install MariaDB" "mysql_secure_installation" etc.
"docker build -t project/app-server:v1.0 ." (boom, that's the image)
"docker push project/app-server:v1.0" (push it up to the repository)
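The resulting Dockerfile for the steps above might look roughly like this (the EPEL repo RPM is elided, as in the steps, and package names vary by distro):

```dockerfile
FROM project/nginx116
# Add the EPEL repo, then install MariaDB 10.x (the base repo only has 5.5)
RUN wget <EPEL-repo-rpm> && yum install -y <EPEL-repo-rpm> \
 && yum install -y MariaDB
# mysql_secure_installation is interactive, so in an image you would
# script its individual steps instead
EXPOSE 8080
```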
I understand they need to make money... but...
This is death by a thousand cuts really.
It wasn't long ago that Docker Hub charging was implemented and a limit on free pulls added. That caused all sorts of grief, and undoubtedly led to some people leaving the platform. In fact, we were seriously considering dropping it.
This though is different.
I'm a good example. I have some APIs I'm responsible for. These are Pythoned and Dockered.
They are my only responsibility, but equally I do probably two releases a year, and therefore probably need Docker twice per release. Yes, it's only 5 or 7 USD or so per month, but extrapolate that up across bigger teams... for a few builds, is it worth it?
It's been a good ride; I got some use out of Docker and still do, but I use it for the usual suspects: MariaDB/Postgres DBs, Jenkins, RunDeck and Grafana. Stuff that's useful when you need to prototype or test out some code. So much crap on the Hub, and I can imagine only a tiny percentage is pulled over and over.
I think Docker will go on but will become irrelevant in the next few years when the next big thing comes along. I love Docker to bits but you always have to be ready to jump when time is called.
Is this license change targeted at image creators? I use Docker to create an image once, then deploy that image widely. The deployments will still be free. The private image repository will still be free. Home users and hobbyists won't pay. Small startups won't pay. Is this really only for companies that use Docker Desktop to build Docker images as part of their product?
I admit ignorance; although I use Docker every day, I've never used Docker Desktop. I just run a base image, faff around until I know what I want, create a Dockerfile, then "docker build" and off to the races.
A lot of dev teams, particularly those who use containerisation in production, will use Docker Desktop as a means to get the run-time Docker tools onto their developers’ machines, to stand up a local copy of their own software for development and debugging. Images for deployment would be built by CI systems (with docker build, or kaniko, Buildah, etc.)
But, as someone else said, it’s a hard task to sell software that is perceived as “free”. Docker Desktop will be a ground-up choice (development teams making their own lives easier) rather than a business-led choice (a technology that improves the bottom line), so if there’s little or no development tools budget - or it has already been allocated to IDEs - then Docker will be SOL.
(I personally wouldn’t begrudge paying for Docker Desktop for Mac if they’d integrate networking on macOS. Having to manually configure and run “tap” is a big annoyance)