
Ready for pull rate limits? Docker outlines 'next chapter' as Google tells customers how to dodge subscriptions

Tom 38

Re: Caching?

Docker doesn't make it easy to add caching. Let's say you have a common CI setup: a private registry that you push your built images to and pull them back from, plus public images pulled from Docker Hub.
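For illustration, a minimal sketch of that kind of setup (the private registry hostname and image names here are made up):

    # public base images come from Docker Hub
    docker pull postgres:13

    # built images go to, and come back from, a private registry
    docker build -t registry.example.com/team/app:42 .
    docker push registry.example.com/team/app:42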

You can only configure Docker to use a single upstream cache, so if you do, that cache has to handle the authenticated requests as well as the public ones, because everything goes through it - which means injecting your private Docker credentials into the cache. Typically, your CI task runs with credentials that only give it access to the images for that particular repository - now it has access to everything.
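The daemon-level knob for this is the registry mirror setting, roughly along these lines (the cache hostname is hypothetical):

    # /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://registry-cache.internal:5000"]
    }

    # the daemon then tries that one mirror first for its pulls

There is no per-registry or per-credential split here, which is why the cache ends up in the path of your private pulls too.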

Then there's the matter of local Docker caches. Your CI itself is probably running inside Docker (Docker in Docker). If you expose the host machine's Docker daemon to the containers running your CI tasks, then any private images pulled or built by one task are available to every other CI job running on the same host. That's a security nightmare, so in most cases you run without the host daemon being visible, and each job therefore starts with a completely empty image cache.
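A rough sketch of the two options (the runner image name is hypothetical):

    # Option 1: share the host daemon - a warm shared cache, but every job
    # can see and reuse every image on the host, including private ones
    docker run -v /var/run/docker.sock:/var/run/docker.sock ci-runner:latest

    # Option 2: isolated Docker-in-Docker - each job gets its own daemon,
    # so nothing leaks between jobs, but the image cache starts empty every time
    docker run --privileged docker:dind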

See https://github.com/tiangolo/docker-registry-proxy#docker-itself-should-provide-this. It could be easier, but Docker wants to push people into using them as their private registry (in which case everything works great). Don't want to use Docker as your private registry? Harder. Restricting public pulls, when they've engineered things so that caching public pulls is hard, is IMO a cynical drive to get people onto their private registry services.
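One partial workaround, if all you need is to cache the public Docker Hub pulls, is the stock registry image run as a pull-through cache and then pointed at via registry-mirrors as above; a sketch:

    # run the official registry image as a pull-through cache for Docker Hub
    docker run -d -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      --name hub-cache registry:2

Note this proxies exactly one upstream (Docker Hub here), which is roughly the limitation the linked page complains about.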
