[docker] optimise distributed builds with remote docker cache
Almost all organisations have to deal with distributed builds, whatever the chosen CI tool is.
Jenkins, one of the most used and versatile CI tools, is usually configured to distribute builds across a fleet of worker nodes.
Focusing on Docker builds, each of these Jenkins workers ends up holding its own local copy of the Docker cache.
This usually leads to disk saturation and cleanup jobs, at the cost of a degraded developer experience.
One solution is to use Docker BuildKit, which is the default builder on Docker Desktop and, as of version 23.0, on Docker Engine.
So basically we are able to centralize the Docker cache storage in a remote (OCI) registry or in blob storage (e.g. S3), even if the latter backend is not released yet.
🤩 This will immediately improve the developer experience and reduce the overall disk usage of the CI tool's worker nodes.
🙃 The disadvantage worth mentioning is of course the increase in network traffic; since most infrastructures run in cloud environments, check whether it will generate unwanted billing.
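As a rough sketch of what this looks like in practice: exporting cache to a registry requires a builder backed by the `docker-container` driver (the default `docker` driver cannot push cache to a registry). The builder name and registry address below are placeholders, not values from this article.

```shell
# Create and select a builder using the docker-container driver
# ("ci-builder" is a hypothetical name).
docker buildx create --name ci-builder --driver docker-container --use

# Build on any CI worker, pulling and pushing layer cache from a shared
# registry (registry.example.com is a placeholder for your own registry).
docker buildx build \
  --cache-from type=registry,ref=registry.example.com/myapp/cache \
  --cache-to type=registry,ref=registry.example.com/myapp/cache,mode=max \
  --tag registry.example.com/myapp:latest \
  --push .
```

With `mode=max`, cache is exported for all layers of all intermediate stages, not just the layers of the final image, which maximises cache hits across workers at the cost of a larger cache artifact.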
Buildx supports the following cache storage backends:
- `inline`: embeds the build cache into the image. The inline cache gets pushed to the same location as the main output result. This only works with the `image` exporter.
- `registry`: embeds the build cache into a separate image, and pushes it to a dedicated location separate from the main output.
- `local`: writes the build cache to a local directory on the filesystem.
- `gha`: uploads the build cache to GitHub Actions cache (beta).
- `s3`: uploads the build cache to an AWS S3 bucket (unreleased).
- `azblob`: uploads the build cache to Azure Blob Storage (unreleased).
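Two of these backends sketched as commands, with placeholder registry, bucket, and image names (none of them come from this article):

```shell
# inline: cache metadata is embedded in the pushed image itself, so no
# separate cache artifact is needed (only minimal cache is supported).
docker buildx build \
  --cache-to type=inline \
  --cache-from type=registry,ref=registry.example.com/myapp:latest \
  --tag registry.example.com/myapp:latest \
  --push .

# s3 (unreleased at the time of writing): region, bucket, and name
# are placeholders for your own AWS setup.
docker buildx build \
  --cache-from type=s3,region=eu-west-1,bucket=my-buildx-cache,name=myapp \
  --cache-to type=s3,region=eu-west-1,bucket=my-buildx-cache,name=myapp \
  --tag myapp:latest .
```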
Official doc 👇