Reducing Docker image sizes and build times can lead to several benefits: lower storage and bandwidth costs, faster deployments, more efficient CI/CD pipelines, improved developer experience, and more.

While reducing storage and shaving a few seconds off the build time of a single image might not sound significant, it quickly adds up when you're building, releasing, and pulling images frequently and at scale.

Understanding the basics of Docker layers and multi-stage builds alone won't take you all the way to creating truly optimized images, but it is a good first step and can lead to huge improvements.

Layers

Each Dockerfile instruction that modifies the file system (RUN, COPY, ADD) results in something called a "layer." A layer is an immutable recording of the file system changes introduced by its corresponding instruction. When Docker builds an image, these layers are stacked on top of each other to form the final image; when you run a container, Docker simply adds a thin writable layer on top.

This design enables a lot of interesting functionality. Layers can be cached and reused, containers can make filesystem changes without affecting the original image, and you can run multiple containers from the same underlying image.

We'll touch on two things related to layers in this post that can lead to significant improvements:

  1. Why the order of instructions matters
  2. Why what you do in each instruction matters

Why the order of instructions matters

Docker's build cache works by computing a unique hash for each layer. This hash is generated from the content of the instruction and the context in which it runs, such as the previous layers and the metadata of any files being added or copied.

When Docker builds an image, it steps through each instruction and checks whether an existing layer matches the hash. If it finds a match, Docker reuses the cached layer instead of executing the instruction again. If it doesn't find a match, the instruction (and every instruction after it) is executed as normal.
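To build intuition for why a single change cascades down the whole Dockerfile, here's a conceptual sketch in plain shell. The hashing scheme below is made up for illustration; it is not Docker's actual implementation, only the chaining idea:

```shell
# Conceptual sketch only: NOT Docker's real cache-key algorithm, just an
# illustration of the chaining. A layer's key depends on both the
# instruction text and the parent layer's key.
parent=$(printf 'FROM alpine' | sha256sum | cut -d ' ' -f 1)

# Key for "RUN mkdir /test" on top of the base layer
hash_test=$(printf '%s RUN mkdir /test' "$parent" | sha256sum | cut -d ' ' -f 1)

# Key for "RUN mkdir /party" on the same base layer
hash_party=$(printf '%s RUN mkdir /party' "$parent" | sha256sum | cut -d ' ' -f 1)

# The keys differ, and since the next instruction's key would include
# this one, every layer after the change misses the cache as well.
echo "$hash_test"
echo "$hash_party"
```

Because each key feeds into the next, one changed instruction invalidates every layer below it, no matter how unchanged those instructions look.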

We can illustrate how this works with an example. Create the following Dockerfile:

FROM alpine

RUN mkdir /test
RUN sleep 5

You can use time to measure how long it takes to build the image:

time docker build -t test .

On my computer the build completes in about six seconds (the 5-second sleep plus some overhead). Your results should be roughly similar.

If you build it again without modifying the Dockerfile, you should notice a big difference: on my computer the second build took about 50 ms. Thanks, cache!

Now let’s try modifying an instruction. Change the name of the directory that we're creating:

FROM alpine

RUN mkdir /party
RUN sleep 5

After updating, time the build process again:

time docker build -t test .

We're back to around six seconds. What happened?

Docker ran into an instruction that had been updated and thus couldn't find a cached layer matching its hash. That's fine and expected. But why did it run the sleep command again when it hadn't changed? Remember, the hash isn't generated from the instruction alone: the context (in this case, the previous layers) is included too. Docker effectively stops looking for cached layers once it has encountered an instruction that wasn't cached.

Imagine a scenario where we have to change the name of that directory frequently. We'd have to wait for the sleep command to finish every single time.

How can we improve the situation? Well, since the sleep instruction doesn't depend on the name of the directory, or even on its existence, we can safely move it up one line:

FROM alpine

RUN sleep 5
RUN mkdir /party

A small change, but because the sleep instruction now comes before the instruction that's being updated, Docker can reuse the cached layer even when the directory name changes.

You can leverage this and place instructions that are prone to change as far down in the Dockerfile as possible to significantly reduce the time required to build the image.
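In a real project this usually means copying dependency manifests and installing dependencies before copying the application source, since the source changes far more often than the dependencies. A sketch for a hypothetical Node.js app (the file names and start command are assumptions):

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Dependencies change rarely: copy only the manifests and install first,
# so this layer stays cached across most builds
COPY package.json package-lock.json ./
RUN npm ci

# Source changes often: copying it last means an edit only invalidates
# the layers from this point down
COPY . .
CMD ["node", "index.js"]
```

With this ordering, an everyday source edit skips straight past the npm ci layer instead of re-downloading every dependency.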

Why what you do in each instruction matters

Remember how each instruction results in a layer and the image is the result of stacking these layers on top of each other? Well, that can lead to surprising results.

Let’s create an example to illustrate how this stacking affects image size. We'll start with an empty Dockerfile to establish a baseline:

FROM alpine

Build the image and use docker image ls to see how big it is:

docker build -t test .
docker image ls test

About 5.5 MB on my machine at the time of writing. Now that we've established the baseline size of the image, let's add some instructions to our Dockerfile:

FROM alpine
RUN dd if=/dev/zero of=/file bs=1M count=5
RUN rm /file

We use dd to create a 5 MB file and then, in the next instruction, remove it. Let's build the image and check the size:

docker build -t test .
docker image ls test

Almost 11 MB on my machine. How can that be? We removed the file!

It is because of how the layers stack. The dd instruction creates a layer that is ~5 MB, and the rm instruction creates another layer that is ~0 MB and merely marks the file as deleted; the file's data still lives in the first layer. The total size of the image is roughly the baseline plus the size of each layer.

One way to prevent this from happening is to delete the file in the same instruction as the one that creates the file:

FROM alpine
RUN dd if=/dev/zero of=/file bs=1M count=5 && rm /file

This Dockerfile results in a single layer that is ~0 MB in size, since no disk space is in use once the entire instruction has completed.
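The same idea applies to real-world package installs: fetch, use, and remove temporary files within a single RUN so they never land in a layer. A sketch using Alpine's package manager, with a trivial compiled program as a stand-in for a real build:

```dockerfile
FROM alpine

# Install the toolchain, compile, and remove the toolchain in one RUN;
# because it all happens in a single instruction, the build tools never
# persist into any layer of the image
RUN apk add --no-cache build-base \
 && printf 'int main(void){return 0;}\n' > /tmp/app.c \
 && gcc -o /usr/local/bin/app /tmp/app.c \
 && apk del build-base \
 && rm /tmp/app.c
```

Had the apk add and apk del been separate RUN instructions, the full toolchain would remain baked into the first layer just like the dd file above.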

Final image size is often handled with multi-stage builds (see the next section), but optimizing layer size still matters: it reduces the disk space needed during builds, shrinks the cache, and improves overall build speed.

Multi-stage builds

Multi-stage builds in Docker help you create smaller, more efficient images by using multiple FROM instructions in a single Dockerfile. Each FROM instruction initializes a new stage, allowing you to copy only the necessary parts from one stage to another. This approach is particularly useful for building complex applications, as it lets you separate the build environment from the final runtime environment.

With multi-stage builds, you can build your application in one stage and then copy the relevant artifacts into a minimal image in a later stage. This keeps your final image clean and lightweight, including only what's needed to run the application.

Let’s illustrate with an example:

FROM alpine

# Generate a 5 MB build file and a 1 MB artifact
RUN dd if=/dev/zero of=/build bs=1M count=5
RUN dd if=/dev/zero of=/artifact bs=1M count=1

In the example above we’re using dd to generate two files: a 5 MB file representing some stuff required to build the project and a 1 MB file representing the artifact. Let’s build it and check the size of the resulting image:

docker build -t test .
docker image ls test

On my machine that results in a ~12 MB image. A large part of that is the /build file. We can optimize the image by creating a multi-stage Dockerfile instead:

# Stage 1: Build stage
FROM alpine AS build

# Generate a 5 MB build file and a 1 MB artifact
RUN dd if=/dev/zero of=/build bs=1M count=5
RUN dd if=/dev/zero of=/artifact bs=1M count=1

# Stage 2: Final stage
FROM alpine

# Copy the 1 MB artifact from the build stage
COPY --from=build /artifact /artifact

Building the image again results in a ~6.5 MB image on my machine. The final image only contains the parts we explicitly copied and is a lot leaner than the previous version.
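A more realistic version of the same pattern, sketched as a hypothetical Go service. The module layout, package path, and Go version here are assumptions for illustration, not a prescription:

```dockerfile
# Stage 1: build with the full Go toolchain (hypothetical app layout)
FROM golang:1.22-alpine AS build
WORKDIR /src

# Copy dependency manifests first so this layer caches well
COPY go.mod go.sum ./
RUN go mod download

# Copy the source and compile a static binary
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: the runtime image contains only the compiled binary
FROM alpine
COPY --from=build /app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

The toolchain, module cache, and source tree all stay behind in the build stage; only the binary crosses over, which is what keeps the final image small.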

Summary

If you plan your layers and use multi-stage builds you’ll be able to create much more efficient Docker images.
