
Caching, Parallelism ♥ Docker multi-stage builds

Most of the resources I ran across while reading about multi-stage builds tout the benefit of smaller images. While that is a great feature, I’ve benefited from other, possibly underrated, side-effects of multi-stage builds: caching and parallelism. In my opinion[1], these two offer a much better user experience during quick experimentation cycles. A lot of this advice depends heavily on context, so apply it selectively[1].

Background

Skip this part if you already know what multi-stage builds are.

Every Dockerfile needs a FROM instruction, and it can reference images that exist only locally. For example, say I have the following Dockerfile, which I call “base”:

# Dockerfile.base
FROM debian:stretch-slim
❯ docker build -t base -f Dockerfile.base .

which may be used in other dockerfiles:

# Dockerfile.helloworld
FROM base

CMD ["echo", "'hello world'"]
❯ docker build -t hello -f Dockerfile.helloworld .

❯ docker run --rm hello
# => 'hello world'

There are two main points of interest:

  1. Since Dockerfile.helloworld depends on Dockerfile.base, every time the base is rebuilt (with some changes), the helloworld image also needs to be rebuilt.

  2. Opposite, but similar, logic applies when only the helloworld image has to be rebuilt: there’s no need to build the base image if it already exists.
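Both points can be seen from the shell; this is a sketch using the image names from the example above (the second `docker build` of hello is expected to be mostly cached when base is unchanged):

```shell
# Point 1: rebuild the base image after a change; "hello" is now stale
# and must be rebuilt to pick up the new base layers.
docker build -t base -f Dockerfile.base .
docker build -t hello -f Dockerfile.helloworld .

# Point 2: rebuild only the child. Since "base" already exists locally,
# FROM base resolves against it and nothing upstream is rebuilt.
docker build -t hello -f Dockerfile.helloworld .
```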

And multi-stage builds are, at a fundamental level, similar to having multiple smaller base images[2]. Which brings me to what I consider the best advantages of multi-stage builds: caching and parallelism.

Caching

Consider the following Dockerfile. The objective is to install the Go and Rust toolchains, chosen so that the Dockerfile doesn’t end up too long or complicated[3]. The built image will be used on a CI server as the test/build environment for our fancy app[4].

# Dockerfile.gorust

FROM debian:9.11-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        curl \
        ca-certificates

RUN curl --proto '=https' --tlsv1.2 -sSf \
            https://sh.rustup.rs > rustup.sh \
        && bash rustup.sh -y --profile minimal

RUN curl --proto '=https' -sSf \
            https://dl.google.com/go/go1.13.7.linux-amd64.tar.gz > go.tar.gz \
        && tar -C /usr/local -xzf go.tar.gz

ENV PATH "/usr/local/go/bin:/root/.cargo/bin:$PATH"

First build:

❯ docker build -t gr .
[+] Building 204.9s (8/8) FINISHED
 => [internal] load build definition from Dockerfile                                               0.1s
 => => transferring dockerfile: 534B                                                               0.0s
 => [internal] load .dockerignore                                                                  0.1s
 => => transferring context: 2B                                                                    0.0s
 => [internal] load metadata for docker.io/library/debian:9.11-slim                                4.0s
 => [1/4] FROM docker.io/library/debian:9.11-slim@sha256:412600646303027909c65847af62841e6a08529b  0.0s
 => CACHED [2/4] RUN apt-get update     && apt-get install -y --no-install-recommends         cur  0.0s
 => [3/4] RUN curl --proto '=https' --tlsv1.2 -sSf             https://sh.rustup.rs > rustup.sh  113.9s
 => [4/4] RUN curl --proto '=https' -sSf             https://dl.google.com/go/go1.13.7.linux-amd  67.1s
 => exporting to image                                                                            19.8s
 => => exporting layers                                                                           19.8s
 => => writing image sha256:0af25ec222574b0b4a7367e76f892b9f213efced4f01ebae0f79540b7dd9ff74       0.0s
 => => naming to docker.io/library/gr                                                              0.0s

Second run (without changing anything):

❯ docker build -t gr .
[+] Building 3.1s (8/8) FINISHED
 => [internal] load build definition from Dockerfile                                               0.1s
 => => transferring dockerfile: 534B                                                               0.0s
 => [internal] load .dockerignore                                                                  0.1s
 => => transferring context: 2B                                                                    0.0s
 => [internal] load metadata for docker.io/library/debian:9.11-slim                                2.9s
 => [1/4] FROM docker.io/library/debian:9.11-slim@sha256:412600646303027909c65847af62841e6a08529b  0.0s
 => CACHED [2/4] RUN apt-get update     && apt-get install -y --no-install-recommends         cur  0.0s
 => CACHED [3/4] RUN curl --proto '=https' --tlsv1.2 -sSf             https://sh.rustup.rs > rust  0.0s
 => CACHED [4/4] RUN curl --proto '=https' -sSf             https://dl.google.com/go/go1.13.7.lin  0.0s
 => exporting to image                                                                             0.0s
 => => exporting layers                                                                            0.0s
 => => writing image sha256:0af25ec222574b0b4a7367e76f892b9f213efced4f01ebae0f79540b7dd9ff74       0.0s
 => => naming to docker.io/library/gr                                                              0.0s

This is fine so far. Docker caches each of the layers (note the handy CACHED label in the output!). However, the moment you change even a single character, the cache gets busted for that layer and every layer after it, even when there’s no change in the actual script content. For instance:

FROM debian:9.11-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        curl \
        ca-certificates

RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs > rustup.sh \
        && bash rustup.sh -y --profile minimal

RUN curl --proto '=https' -sSf \
            https://dl.google.com/go/go1.13.7.linux-amd64.tar.gz > go.tar.gz \
        && tar -C /usr/local -xzf go.tar.gz

ENV PATH "/usr/local/go/bin:/root/.cargo/bin:$PATH"

I moved the URL in the rustup step up onto the same line as the curl command, and that busted the next step, where Go is installed. This is a trivial example, but it happens inadvertently in a lot of cases, and it is frustrating. Looking at the installers and knowing what they do, we can be sure that the two steps are independent of each other, but Docker can’t know that in this format.

Unless we use stages, that is. If the Go and Rust installer steps were in different stages, Docker could determine, right at the Dockerfile parse step, that they are independent. As long as each stage doesn’t directly use the artifacts of another, cache busting in one won’t affect the other.

The same Dockerfile can be written using multi-stage builds as:

FROM debian:9.11-slim as base

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        curl \
        ca-certificates

FROM base as rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs > rustup.sh \
        && bash rustup.sh -y --profile minimal

FROM base as go
RUN curl --proto '=https' -sSf \
            https://dl.google.com/go/go1.13.7.linux-amd64.tar.gz > go.tar.gz \
        && tar -C /usr/local -xzf go.tar.gz

FROM base

COPY --from=rust /root/.cargo/ /root/.cargo/
COPY --from=rust /root/.rustup/ /root/.rustup/
COPY --from=go /usr/local/go/ /usr/local/go/


ENV PATH "/usr/local/go/bin:/root/.cargo/bin:$PATH"

Gotta ❤️ simple toolchains[3][5]
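A handy side-benefit of naming the stages: any one of them can be built (and cached) in isolation via `docker build --target`, which is useful when iterating on a single toolchain. A sketch, using the stage names from the Dockerfile above (the `gr-rust`/`gr-go` tags are just illustrative):

```shell
# Build and tag only the rust stage; the go stage is skipped entirely.
docker build --target rust -t gr-rust .

# Same for go: debug one stage without paying for the rest.
docker build --target go -t gr-go .
```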

Parallelism

The separation of the Rust and Go steps also opens up the possibility of parallelization, which improves overall build times. This is a little hard to demonstrate using shell output, so I’ve put up a bunch of asciicasts showing the effect:
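One caveat: stage-level parallelism needs BuildKit (the `[+] Building` output above shows it is already in use here); the classic builder runs stages sequentially. On older Docker versions where BuildKit is not the default, it can be enabled per invocation:

```shell
# Enable BuildKit for this build; independent stages (rust, go)
# are then scheduled concurrently instead of one after the other.
DOCKER_BUILDKIT=1 docker build -t gr .
```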

Demo

  • Fresh docker build of the above Dockerfile: Asciicast, Dockerfile. Note that the Rust and Go build steps run in parallel.

  • Rerun of the same command: Asciicast, Dockerfile. As expected, the entire run gets cached.

  • Rebuild of the above Dockerfile with the positions of the Rust and Go sections interchanged[6]: Asciicast, Dockerfile. This build, too, is fully cached. Without multi-stage builds, this would not be the case, as shown in the example in the caching section.


Although I used modern, simpler installers for these examples, a similar strategy can be used for dynamic languages. There might be a post on that in the future…


  1. …mostly humble, but backed by some experience and context. The Docker images we use at work set up the environment for our multi-tenant Jenkins build servers. These Dockerfiles have lots of build steps for various languages, so rebuilds of the images tend to be slow unless optimized well. This makes for a 💩 experience when trying to change anything. 

  2. The major difference between multiple base images and multi-stage builds is everyone’s favourite feature: smaller images (using the COPY --from instruction). Achieving smaller images is also possible with multiple base images; that pattern has a name, the builder pattern, and is mentioned in the documentation for multi-stage builds. 

  3. In one of our work projects, I have to install Ruby and Python instead, which prompted this journey into trying to speed up the build. The steps involve compilation from source and the script is…complicated. 

  4. …that requires 3 languages at minimum to build artifacts that end up as a website at some point. SMH. 

  5. Compiling Ruby and Python, or installing them using the official Debian packages, results in a folder soup of targets that need to be COPY-ed from. That experience is painful, at best. 

  6. The diff of the files. 
