Multi-Architecture Docker Environments (Part 1)

The Problem

With Apple's not-so-recent move to its own chipsets in its laptops, and with it a change in processor architecture (from x86 to ARM), it is easy to find the members of a development team working on slightly different machines. My team, for example, has three developers: one is on an Intel-based (x86) MacBook Pro, I am on a MacBook Pro with an M1 (ARM v8.5), and our newest member is on a MacBook Pro with an M3 chip (ARM v8.6).

To add to this, we are running on x86-based compute nodes in production. For the team member on an x86 machine, this is no problem. I, however, have been relying on Docker's x86 emulation layer for ARM machines, which effectively let me use x86-built images on my ARM machine. Enter the issues. While onboarding our newest team member, the biggest problem we hit was that all of our containers were exiting almost immediately with errors that amounted to the host architecture not being supported (the errors were not so clear, but that is what they boiled down to).

We traced these issues to the emulation layer in Docker (or Rosetta on the Mac) either not working well for our images or not working at all on the new machine. We arrived at this conclusion by running with and without the emulation layer enabled, and by testing ad-hoc builds of the images targeting the ARM platform with Docker's --platform flag (e.g. docker build --platform linux/arm64/v8 .).

Working Out a Solution

It seemed like the path forward, then, was to continue providing builds targeting the arm64/v8 platform. This was doable manually by passing --platform linux/arm64/v8 to the docker build command, as seen above.
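For a local, single-architecture build, the right --platform value can be derived from the host itself. A minimal sketch, assuming the mapping below (it only covers the architectures our team actually uses):

```shell
#!/bin/sh
# Map the host's architecture (as reported by uname -m) to a Docker
# --platform value. Only the architectures on our team are covered.
docker_platform() {
  case "$1" in
    x86_64)        echo "linux/amd64" ;;
    arm64|aarch64) echo "linux/arm64/v8" ;;
    *)             echo "unknown architecture: $1" >&2; return 1 ;;
  esac
}

# e.g. docker build --platform "$(docker_platform "$(uname -m)")" .
docker_platform "$(uname -m)"
```

This only papers over the problem for local builds, of course; it still leaves us with one image per architecture.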

Ok, so we just build two images and push them to our registry as latest and latest-arm?

This isn't great. We now have two builds in the pipeline, and people's local setups differ a bit based on their host machine. In production we pin specific tags from the registry, so it's not a huge deal, right? Well, what if there were a better way? Or what if we wanted to take advantage of servers with a better cost/performance ratio, like AWS Graviton? What if we had a service that needed to run on an edge device, and that edge device needed to be ARM?

Enter Docker buildx. buildx uses a BuildKit backend to optimize both the build process and the build output, and it accepts multiple target platforms, letting us perform multi-architecture builds. It also allows you to define the output type in the build command, which means that by passing a single argument, we can now build and push in one step.

buildx still accepts a --platform flag, so we can still pass in our target architecture. But it can also take a comma-separated list of target platforms. So, for our use case, we now pass --platform linux/amd64,linux/arm64/v8 to the docker buildx build command.

Note that we can achieve the same solution manually using docker manifest. Jeremie Drouet has a great article on this topic that I highly recommend you read if you need to implement multi-architecture builds.
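For reference, here is a sketch of that manual route. The tag registry/app:latest is a placeholder, and the docker commands are commented out since they need a running daemon and registry access:

```shell
#!/bin/sh
# Manual multi-arch flow with docker manifest (sketch, not our pipeline).
# Derive one tag per architecture from a placeholder base tag:
base="registry/app:latest"
amd64_tag="${base}-amd64"
arm64_tag="${base}-arm64"
echo "$amd64_tag $arm64_tag"

# With a daemon and registry access, the flow would be: build and push
# one image per platform, then stitch them into a single manifest list.
# docker build --platform linux/amd64    -t "$amd64_tag" . && docker push "$amd64_tag"
# docker build --platform linux/arm64/v8 -t "$arm64_tag" . && docker push "$arm64_tag"
# docker manifest create "$base" "$amd64_tag" "$arm64_tag"
# docker manifest push "$base"
```

buildx collapses all of these steps into one command, which is why we went that way instead.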

Hiccups in the process

One of the issues we encountered in this process is that one of our services is built with a monolithic framework (Laravel, to be exact) and requires that npm dependencies be installed as part of the build process. We ran into issues getting npm install to execute in the ARM portion of the build. We tried a few things suggested on Stack Overflow and a few other forums, to no avail.

My next attempt at resolving this issue was moving from npm to fnm (Fast Node Manager). fnm is implemented in Rust, is a bit more performant than npm, and, as an upside, provides an explicit ARM build. This, however, didn't solve our problems, for various other reasons.

So, I opted to use a multi-stage build for our service. We use a node alpine image to install our dependencies, and then, in a later step, we copy those files over to the image we're building based on our multi-arch PHP FPM base server image (which is based on Debian slim). Our PHP service Dockerfile looks like this:

FROM node:20-alpine AS builder
ENV app_dir=/usr/src/app
RUN mkdir -p $app_dir
WORKDIR $app_dir

COPY . .
COPY .npmrc .npmrc

RUN npm install

# More processing in the builder image

# Build the actual image
FROM registry/php-fpm-base

# docker should automatically pass this arg for us
ARG TARGETPLATFORM

ENV workdir=/usr/share/nginx/html

WORKDIR $workdir
COPY . $workdir
COPY --from=builder /usr/src/app/node_modules/ ./node_modules/

# other setup
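The TARGETPLATFORM build argument is set automatically by buildx, once per platform in the list, so it can also be used to branch per architecture inside the Dockerfile. A hypothetical fragment (not from our actual Dockerfile) might look like:

```dockerfile
# Hypothetical: derive an arch-specific suffix from the platform
# currently being built; buildx sets TARGETPLATFORM for each one.
ARG TARGETPLATFORM
RUN case "$TARGETPLATFORM" in \
      "linux/amd64") arch_suffix="x64"   ;; \
      linux/arm64*)  arch_suffix="arm64" ;; \
      *) echo "unsupported platform: $TARGETPLATFORM" >&2; exit 1 ;; \
    esac \
 && echo "building for $arch_suffix"
```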

The next thing we had to be aware of is that our pipelines run in a docker-in-docker environment. So we want to create a Docker context to give a name to this "node".

docker context create multiarch-builder

And in order to isolate this build process and its dependencies, so as not to change the state of the shared daemon, we need to create a builder instance.

docker buildx create multiarch-builder --use

buildx also allows us to provide an output type, and one of the allowed types is registry. This is useful because if we define -o type=registry, our buildx build command will automatically push our images and manifest to the registry defined in our tag! (This whole step is essentially running two different Docker builds in parallel, then creating a manifest list and pushing all of it to the registry. More on the specifics can be found in Jeremie Drouet's article linked above and at the bottom of the page.)
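Once pushed, the result can be checked without pulling anything: docker buildx imagetools inspect prints the manifest list for a tag, one entry per platform (the tag below is a placeholder, as elsewhere in this post).

```shell
# Placeholder tag; requires access to the registry. Prints the manifest
# list, which should show both linux/amd64 and linux/arm64/v8 entries.
docker buildx imagetools inspect <registry>:<tag>
```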

docker buildx build -o type=registry -t <registry>:<tag> .

Final Look at a Solution

So, to put all of the pieces together:

  • We have a multi-architecture PHP, FPM, and nginx base image
  • Each PHP service uses this base image, adds necessary dependencies and a custom nginx config, and then executes its own multi-architecture build
  • We can now develop locally on multiple different architectures, scale our service to a diverse set of compute types based on what is cheapest and still meets our performance requirements, and potentially experiment with new ways to get our service closer to our consumers.

Our steps for our Docker build are to create the Docker context, create a builder instance in that context, and then execute our BuildKit build:

docker context create multiarch-builder
docker buildx create multiarch-builder --use
docker buildx build --compress --platform linux/amd64,linux/arm64/v8 -o type=registry -t <registry>:<tag> .

Feel free to reach out to me on social media and give me feedback or ask me questions.

Helpful Links