Using Docker for more on Apple Silicon

Recently I have been using my new M1 MacBook Air as my primary development machine. It sits alongside my Intel-based desktop, so when I started the ritual of installing my standard development tools I hit a bit of a snag.

Like many others I store all of my dotfiles on GitHub. This helps me get up and running on a server or new computer quickly. I can then execute a simple script to install my tools - except Homebrew is not fully ready for the M1... and many of the tools I need are not ready either.

I read through Sam Soffes's excellent post titled Homebrew on Apple Silicon and installed brew in a separate Rosetta terminal, but then I took a step back to think about whether this setup really suited my needs. I was on a laptop with a small hard drive and I honestly did not want to install a ton of different development tools and all of their different versions to support my projects.

I stopped and asked myself - how could I set up my system so that I didn't need to install all these tools in the first place?

I knew Docker was the current standard for packaging and distributing tools, and I used it daily for deploying code, but could I leverage it more? Could I develop a system that would let me edit on my local machine but run, debug, and test in containers?

Expert Docker Fu

In the past I had read about the crazy Docker fu from the likes of Jessie Frazelle, who runs Chrome, Spotify, and Skype in Docker with GUIs - but at the time I read those posts, the commands to spin them up felt a bit out of reach for me.

I had a feeling this might really be an ideal way to go about things, and over my winter holiday break I ran across a great blog post from Jonathan Bergknoff titled Run More Stuff in Docker.

After reading Jonathan's post and spending a little more time with the Docker documentation, I felt more confident that I could achieve what I wanted.

Here is what I learned along the way as I set up all of my development projects to run with tools that execute only in Docker - freeing me from installing any software via Homebrew, letting me edit in my native app of choice, and even letting me easily opt in to arm64-compatible tools when available.

Mounting your current directory into a container

One of the most interesting bits from Jonathan's blog post was how to mount your current directory into a container and set the container's working directory to that very same path. The example below takes your current directory, mounts it into your container of choice (in this case alpine), and sets the working directory. This feels like magic. It's like taking your existing directory and dropping it right into the container with no changes.

docker run -it -v "$(pwd)":"$(pwd)" -w "$(pwd)" alpine:latest

Side note: on the M1 this launches almost instantly, which is truly amazing.
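
If you find yourself typing those flags a lot, it is easy to wrap them in a small shell function. A minimal sketch for your shell config - the name dkr is just something I made up:

# Hypothetical wrapper: run any image with the current directory
# mounted and set as the working directory inside the container
dkr() {
    docker run -it --rm -v "$(pwd)":"$(pwd)" -w "$(pwd)" "$@"
}

# Example: a throwaway alpine shell rooted at the current directory
dkr alpine:latest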

Isolating tools and credentials

I am constantly switching between AWS, GCP and other cloud tools that have CLI clients with persistent configuration and credentials on my machine. I am always paranoid about running a command against the wrong environment and hate the thought of all my credentials being in some random directory managed by the tool itself.

Because of this I decided to create shell aliases for each environment. Each one points at its own configuration directory for credentials. This means each container has only the tools and the specific credentials it needs.

An example of this is a simple alpine container that has both Terraform and the Google Cloud CLI tools installed.

#!/usr/bin/env dockerfile-shebang
FROM alpine:3.8
ARG CLOUD_SDK_VERSION=321.0.0
ENV CLOUD_SDK_VERSION=$CLOUD_SDK_VERSION
ENV PATH /google-cloud-sdk/bin:$PATH

# Install a pinned Terraform from the Alpine edge community repository
RUN apk add terraform=0.14.3-r0 --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community

# Install wget and python, then download a pinned Google Cloud SDK
RUN apk --no-cache add \
        wget \
        python \
    && wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
    tar xzf google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
    rm google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
    ln -s /lib /lib64 && \
    gcloud config set core/disable_usage_reporting true && \
    gcloud --version

I can then launch this container with a unique local config directory to isolate the credentials and mount them for use by the CLI tools. In this case I'm taking credentials from a local directory at /opt/local/gcloud/{PROJECT_ID}/{ENVIRONMENT} and mounting it at /root/.config, which is where the gcloud CLI expects its config to be.

Note that when I first start this container that directory might be empty! Because it's a volume mapped to my local machine, as soon as I run gcloud auth login those credentials will be saved from the container right into my local directory.

docker run -it -v "/opt/local/gcloud/{PROJECT_ID}/{ENVIRONMENT}":"root/.config" \
               -v "$(pwd)":"$(pwd)" \
               -w "$(pwd)" \
               zsiegel/devops:latest
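
To make switching environments harder to get wrong, each one can get its own shell function. A rough sketch - the project and environment names here are placeholders of my own, and I use functions rather than aliases only because the commands are long:

# Hypothetical per-environment wrappers - one isolated
# credential directory each, same devops image for both
gcloud-myproject-staging() {
    docker run -it --rm \
        -v "/opt/local/gcloud/myproject/staging":"/root/.config" \
        -v "$(pwd)":"$(pwd)" -w "$(pwd)" \
        zsiegel/devops:latest
}

gcloud-myproject-prod() {
    docker run -it --rm \
        -v "/opt/local/gcloud/myproject/prod":"/root/.config" \
        -v "$(pwd)":"$(pwd)" -w "$(pwd)" \
        zsiegel/devops:latest
}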

Using Dockerfile-Shebang

If you read the Dockerfile above carefully, you will notice a shebang at the top. This looks a bit odd at first, but it makes your Dockerfiles executable so you can skip the separate build step that would otherwise be required for a custom image like the one above.

This also makes it incredibly easy to set up a Dockerfile for each project and have it run fully isolated. No more pulling down a project and installing its dependencies on your local machine. It can all be isolated in a container!

Jake Wharton has created a script to support this - it is available via Homebrew if that's your thing, or you can just drop it right into your PATH.

With dockerfile-shebang installed I can now isolate any tools I need inside a Dockerfile that is purpose-built for that project.

For example, I have a number of Node projects for static websites. In each of my projects I have a file called {PROJECT_NAME}.dockerfile at the root. Its contents might look like the example below.

#!/usr/bin/env dockerfile-shebang
FROM amd64/node:lts-slim

With this file in the root of my project I can then run the following to have a fully isolated project environment that can be edited on my local machine in my editor of choice.


# Make the dockerfile executable first - do this only once
chmod +x zsiegel.com.dockerfile

# Run the environment
./zsiegel.com.dockerfile -it -p 8000:8000 -v "$(pwd)":"$(pwd)" -w "$(pwd)" -- /bin/sh

# In the container
yarn install
yarn dev

The command above is straightforward and builds upon what I learned in the earlier sections. I run my Dockerfile, which is executable thanks to the shebang (note that the image will be built automatically if needed), expose a port so I can access the dev server locally, mount my current directory, set my working directory, and finally ask for a shell prompt.
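
The same pattern works for any stack, not just Node. A hypothetical Python project, for example, only needs a different base image:

#!/usr/bin/env dockerfile-shebang
FROM python:3.9-slim

# chmod +x myproject.dockerfile   (once)
# ./myproject.dockerfile -it -v "$(pwd)":"$(pwd)" -w "$(pwd)" -- /bin/bash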

Using AMD64 and ARM64

Earlier I mentioned that some tools run natively on Apple Silicon and others do not. Another benefit I have found is the ability to opt in to arm64-compiled images where available. Below you will see example Dockerfiles that specify different architectures.

Default Architecture - arm64 on Apple Silicon

#!/usr/bin/env dockerfile-shebang
FROM alpine:latest

# Inside the container:
# $ uname -sm
# Linux aarch64

This Dockerfile, without an architecture designation, will detect your current machine's architecture and use an appropriate image - in my case on Apple Silicon this will be arm64.

x64 Architecture

#!/usr/bin/env dockerfile-shebang
FROM amd64/node:lts-slim

# Inside the container:
# $ uname -sm
# Linux x86_64

This Dockerfile has the amd64 architecture designation in the image name, so the container will run under x86_64 emulation.
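
I believe you can also request an architecture per run with Docker's --platform flag, without baking it into the FROM line:

# Pick the architecture at run time instead of in the Dockerfile
docker run -it --rm --platform linux/amd64 alpine:latest uname -sm
# Linux x86_64

docker run -it --rm --platform linux/arm64 alpine:latest uname -sm
# Linux aarch64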

Tool Versioning

If you are a developer I am sure you know the pain of juggling multiple versions of Java, Node, Python, Ruby, and other tools. With the above tricks in hand I can now ensure all of my projects are isolated with exactly the software versions they need.

  • Does project X need an older version of Go? I will set up a Dockerfile with the right version (see the sketch after this list).

  • My new project Y can compile against the latest version of Java? Perfect, it's just another Dockerfile away.

  • I want to try out that new library but it requires some strange old version of something? No problem, just run it in a container!
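
As a sketch of that first bullet, a pinned-Go Dockerfile is only a couple of lines - the file name and version here are made up for illustration:

#!/usr/bin/env dockerfile-shebang
# Hypothetical: project X is stuck on an older Go toolchain
FROM golang:1.13-alpine

# chmod +x project-x.dockerfile   (once)
# ./project-x.dockerfile -it -v "$(pwd)":"$(pwd)" -w "$(pwd)" -- /bin/sh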

2021 Docker and Me

The newfound knowledge above has allowed me to really streamline my workflow and minimize the tools I have to install and manage on this new M1 MacBook Air. I hope to continue learning more about Docker and leveraging it to run my development tools on my local machine.