Docker Composing for Fun and Profit

Posted by Matt Farmer on February 20, 2016 · 6 mins read

Our application at Domino is complex, to say the least. I think one of the best accomplishments of the engineering team so far is that the interface into our product is deceptively simple. But the mechanics of getting that experience right, as you might imagine, require a lot of deep thought.

As our very own jobs page states: “This isn’t your run-of-the-mill CRUD app.” And that’s true.

One of the difficulties that comes out of this complexity is that our development environment has deviated a good deal from our production environment. Pretty much anyone on the team can boot the app, run some basic jobs, and generally tinker around with it. But there are a number of things that won’t work quite correctly. Or worse, they work, but very differently than they do in a live environment.

Luckily we have the capability to spin up a live environment pretty easily, but ideally we want to catch problems (and generally be able to develop against the product) without involving AWS whenever possible. It’s a lot quicker to save a file and reload the application locally than it is to generate artifacts and deploy them to a server.

So I decided to go on an adventure with docker-compose yesterday to see how close I could get us to that reality.

It turns out I got pretty damn close. As of today, I can run docker-compose up in our application and get an array of Docker containers that have all the crucial bits of our application. Along the way I learned some interesting bits that I thought I would share.

VirtualBox file sharing is not sbt’s friend

One of the most painful parts of setting this up was the abysmal performance of VirtualBox’s file sharing system. One of the nicest things about docker-machine is that it mounts /Users into the VirtualBox VM for you, so anything under /Users is directly mountable into a Docker container as if you were on a Linux machine. Unfortunately, routing every file access through VirtualBox’s shared folders causes all sorts of unpleasant performance problems.

To get around this, I picked a folder on the VirtualBox VM that I wanted all our files to go to, and wrote a script that executes rsync over an SSH connection to push file changes to the VM’s file system.

Using the environment variables that docker-machine sets for me (plus one that I added), I was able to make it pretty generic:

rsync --archive --rsh="ssh -i $DOCKER_CERT_PATH/id_rsa -o StrictHostKeyChecking=no" --exclude ".git" --exclude "*/target" $(pwd) docker@$DOCKER_HOST_IP:/

Executed from the root of my project, this packages up and uploads all the relevant files to the VirtualBox VM. Note that DOCKER_HOST_IP isn’t a default environment variable; it’s one I defined like so:

DOCKER_HOST_IP=$(docker-machine ip $DOCKER_MACHINE_NAME)

All the other references to the Docker host, like DOCKER_HOST itself, come with "tcp://…" and other garbage like that attached when, sometimes, you just want the bare IP address.
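For example, on a stock docker-machine VirtualBox setup (the address is the usual VirtualBox default; yours may differ):

```
$ echo $DOCKER_HOST
tcp://192.168.99.100:2376
$ docker-machine ip $DOCKER_MACHINE_NAME
192.168.99.100
```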

This pretty much eliminated all of my file-system-related performance issues, and it had the added bonus of putting an end to the ridiculous Play auto-reload that annoys me to no end. (I’m officially declaring my intent to burn our Play app to the ground and replace it with Lift, but that’s going to be an entirely different blog post.)

Give it More Juice!

In order to get things humming nicely I had to give the VirtualBox VM some more juice. I upped its memory allowance to 4GB and gave it 4 cores to play with. You are, after all, running sbt in there.
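If you want to do the same, here’s a rough sketch using VBoxManage; it assumes your docker-machine VM is named default, and the VM has to be stopped before you resize it:

```sh
# Stop the VM, resize it, then bring it back up.
docker-machine stop default
VBoxManage modifyvm default --memory 4096 --cpus 4
docker-machine start default
```

Alternatively, docker-machine create accepts --virtualbox-memory and --virtualbox-cpu-count flags if you’re creating the machine from scratch.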

Yo Dawg, I Heard You Like Docker

I’m using Docker to run an app that wants to use Docker. YO DAWG.

But seriously, nesting Dockers, while possible, probably isn’t the best idea for active development, because then you’ve got to shell into the container running your application to take a look at the containers it’s running and tinker around with them. It’s a much nicer experience to have all of that available from my OS X shell with my normal docker commands. So, I did just that.

It turns out docker-compose is quite clever. If, in the environment section of your docker-compose.yml file, you define an environment variable without giving it a value, docker-compose will pull the value from the currently running shell. So if you’re composing a container that has the docker CLI installed, you can very easily point that CLI at the very same Docker daemon that is running the container.

In the context of an entire compose file, that looks something like this (a minimal sketch; the service name and build context are placeholders rather than our actual configuration):
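```yaml
# docker-compose.yml (sketch). Valueless entries under "environment"
# are copied from the shell that runs docker-compose.
app:
  build: .
  environment:
    - DOCKER_HOST
    - DOCKER_TLS_VERIFY
    - DOCKER_CERT_PATH
  volumes:
    # Share the TLS certs at the same path the variables point to, so
    # the docker CLI inside the container can authenticate to the daemon.
    - ${DOCKER_CERT_PATH}:${DOCKER_CERT_PATH}
```

With that in place, running docker ps inside the app container and on the OS X host talks to the same daemon.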

Next Steps

For our situation at least, there’s a bit of duplication between what I’ve done to get our system running locally and the devops stack that deploys our servers. We’re going to look at de-duplicating some of that moving forward. The worst offender at the moment is a particular config file, more than 200 lines long, that has to be manually altered whenever someone wants to run this setup on their machine.

That aside though, I’m thrilled that I got to play with docker-compose a bit, and it has solved a very real problem: not having a realistic environment on OS X to test my code in.

Next: Read the follow-up to this post, Dockerizing Development at Domino, Part II