I’ve written about Continuous Integration at Kabisa before. Recently we’ve upgraded our CI environment to be even more awesome. Here’s how.
The old setup
In our old CI setup we had a single Jenkins master and multiple slaves. The slaves were provisioned using Puppet and contained all the dependencies required to run every project we had. As you can imagine, these slaves were quite heavyweight, since they ran all sorts of services: Postgres, MySQL, ElasticSearch, and so on. This becomes really painful once you have projects that require different versions of a service, or when you want to run multiple instances of the same job in parallel.
Over the last year Docker has sprung up as a technology for managing and running lightweight Linux containers. Containers can be booted in milliseconds and provide full isolation of the filesystem, network and processes. A couple of months ago I realised Docker could be the solution to the issues we'd been having with our old CI setup, and I started looking into ways to integrate it with Jenkins.
Integrating Jenkins with Docker
I started by looking at existing Jenkins plugins that would handle this, but wasn't pleased with the existing solutions. For example, the Jenkins Docker plugin requires Docker images running SSH and provisions those containers dynamically as Jenkins slaves, which is in my opinion needlessly complex. I also wanted to integrate Docker as seamlessly as possible, without requiring team members to do a lot of work setting things up.
The new setup
To accommodate our specific requirements I decided to integrate Jenkins and Docker using some custom scripting. The scripting takes care of building Docker containers during execution of the Jenkins job, running the tests inside the container, providing a way to cache dependencies like Rubygems, and cleaning up the container afterwards.
Each project now contains a CI Dockerfile that describes the environment required to run the project's tests, and the Jenkins job configuration merely contains an invocation of our scripting, which takes care of everything. An additional benefit of this approach is that each project now contains a simple definition of the environment required for development and test. You can use the CI Dockerfile just as easily on your local machine, without having to jump through hoops to set up your local environment. Changing CI servers also becomes much easier, since the Jenkins slaves only need to be able to run Docker containers and nothing else.
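The scripting itself isn't shown here, but the flow it implements can be sketched roughly as follows. Note that the names used (Dockerfile.ci, the ci/ image prefix, the /var/cache/ci-gems host directory and the rake invocation) are assumptions for illustration, not the actual Kabisa scripts:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the CI wrapper: build the project's CI image,
# run the tests in a throwaway container, and reuse a shared gem cache.
set -euo pipefail

# Derive a valid image tag from the Jenkins job name
# (Docker image names must be lowercase).
ci_image_tag() {
    echo "ci/$(echo "$1" | tr '[:upper:]' '[:lower:]')"
}

run_ci() {
    local image
    image=$(ci_image_tag "${JOB_NAME:-local}")

    # Build the image from the CI Dockerfile checked into the project.
    docker build -t "$image" -f Dockerfile.ci .

    # --rm cleans the container up afterwards; the /cache volume persists
    # installed Rubygems between builds so bundle install stays fast.
    docker run --rm \
        -v "$PWD:/workspace" \
        -v /var/cache/ci-gems:/cache \
        "$image" \
        bash -c '$CONTAINER_INIT; cd /workspace && bundle install && bundle exec rake'
}
```

Because everything lives in the repository's Dockerfile and this thin wrapper, the Jenkins job configuration shrinks to a single shell invocation.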
With all this in place, a typical Dockerfile for a Rails app with a Postgres database and RSpec and Cucumber suites looks something like this:
RUN apt-get update && apt-get -y install \
curl libssl-dev \
zlib1g zlib1g-dev \
libxslt-dev libxml2-dev \
xvfb nodejs-legacy \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV CONTAINER_INIT /usr/local/bin/init-container
RUN echo '#!/usr/bin/env bash' > $CONTAINER_INIT ; chmod +x $CONTAINER_INIT
RUN sed -i 's/md5\|peer/trust/' /etc/postgresql/*/main/pg_hba.conf
RUN echo 'service postgresql start' >> $CONTAINER_INIT
RUN gem install bundler
RUN bundle config --global path /cache/
RUN echo 'bundle config --global jobs $(cat /proc/cpuinfo | grep -c processor)' >> $CONTAINER_INIT
RUN gem install rubygems-update && update_rubygems
ENV BUNDLE_GEMFILE /workspace/Gemfile
RUN echo 'Xvfb :0 -ac -screen 0 1024x768x24 >/dev/null 2>&1 &' >> $CONTAINER_INIT
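Pieced together, the RUN echo lines above leave an init script at /usr/local/bin/init-container inside the image. Running it at the start of every build boots the services the tests depend on; its contents end up roughly as:

```shell
#!/usr/bin/env bash
# /usr/local/bin/init-container, as assembled by the Dockerfile above.
service postgresql start
bundle config --global jobs $(grep -c processor /proc/cpuinfo)
Xvfb :0 -ac -screen 0 1024x768x24 >/dev/null 2>&1 &
```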
If you would like a setup like this yourself, you can get started with our scripting, which we've published on GitHub.