As you know, I’ve started experimenting with Docker. One of the cool use cases we wanted to try was provisioning Jenkins slaves through a Docker host.
Jenkins is our continuous integration server. It runs many different jobs to build and test the various parts of the platform. These jobs usually run on dedicated slave nodes, with different physical machines for each, but sometimes we need more nodes. We already use Amazon EC2 instances to provision extra nodes, but all the setup required is of course completely specific to Amazon. It would be nice to have a more ‘generic’ setup.
This is where the Jenkins Docker plugin comes in handy. You can use it to provision Jenkins slaves on any Docker host, and of course you can install Docker on Amazon EC2 or anywhere else. There are already several Ansible playbooks that can help you install Docker on remote machines. Just make sure that Docker listens on TCP with an option like -H tcp://127.0.0.1:4243 (if Jenkins runs on a different machine, bind to an address it can actually reach, such as 0.0.0.0). This is what Jenkins will use to manage the different images and containers.
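On a Debian/Ubuntu-style install, that option typically goes in /etc/default/docker; the exact file and flags depend on your distribution and Docker version, so treat this as a sketch:

```shell
# /etc/default/docker -- illustrative; adjust for your init system.
# Listen on TCP port 4243 for Jenkins and keep the local Unix socket working.
DOCKER_OPTS="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"
```

After changing it, restart the Docker service so the daemon picks up the new listen address.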
To set up your host(s), go to the Jenkins configuration page and look for the Cloud section. The Docker option will be available along with other cloud providers like EC2, JClouds, VirtualBox, etc. If you select Docker, you’ll first be asked for a name and a URL. My Docker host is on a machine called docker and the TCP port is 4243.
My configuration looks like this:

[Screenshot: the Docker cloud configuration, with the name and URL filled in]
Once you have set up a host, you can set up different images to be used as Jenkins slaves.
Here’s the list of parameters:
- ID - the ID of the image you want to use
- Labels - the label used to identify your node
- Credentials - the credentials used to connect to the Docker container through SSH
- Remote File System Root - the home folder of the user used by Jenkins
- Tag on Completion - if true, this creates a Docker image for each build you run, using the name of the job as the repository ID and the build number as the tag
- Instance Cap - how many instances you want to run at the same time
- DNS - the DNS server to use in your Docker image
If you have been following closely, you will have realized that the Docker image you use needs at least SSH installed, as well as a user to log in with. To build my image, I adapted an Ansible playbook that we use to build the EC2 images for our Jenkins slaves, but it’s really easy to build one yourself. Just take a look at the plugin page, where the author explains everything you need to do.
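For reference, a minimal image along those lines could be described like this. The base image, user name, and password here are illustrative placeholders, not the ones from my playbook; check the plugin page for the exact requirements:

```dockerfile
# Sketch of a Jenkins slave image: sshd for Jenkins to connect,
# a JDK so the slave agent can run, and a user to log in as.
FROM ubuntu:12.04

RUN apt-get update && apt-get install -y openssh-server openjdk-7-jdk
RUN mkdir -p /var/run/sshd

# The home folder of this user is the "Remote File System Root"
# you configure in the plugin; 'jenkins:jenkins' is a placeholder password.
RUN useradd -m -s /bin/bash jenkins && echo 'jenkins:jenkins' | chpasswd

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```

The credentials you create in Jenkins then simply match the user baked into the image.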
All of this makes it really easy to have Jenkins slaves on demand on any host you want, as long as Docker is installed on it. And again, Ansible can be really helpful to put Docker on any remote machine. This way you get a ‘generic’ on-demand slave setup. But this is not the only reason I find using Docker appealing.
One of the issues we have with EC2 is that the instances we use are automatically removed when the job is done. Sometimes we would like to use them again to get a better look at what went wrong in a failed job. If you have ticked the Tag on Completion box in your configuration, all job results are still accessible. Try running docker images on your Docker host. You will see a list of images with the name of the job as REPOSITORY and the build number as TAG:
REPOSITORY     TAG   IMAGE ID       CREATED             VIRTUAL SIZE
nuxeo-master   #55   e230007c2a40   51 minutes ago      2.506 GB
nuxeo-master   #54   42a7f8af7d50   52 minutes ago      2.506 GB
nuxeo-master   #53   680336ec5a9d   About an hour ago   2.506 GB
nuxeo-master   #52   c611cc87e810   About an hour ago   2.506 GB
docker-test    #22   4c4e883e76f0   About an hour ago   2.491 GB
docker-test    #21   9c8c393f8ebb   About an hour ago   2.491 GB
...
It means that you can now run something like docker run -t -i -P nuxeo-master:#55 /bin/bash. This will open a bash session on the image that was used to run the job and tagged at the end of it. If you look at the previous blog post I wrote about Docker, it gets even better: if you have set up something like VNC on your image, you can run the image as a daemon and then connect to it with your VNC client. But that’s only if you’re not comfortable enough with a bash session :-)
Now all of this is really neat, but there is one small thing missing for me. Right now all the images are kept indefinitely. This can take a lot of disk space pretty quickly so it would be nice to be able to say something like ‘keep only the 10 latest images of the same job’. Of course you can always have a script running on your Docker host to do the cleanup but it requires setting up additional configuration on the host.
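Pending such a feature, here is a minimal sketch of what that cleanup script could look like, assuming the repository/#build-number naming shown above. The stale_images helper is a name I made up; it only computes which tags fall beyond the newest N builds of each job, and the actual docker rmi call is left commented out so you can inspect its output first:

```shell
#!/bin/sh
# Sketch: keep only the N newest build tags per job on the Docker host.

# Read "repository tag" pairs on stdin (e.g. "nuxeo-master #55") and print
# the repository:#tag of every image past the newest N builds of each job.
# N is the first argument and defaults to 10.
stale_images() {
    keep=${1:-10}
    # Strip '#', sort newest build first within each job, then print
    # everything past the first $keep entries of each repository.
    tr -d '#' | sort -k1,1 -k2,2nr | awk -v keep="$keep" '
        $1 != prev { prev = $1; n = 0 }
        { n++; if (n > keep) printf "%s:#%s\n", $1, $2 }'
}

# On the Docker host you would feed it live data, then remove the results:
# docker images | awk 'NR > 1 {print $1, $2}' | stale_images 10 |
#     xargs -r -n 1 docker rmi
```

You would still need to run it from cron on the host, which is exactly the extra configuration I would rather avoid, but it does the job in the meantime.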
Keep in mind that this is only the first version of the plugin. I guess many other cool features are coming.