Installing Kubernetes: Moving from Physical Servers to Containers


Thu 03 March 2016 By Rémi Cattiau

For years, I had physical servers running basic services such as Apache, Bind, MySQL, and PHP. Once in a while I would migrate a server because another provider offered a better price. Updating can be hard when multiple websites are hosted on the same server, because some of them rely on specific versions of PHP. Because of these issues, I often ended up keeping some servers as they were to avoid the pain of migration. This scenario might seem familiar to most of you!

I have been following the Docker ecosystem for a while, and I knew my problems would be solved if I migrated to containers. That was when I migrated from 3 Gentoo servers hosted in France to 2 Debian servers hosted in the US and 1 Gentoo server hosted in France.

Today, I will explain how to install Docker and manage the containers using Kubernetes.

There are several systems for managing containers: Amazon EC2 Container Service, Rancher, Kubernetes, etc. I chose Kubernetes because it can be installed in several environments, so you won't be stuck with one provider. At Nuxeo, we use Kubernetes too.

Installing Docker


The first step is pretty easy.

Installing Docker on Debian:

apt-get install docker.io

Installing Docker on Gentoo:

emerge -v docker
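
Once the package is installed, it is worth checking that the Docker daemon actually runs before going further (the hello-world image is just a throwaway test):

# Check that the Docker daemon answers and can run a container
docker info
docker run --rm hello-world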

The next step is to install Kubernetes.

Installing Kubernetes


If you use the simple single-node Docker installation procedure, you won't have a cluster and you will lose the advantages of Kubernetes.

The multi-node Docker setup is a better solution. It first uses a separate Docker instance to launch etcd and flannel: flannel enables the shared network between the nodes, and etcd allows Kubernetes to store and share its configuration.

[email protected]:/home/shared# etcdctl member list
eacd7f155934262: name=b5.loopingz.com peerURLs=http://91.121.82.118:2380 clientURLs=http://91.121.82.118:2379,http://91.121.82.118:4001
2f0f8b2f17fffe3c: name=c2.loopingz.com peerURLs=http://198.245.51.134:2380 clientURLs=http://198.245.51.134:2379,http://198.245.51.134:4001
88314cdfe9bc1797: name=default peerURLs=http://142.4.214.129:2380 clientURLs=http://142.4.214.129:2379,http://142.4.214.129:4001
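
Once every node has joined, etcd can also report on the overall health of the cluster:

# Verify that every etcd member is reachable and healthy
etcdctl cluster-health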

You now have two different networks:

  • 10.0.0.0/16 represents the Kubernetes services / load balancers

  • 10.1.0.0/16 is where the pods will be created; each node of the cluster will have a 10.1.xx.0/24 subnet for its pods

flannel.1 Link encap:Ethernet  HWaddr c2:67:be:06:2c:11
inet addr:10.1.72.0 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::c067:beff:fe06:2c11/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:5412360 errors:0 dropped:0 overruns:0 frame:0
TX packets:4174530 errors:0 dropped:21 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1651767659 (1.5 GiB) TX bytes:3453452284 (3.2 GiB)
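
On each node, flannel also writes the subnet it was allocated to an environment file, which is a quick way to see which 10.1.xx.0/24 range the local Docker daemon uses (values are node-specific; the path is flannel's default):

# Show the overlay network and the /24 allocated to this node
cat /run/flannel/subnet.env
# Typical content: FLANNEL_NETWORK=10.1.0.0/16, FLANNEL_SUBNET=10.1.72.1/24, FLANNEL_MTU=1450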

Installing GlusterFS


Depending on your pod configuration and the node selector, the containers can be created on any node of the cluster. This means you need to share storage between nodes. There are different solutions, such as Amazon EFS (still in beta at the time of writing), Google Cloud Storage, GlusterFS, etc.

GlusterFS is an open source solution for shared storage. It runs as a daemon and uses port 24007. In my next blog post, I will share more details on the GlusterFS installation and the benchmarks I ran.
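
I will cover the setup next time, but as a teaser, here is roughly how a GlusterFS volume ends up mounted on every node once the daemons are peered (the volume name gv-shared is an assumption; /home/shared is the shared path used throughout this post):

# Mount the shared GlusterFS volume (run on each node)
mount -t glusterfs b5.loopingz.com:/gv-shared /home/shared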

Installing Firewall Rules


To update the firewall on all the servers at once, I've created a small shell script that listens for changes on an etcd key and updates the firewall according to the rules and the cluster nodes.

Update the firewall configuration


The firewall is configured through a fw.conf file, shared on a GlusterFS volume:

#!/bin/sh
# Publish a new firewall configuration: copy it to the shared volume
# and store its hash in etcd so that every node notices the change.
MD5_TARGET=$(md5sum /home/shared/configs/firewall/fw.conf | awk '{print $1}')
MD5_NEW=$(md5sum fw.conf | awk '{print $1}')
if [ "$MD5_TARGET" = "$MD5_NEW" ]; then
  echo "No config change"
  exit 0
fi
cp fw.conf /home/shared/configs/firewall/
etcdctl set /cluster/firewall/update "$MD5_NEW"

On each host, a watcher loop calls curl with ?wait=true, which blocks until the value is changed by the script above (or the connection times out, in which case the hash check below simply does nothing). It then updates the firewall on the host:

#!/bin/sh
# Watch etcd for firewall changes and rebuild the local rules when the hash changes
FW_HASH=""
while :
do
  # Long-polling request: blocks until /cluster/firewall/update is modified
  curl -sL "http://127.0.0.1:4001/v2/keys/cluster/firewall/update?wait=true" > /dev/null
  NEW_HASH=$(etcdctl get /cluster/firewall/update)
  if [ "$NEW_HASH" != "$FW_HASH" ]; then
    echo "Updating the firewall"
    . /usr/local/bin/firewall_builder
    FW_HASH=$NEW_HASH
  fi
done

Installing Docker Repository


To store the container images to be used with Kubernetes, it is good to have your own container registry.

So let's use Kubernetes to deploy our first pod:

The default format is YAML, but I prefer JSON myself.

Here is docker-rc.json:

{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "docker-repository",
    "labels": {
      "app": "docker-repository",
      "version": "v1"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "app": "docker-repository",
      "version": "v1"
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "docker-repository",
          "version": "v1"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "config",
            "hostPath": { "path": "/home/shared/configs/docker" }
          },
          {
            "name": "data",
            "hostPath": { "path": "/home/shared/docker" }
          }
        ],
        "containers": [
          {
            "name": "registry",
            "image": "registry:2.2.1",
            "volumeMounts": [
              { "name": "config", "mountPath": "/etc/docker/" },
              { "name": "data", "mountPath": "/var/lib/registry" }
            ],
            "resources": {
              "limits": {
                "cpu": "100m",
                "memory": "50Mi"
              },
              "requests": {
                "cpu": "100m",
                "memory": "50Mi"
              }
            },
            "ports": [
              {
                "containerPort": 5000
              }
            ]
          }
        ]
      }
    }
  }
}

And here is docker-svc.json:

{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "docker-repository",
    "labels": {
      "app": "docker-repository"
    }
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": {
      "app": "docker-repository"
    },
    "clusterIP": "10.0.0.204",
    "ports": [
      {
        "protocol": "TCP",
        "port": 5000,
        "targetPort": 5000
      }
    ]
  }
}
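
With the two files in place, the registry can be deployed and checked with kubectl (the file names simply match the listings above):

# Create the ReplicationController and the Service, then check the result
kubectl create -f docker-rc.json
kubectl create -f docker-svc.json
kubectl get pods,services -l app=docker-repository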

Installing nginx Proxy


To be able to host several domains on your server, a reverse proxy is needed. Installing nginx is simple; just beware of a few headers that need to be added.

In my case, the first virtual host will be docker.loopingz.com:

server {
  listen 443 ssl;
  server_name docker.loopingz.com;

  access_log /var/log/nginx/docker.loopingz.com_access_log main;
  error_log /var/log/nginx/docker.loopingz.com_error_log info;

  client_max_body_size 0;
  chunked_transfer_encoding on;

  location / {
    include /etc/nginx/conf.d/dev-auth;
    proxy_pass http://10.0.0.204:5000;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;
  }

  ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!CAMELLIA;
  ssl_prefer_server_ciphers on;
  ssl_certificate /etc/letsencrypt/live/docker.loopingz.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/docker.loopingz.com/privkey.pem;
}

You can see from this file that we use the IP 10.0.0.204, which is the clusterIP defined in docker-svc.json. Kubernetes will route traffic sent to this IP to our Docker registry containers.
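
Once nginx has loaded this configuration, the registry should be reachable through the proxy. A quick way to verify it and push a first image could look like this (replace user:password with the credentials configured in the dev-auth include; the image name is just an example):

# Log in and list the repositories known to the registry
docker login docker.loopingz.com
curl -u user:password https://docker.loopingz.com/v2/_catalog

# Tag and push a local image to the private registry
docker tag nginx:latest docker.loopingz.com/nginx:latest
docker push docker.loopingz.com/nginx:latest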

I want nginx to be deployed on all the nodes of the cluster so that every node provides an HTTP/HTTPS entry point to the cluster. So let's define the ReplicationController for nginx:

{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "nginx",
    "labels": {
      "app": "nginx",
      "version": "v1"
    }
  },
  "spec": {
    "replicas": 3,
    "selector": {
      "app": "nginx",
      "version": "v1"
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "nginx",
          "version": "v1"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "config",
            "hostPath": { "path": "/home/shared/configs/nginx/" }
          },
          {
            "name": "logs",
            "hostPath": { "path": "/home/shared/logs/nginx/" }
          },
          {
            "name": "certs",
            "hostPath": { "path": "/home/shared/letsencrypt/" }
          },
          {
            "name": "static",
            "hostPath": { "path": "/home/shared/nginx/" }
          }
        ],
        "containers": [
          {
            "name": "nginx",
            "image": "nginx:latest",
            "volumeMounts": [
              { "name": "static", "readOnly": true, "mountPath": "/var/www/" },
              { "name": "certs", "readOnly": true, "mountPath": "/etc/letsencrypt" },
              { "name": "logs", "mountPath": "/var/log/nginx/" },
              { "name": "config", "readOnly": true, "mountPath": "/etc/nginx/conf.d/" }
            ],
            "resources": {
              "limits": {
                "cpu": "100m",
                "memory": "50Mi"
              },
              "requests": {
                "cpu": "100m",
                "memory": "50Mi"
              }
            },
            "ports": [
              {
                "containerPort": 80,
                "hostPort": 80
              },
              {
                "containerPort": 443,
                "hostPort": 443
              }
            ]
          }
        ]
      }
    }
  }
}
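
Because the container binds hostPort 80 and 443, Kubernetes can only schedule one of the three replicas per node, which gives exactly the one-nginx-per-node layout I want. Creating the controller and checking where the pods landed could look like this (the file name nginx-rc.json is my assumption):

kubectl create -f nginx-rc.json
# -o wide shows which node each nginx pod is running on
kubectl get pods -l app=nginx -o wide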

Let's Encrypt


To enable SSL on all the vhosts, I now use Let's Encrypt, which gives you a free SSL certificate valid for 3 months. The ordering is automated, so all the nginx hosts have this in their configuration:

location /.well-known/acme-challenge {
  add_header Content-Type application/jose+json always;
  root /etc/nginx/conf.d;
}
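
The certificates themselves are requested with the Let's Encrypt client in webroot mode. As a rough sketch (the webroot path here assumes the shared config directory that is mounted at /etc/nginx/conf.d in the nginx pods, and the exact options may vary with the client version):

# Request (or renew) a certificate using the webroot challenge
letsencrypt certonly --webroot -w /home/shared/configs/nginx -d docker.loopingz.com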

Now that you know all about installing Kubernetes, give it a try! Next, I will talk about benchmarking GlusterFS and creating new services, so stay tuned!


Tagged: Docker, GlusterFS, How to, kubernetes