A year ago, we started building the foundation for nuxeo.io, our PaaS approach for the Nuxeo Platform. Over the past year, we have accomplished a lot and, at the same time, we learned some invaluable lessons. I will walk you through our implementation process, the problems we faced, how we fixed them, our accomplishments, and the next steps for 2015.
The original plan for nuxeo.io was to use a Docker-based infrastructure across a set of hosts. For that, we chose the CoreOS distribution, which bundles simple tools to handle service discovery, task scheduling and, of course, Docker integration, among many others. We also decided to build our own dynamic reverse proxy: Gogeta. The global architecture looks like this diagram.
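The core idea behind Gogeta can be sketched in a few lines of Go. This is a minimal illustration, not Gogeta's actual code: the real proxy resolves backends from etcd, while here the registry is a plain in-memory map.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
)

// DynamicProxy resolves the backend for each request by its Host
// header. In the real system the mapping lives in etcd; here it is
// an in-memory map for illustration only.
type DynamicProxy struct {
	mu       sync.RWMutex
	backends map[string]*url.URL // domain -> backend URL
}

func NewDynamicProxy() *DynamicProxy {
	return &DynamicProxy{backends: make(map[string]*url.URL)}
}

// Register maps a public domain to a container's address.
func (p *DynamicProxy) Register(domain, backend string) error {
	u, err := url.Parse(backend)
	if err != nil {
		return err
	}
	p.mu.Lock()
	p.backends[domain] = u
	p.mu.Unlock()
	return nil
}

// Resolve returns the backend for a domain, or nil if unknown.
func (p *DynamicProxy) Resolve(domain string) *url.URL {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.backends[domain]
}

// ServeHTTP forwards the request to the resolved backend.
// In production this would be served with http.ListenAndServe(":80", proxy).
func (p *DynamicProxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	backend := p.Resolve(r.Host)
	if backend == nil {
		http.Error(w, "unknown service", http.StatusNotFound)
		return
	}
	httputil.NewSingleHostReverseProxy(backend).ServeHTTP(w, r)
}

func main() {
	proxy := NewDynamicProxy()
	proxy.Register("demo.example.io", "http://10.0.0.12:8080")
	fmt.Println(proxy.Resolve("demo.example.io").String())
}
```

The key property is that routing decisions happen per request, so a container can move to another host and traffic follows it as soon as the registry is updated.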
The first POC worked well, and we quickly had Nuxeo containers running on this infrastructure. You can take a look at the details in my previous blog post.
The AWS Stack as a Catalyst
Starting a Nuxeo app server in a Docker container is indeed quite easy, but managing it and making it usable for the end user is harder. At the beginning, our data was held inside the container, which we planned to change: if the container stops, the data in it is lost. One solution was to run a DB server in our cluster, but it is already tough to manage our own application in a cluster, and managing a clustered DB plus a clustered binary store was definitely not the best option. We therefore decided to rely on AWS services, such as S3 and Postgres RDS, to store our data. Each time a new nuxeo.io environment starts, we provision an S3 bucket and a dedicated DB schema for it. With this solution, it is much easier to handle the data lifecycle, and we are no longer affected when a host goes down.
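As a rough sketch of what per-environment provisioning involves, the following Go snippet builds the resources for a new environment: a bucket name and the SQL for a dedicated Postgres schema. All names, conventions, and statements here are illustrative assumptions, not nuxeo.io's actual code; the real implementation would also call the AWS APIs for S3 and RDS.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Environment names are validated before being used in SQL.
var envName = regexp.MustCompile(`^[a-z][a-z0-9_]{2,30}$`)

// ProvisionPlan describes the per-environment storage resources:
// one S3 bucket for binaries and one dedicated Postgres schema.
type ProvisionPlan struct {
	Bucket string
	SQL    []string
}

// PlanEnvironment computes the resources for a new environment.
// Bucket and role naming schemes are made up for this example.
func PlanEnvironment(env string) (*ProvisionPlan, error) {
	if !envName.MatchString(env) {
		return nil, fmt.Errorf("invalid environment name: %q", env)
	}
	user := "nuxeo_" + env
	return &ProvisionPlan{
		Bucket: "nuxeo-io-" + strings.ReplaceAll(env, "_", "-"),
		SQL: []string{
			fmt.Sprintf("CREATE ROLE %s LOGIN;", user),
			fmt.Sprintf("CREATE SCHEMA %s AUTHORIZATION %s;", env, user),
			fmt.Sprintf("ALTER ROLE %s SET search_path = %s;", user, env),
		},
	}, nil
}

func main() {
	plan, err := PlanEnvironment("acme_trial")
	if err != nil {
		panic(err)
	}
	fmt.Println(plan.Bucket)
	for _, stmt := range plan.SQL {
		fmt.Println(stmt)
	}
}
```

Because each environment gets its own schema and bucket, tearing one down or restoring it is a matter of dropping or recreating those two resources, independently of any container or host.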
We also use other pieces of the AWS infrastructure: VPC to launch our hosts in a private network, CloudFormation with an autoscaling policy to ensure horizontal scaling, and ELB to balance the HTTP load across all hosts in the private cluster.
Lastly, our containers are greedy, consuming several gigabytes of RAM each. Since this is not a micro-service architecture, launching hundreds of instances requires some big servers, and scaling would mean a lot of those servers, which is not a financially viable option. So, in combination with Gogeta, which logs the usage of the nuxeo.io instances, our passivator service stops unused environments and restarts passivated ones when they are accessed again. This way, we only need to run a few servers on AWS to handle hundreds of thousands of possible instances.
nuxeo.io went into production in August last year. The first use case we wanted to use nuxeo.io for was providing trial platforms for Nuxeo, so the last adjustments were made not only to the infrastructure but also to the application layer. This ensured that our customers' experience was pleasant and seamless, from registration to the first access of their Nuxeo Cloud instance login page.
Interesting challenges and how we addressed them
The last month of 2014 was not very calm for us. We experienced an outage of our Nuxeo Cloud trial accounts after a CoreOS upgrade. The CoreOS scheduler (fleet), which we hadn't updated fast enough, couldn't schedule the tasks we were launching correctly, causing every container to run on one host only. When this host went down, fleet scheduled all the jobs onto another host, putting a heavy load on it. The load was so intense that the host became unresponsive and was finally terminated by AWS.
The solution we adopted was to queue big jobs before starting them, so that when jobs are relocated, they are relocated one by one. Since the CoreOS upgrade, fleet does a good job of scheduling tasks based on the number of units already started on each host, which ensures that the load is shared across the whole cluster.
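One-by-one relocation can be enforced with a simple semaphore around container starts. Here is a minimal Go sketch of that queuing strategy, with illustrative names; the real system queues fleet unit starts rather than closures.

```go
package main

import (
	"fmt"
	"sync"
)

// StartQueue serializes heavy container starts so that relocating
// many jobs after a host failure does not overload the target host.
// maxConcurrent = 1 gives strict one-by-one starts; higher values
// allow bounded parallelism.
type StartQueue struct {
	sem chan struct{}
}

func NewStartQueue(maxConcurrent int) *StartQueue {
	return &StartQueue{sem: make(chan struct{}, maxConcurrent)}
}

// Run blocks until a slot is free, then executes the start job.
func (q *StartQueue) Run(job func()) {
	q.sem <- struct{}{}        // acquire a slot
	defer func() { <-q.sem }() // release it when the job is done
	job()
}

func main() {
	q := NewStartQueue(1)
	var mu sync.Mutex
	var order []string
	var wg sync.WaitGroup
	for _, env := range []string{"env-1", "env-2", "env-3"} {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			q.Run(func() {
				mu.Lock()
				order = append(order, name)
				mu.Unlock()
			})
		}(env)
	}
	wg.Wait()
	fmt.Println(len(order)) // 3: every start ran, at most one at a time
}
```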
etcd Quorum and Downed Hosts
At the time of the outage, we had only one host left, which caused a problem with etcd: it couldn't reach its quorum to elect a new leader. And when etcd is broken, the cluster is broken.
We fixed it by creating a new etcd cluster and restoring a backup onto it. After that outage, we decided to add another host to the cluster (for a minimum of 4 hosts) in order to secure the etcd quorum. One etcd feature we are waiting for is a way to force a node to start from its existing data, so that the cluster can be reconciled manually. It's on their roadmap and should be released soon.
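The quorum arithmetic explains the outage: etcd needs a strict majority of members to elect a leader, so a 3-member cluster reduced to 1 surviving host cannot recover on its own. A tiny Go helper makes the rule concrete:

```go
package main

import "fmt"

// quorum returns the number of etcd members that must be up for the
// cluster to elect a leader: a strict majority.
func quorum(members int) int { return members/2 + 1 }

// tolerated returns how many members can fail while keeping quorum.
func tolerated(members int) int { return members - quorum(members) }

func main() {
	for _, n := range []int{1, 3, 4, 5} {
		fmt.Printf("%d members: quorum=%d, tolerates %d failure(s)\n",
			n, quorum(n), tolerated(n))
	}
}
```

Note that an even member count buys no extra fault tolerance over the odd count below it (4 members tolerate 1 failure, just like 3), which is why odd cluster sizes are generally recommended for etcd.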
Tooling & Metrics
We wanted to measure the state of the cluster and act on it quickly. For this, we developed a new tool, arkenctl, which lets us manipulate the logical artifacts of the cluster (a nuxeo.io instance, a domain) and rapidly get their status using the same rules as Gogeta, through a common library. The tool also watches the cluster for services in error. It gathers metrics about their status and sends them to Datadog, allowing us to set up alerts when a service is stuck. Datadog forwards notifications to Slack, which helps us stay alert when things go south, and it offers several ways to display the data (as a counter, a graph, etc.). In the near future, we will surely extend the data we send to it, for example by including start/stop events.
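For readers curious about the wire format: the Datadog agent accepts plain-text DogStatsD datagrams over UDP, so a tool like arkenctl can report a metric with very little code. The metric and tag names below are made up for the example.

```go
package main

import (
	"fmt"
	"net"
)

// FormatGauge builds a DogStatsD gauge datagram:
//   metric.name:value|g|#tag1:v1,tag2:v2
func FormatGauge(name string, value float64, tags ...string) string {
	msg := fmt.Sprintf("%s:%g|g", name, value)
	if len(tags) > 0 {
		msg += "|#" + tags[0]
		for _, t := range tags[1:] {
			msg += "," + t
		}
	}
	return msg
}

func main() {
	datagram := FormatGauge("arken.services.error", 1, "service:trial-a")
	fmt.Println(datagram) // arken.services.error:1|g|#service:trial-a

	// Fire-and-forget UDP send to the agent's default DogStatsD port.
	if conn, err := net.Dial("udp", "127.0.0.1:8125"); err == nil {
		conn.Write([]byte(datagram))
		conn.Close()
	}
}
```

Because the transport is UDP, metric reporting never blocks or breaks the monitored service if the agent is down, which is exactly what you want from instrumentation in a cluster watcher.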
This common library exposes a logical layer to our tools (Gogeta, the passivator, and arkenctl), allowing us to transparently change the implementation underneath.
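In Go, such a logical layer is naturally expressed as an interface. The sketch below is hypothetical (the method names are assumptions, not the library's real API), with an in-memory implementation standing in for the etcd-backed one:

```go
package main

import (
	"errors"
	"fmt"
)

// ServiceRegistry is the kind of logical layer a shared library can
// expose: tools program against this interface, so the backing store
// (etcd, or anything else) can change without touching them.
type ServiceRegistry interface {
	Status(service string) (string, error)
	Domains() []string
}

// memRegistry is a trivial in-memory implementation used here in
// place of a real etcd-backed one.
type memRegistry struct {
	statuses map[string]string
}

func (r *memRegistry) Status(service string) (string, error) {
	s, ok := r.statuses[service]
	if !ok {
		return "", errors.New("unknown service: " + service)
	}
	return s, nil
}

func (r *memRegistry) Domains() []string {
	domains := make([]string, 0, len(r.statuses))
	for d := range r.statuses {
		domains = append(domains, d)
	}
	return domains
}

func main() {
	var reg ServiceRegistry = &memRegistry{
		statuses: map[string]string{"trial-a.example.io": "started"},
	}
	status, _ := reg.Status("trial-a.example.io")
	fmt.Println(status)
}
```

Swapping the storage backend then only means providing another type that satisfies the same interface; none of the consuming tools need to change.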
Looking at 2015
2015 is off to a great start and it looks promising! Reviewing the past year, I think we made some good choices about the infrastructure. It works well, AND it can evolve! For example, for hosted Nuxeo instances, we deployed an Elasticsearch cluster on it without many changes. In the future, we will be able to launch other services on it, mixing nuxeo.io instances, Elasticsearch nodes, and transformation services.
Currently, we are working towards several goals. They are:
- Continue to strengthen the infrastructure and make it rock solid, by adding metrics that allow us to anticipate problems, by deploying clustered Nuxeo instances, etc.
- Add some provisioning features to the nuxeo instances, in order to start some ready-to-run samples with pre-filled data.
- Broaden the services we provide by leveraging our container expertise and offering Transform as a Service (TaaS) tools.
If you are interested and want to see this in action, register for an Online Trial and get started!