How many users can the Nuxeo Platform handle? Is it 500? 1,000? 5,000? Or more?! To find out, we ran some tests and the results were mind-blowing! With an increasing demand for building consumer applications with the Nuxeo Platform, we are making sure we can add more content and serve more user requests - the two most important factors for scaling consumer applications. We have already set a benchmark of managing 1 billion documents on a single server. Now we are setting yet another one: the number of users handled by a Nuxeo application running on an affordable infrastructure. Here's the story of how we took performance and scalability to a whole new level... AGAIN!

Setting the Expectation

The most common metric used in our industry is the number of concurrent users. Unfortunately, it relies on many hypotheses (for example, the think time, the number of actions per session, etc.) which vary widely between applications. We think this doesn't make much sense. A more relevant approach is to use the number of requests processed per second. Most modern client apps, web or mobile, are built on top of a server API where one user action translates into one or more requests. For example, google.com handles 40,000 requests per second. Everybody knows that's huge, which makes it the perfect reference for comparison. If the Nuxeo Platform can process a tenth of that number, say 4,000 requests per second, it would undoubtedly be a tremendous result.
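To see why those hypotheses matter so much, here is a back-of-the-envelope conversion between a request rate and a concurrent-user count. The think time and requests-per-action values below are made-up assumptions for illustration, not measurements; change either one and the "concurrent users" figure moves, which is exactly why we prefer the request rate.

    // Back-of-the-envelope only: all three inputs are assumptions, and changing
    // any of them changes the resulting "concurrent users" figure.
    object ConcurrentUsersEstimate extends App {
      val requestsPerSecond = 4000.0 // the throughput target discussed above
      val requestsPerAction = 3.0    // assumed API calls behind one user action
      val thinkTimeSeconds  = 10.0   // assumed pause between two user actions

      val actionsPerSecond = requestsPerSecond / requestsPerAction // ~1,333 actions/s
      val concurrentUsers  = actionsPerSecond * thinkTimeSeconds   // ~13,300 users

      println(f"$concurrentUsers%.0f concurrent users under these assumptions")
    }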

Using Actual Customer Use Cases

Having a technical objective is great, but it's not that useful if it isn't close to an actual customer application scenario. What's a typical consumer application? Well, it basically involves a user signing in to the application, getting the most relevant information on the first page, and then browsing content and performing search queries. That's exactly the scenario we used for this benchmark. First, we quickly designed a web application using the Nuxeo JS client and configured the corresponding API server side using Nuxeo Studio. Then we wrote a benchmark script with gatling.io (sketched below) to simulate as many users as possible using the app.
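To give an idea of what such a script looks like, here is a minimal Gatling (2.x-style) sketch of the scenario. The server URL, the users.csv feeder, the REST endpoints and the injection profile are illustrative assumptions, not the exact script we ran - the real application goes through the API endpoints configured in Nuxeo Studio.

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._
    import scala.concurrent.duration._

    class ConsumerAppSimulation extends Simulation {

      // Rotate through pre-provisioned test accounts (hypothetical users.csv
      // file with "username,password" columns).
      val accounts = csv("users.csv").circular

      val httpConf = http
        .baseURL("http://nuxeo.example.com/nuxeo") // placeholder server URL
        .acceptHeader("application/json")

      // Helper: an authenticated GET against the Nuxeo REST API
      def authGet(name: String, path: String) =
        http(name).get(path).basicAuth("${username}", "${password}")

      // One virtual user: sign in and load the landing page, browse, then search
      val scn = scenario("Consumer app user")
        .feed(accounts)
        .exec(authGet("home feed", "/api/v1/path/default-domain/@children"))
        .pause(1)
        .exec(authGet("browse", "/api/v1/path/default-domain/workspaces/@children"))
        .pause(1)
        .exec(authGet("search", "/api/v1/query")
          .queryParam("query", "SELECT * FROM Document WHERE ecm:fulltext = 'report'"))

      setUp(
        scn.inject(rampUsers(4000) over (60 seconds))
      ).protocols(httpConf)
    }

Gatling then reports the achieved request rate and the response-time percentiles for each of these named requests, which is exactly what we need to measure.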

Infrastructure

The Nuxeo repository contains about 1 million documents and 40,000 user accounts. This is enough to make sure we weren't just benchmarking the caches. We ran the benchmark against a cluster of three c3.4xlarge AWS instances, each with:

  • 16 vCPUs
  • 32 GB of RAM
  • 2 x 160 GB SSD storage

Across these three nodes we installed:

  • Ubuntu 14.04
  • A PostgreSQL 9.3 server
  • 2 Nuxeo 6.0 instances
  • 3 Elasticsearch 1.4 instances

Results

First, we carried out a few runs with a single node hosting PostgreSQL, Elasticsearch, and Nuxeo. The results were already impressive - the Nuxeo instance was processing 3,000 requests per second! What's even more amazing is that each request involves business logic being executed server side. Another encouraging fact was that the main limitation was CPU usage, and that most of it came from Nuxeo and Elasticsearch - two components that can be distributed across several nodes. So, we started the two other nodes to obtain the architecture described above. After a few runs and some tweaking, the cluster delivered exceptional results: the platform served up to 6,000 requests per second with an average response time below 25 milliseconds! Here, we clearly see throughput increasing linearly as nodes are added to the cluster.
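As a side note, targets like these can be encoded as Gatling assertions so that a run fails when performance regresses. The snippet below is a hedged sketch that would replace the setUp call in the simulation sketched earlier; the injection rate is an assumption chosen to roughly match 6,000 requests per second with the three-request scenario above.

    // Hypothetical replacement for the setUp call in the earlier sketch:
    // fail the run if the cluster drops below the levels reported above.
    setUp(
      scn.inject(constantUsersPerSec(2000) during (10 minutes)) // ~6,000 req/s
    ).protocols(httpConf)
      .assertions(
        global.requestsPerSec.greaterThan(6000),  // sustained throughput
        global.responseTime.mean.lessThan(25),    // mean response time (ms)
        global.failedRequests.percent.lessThan(1) // nearly error-free
      )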

Conclusion and Next Steps

The results from this benchmark clearly show the Nuxeo Platform's ability to scale out linearly by adding processing nodes. This kind of scalability helps us address the most demanding performance requirements. The benchmark was also a great opportunity to identify a few areas for optimization. So this is only the beginning, and you can expect even better results in the coming months!

Read more about Nuxeo on AWS in this whitepaper.