We challenge ourselves to push technology forward at Nuxeo, and we do this by working in a highly collaborative fashion, both internally and with our customers. We believe this “open-kitchen” philosophy is how we make Digital Asset Management accessible and exciting!

In the first of three posts, I’m going to describe how a collaborative effort between our Customer Success, Engineering, and R&D teams is leading to notable achievements here at Nuxeo.

After a thorough analysis, our Customer Success team concluded that a significant portion of our customer base was using Amazon Web Services (AWS) for storage. This finding was shared with our R&D team, who recommended that third-party services like AWS be integrated more directly into the Nuxeo platform.

Why is this exciting? I’m glad you asked.

Let’s assume a scenario where you lead a department that needs to manage digital assets (videos, image files, etc.) securely and quickly, but you don’t want to deal with any of the commonly used infrastructure machines (e.g., you prefer a container runtime such as Elastic Container Service or the newer Elastic Kubernetes Service).

Currently, Nuxeo’s Blob Management System is centralized: every file must first be uploaded to the Nuxeo Server, which then stores it in third-party storage (let’s assume AWS S3).
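To make the centralized path concrete, here is a minimal Python sketch of the shape of Nuxeo’s batch upload REST calls (the server URL is a placeholder; the helpers only build the requests and do not send them, so treat the exact headers as illustrative):

```python
# Sketch of the current, centralized path: the client always talks to Nuxeo
# Server, which then writes the blob to the configured backend (e.g. S3).
# Only the request shapes are built here; actually sending them is left out.

NUXEO = "https://nuxeo.example.com/nuxeo"  # assumed server URL

def create_batch_request():
    """POST that asks Nuxeo Server to open a new upload batch."""
    return ("POST", f"{NUXEO}/api/v1/upload")

def upload_file_request(batch_id, file_index, filename):
    """POST that streams one file *through* Nuxeo Server."""
    return (
        "POST",
        f"{NUXEO}/api/v1/upload/{batch_id}/{file_index}",
        {"X-File-Name": filename},  # illustrative header
    )
```

The point of the sketch is the bottleneck: every byte of every blob transits the Nuxeo Server before reaching storage.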

But digital assets are always changing. Not only is the volume of content exploding, but the quality of your digital assets is increasing exponentially as well.

That means most companies with existing Digital Asset Management (DAM) solutions need to upload very large files quickly, and these files keep getting bigger, placing ever more demands on the DAM system. The takeaway here is that if you don’t have a DAM solution in place that can scale with your evolving needs and requirements, you will continue to manage your digital assets in a less than optimal manner.

The Value of Third-party Data Storage

With an architecture that takes advantage of third-party services (AWS, Azure, jclouds), Nuxeo makes it easier to manage files (Blobs).


Consider this scenario:

  1. A department leader (let’s call her Alice) uploads a file (Blob) to Nuxeo Server.
  2. Nuxeo Server provides AWS temporary credentials (via STS), along with the bucket and base bucket key to upload to.
  3. Alice uploads via CloudFront to take advantage of regional Edge Cache to speed up the upload.
  4. Alice tells Nuxeo Server that she uploaded a 3 GB file, foo.mp4, to Bucket X with Key Y.
  5. Nuxeo Server validates that the file exists and that the checksums match.
  6. Nuxeo Server moves the file from the Transient Store to the Persistent Store.

It will be as simple as these six steps!
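The steps above can be sketched from the client’s side in a few lines of Python. Everything here is an assumption for illustration: `request_sts_credentials` and `complete_upload` stand in for whatever Nuxeo endpoints ship with this feature, and the S3 client would in practice be built from the temporary credentials.

```python
import hashlib
import os

def md5_checksum(path, chunk_size=8 * 1024 * 1024):
    """The digest Nuxeo Server can compare against in step 5."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def direct_upload(nuxeo_client, s3_client, path, bucket, base_key):
    # 1-2. Ask Nuxeo Server for temporary STS credentials plus the target
    #      bucket and base key (hypothetical endpoint).
    creds = nuxeo_client.request_sts_credentials()
    # (In practice, s3_client would be constructed from `creds`.)
    key = f"{base_key}/{os.path.basename(path)}"
    # 3. Upload straight to S3 (via CloudFront) with those credentials,
    #    never passing the bytes through Nuxeo Server.
    s3_client.upload_file(path, bucket, key)
    # 4-6. Tell Nuxeo Server what was uploaded so it can verify existence
    #      and checksum, then move the blob to the persistent store.
    return nuxeo_client.complete_upload(bucket, key, md5_checksum(path))
```

The key design point is that the server only sees metadata and credentials traffic; the heavy bytes flow directly between Alice and S3.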

Some of you may then ask, “What about the accessibility part?”

Nuxeo Batch Upload Handlers

Batch Upload Handlers will enable customization of the file upload behavior for a given Provider/Key (e.g., returning the custom data needed for custom UI components to work).
This means that when customers use Nuxeo Server with AWS, they can manage content uploads directly, without deep AWS expertise.

By developing this new feature, we’re empowering our customers to easily scale up or down as their organization requires. All the while, Nuxeo makes sure that the core business need, managing digital assets in a secure and efficient way, is always met.
We also applied a fundamental principle: “every problem is solved by adding another level of indirection”. That has led us to a well-defined architecture; the next step is to implement this feature technically.

The challenge is out there and we are working non-stop on this, so we’ll share more details about Batch Upload Handlers soon.

Stay tuned for my next post in this series, and in the meantime, I look forward to hearing your feedback!