At MuleSoft, we have open source software deep in our DNA. We maintain hundreds of public projects and have contributed to many open source projects, including Node.js itself. We’re excited about Node.js and have several large, sophisticated Node.js projects in development. Our use of cutting-edge Node.js features has resulted in both a lot of knowledge gained and, no surprise, a lot of pain experienced.

In this post, we’d like to share our experience developing part of the data access layer for one of our larger Node.js projects. We’ll start with an architectural overview, talk about the modules in use and the problems we found, and describe how we worked with the open source community to resolve those issues, making an extremely powerful Node.js feature much more usable for everyone.

This particular project relies on Postgres for data persistence and provides a number of clients, including an AngularJS user interface, with a rich set of APIs. We use RAML to define these APIs, with JSON schemas for the entities. These entities have hierarchical shapes, unlike the tabular data stored by Postgres, so a transformation layer separates our services from our database repositories. As believers in layered architectures and SOLID, we designed the system as shown in the somewhat contrived examples below. Note that all layers in the system operate asynchronously and return promises.
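
To make the shape mismatch concrete, an entity on the API side might look like the contrived example below (the atom/electron names echo the examples later in this post; the fields are illustrative). Postgres, by contrast, would store atoms and electrons in two separate tables joined by a foreign key.

{
  "id": 1,
  "name": "hydrogen",
  "electrons": [
    { "id": 7, "spin": "up" }
  ]
}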

Controller Layer

A controller layer pulls data from the HTTP request and passes it down to a service layer.
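
For example, an Express-style handler might look like this (atomService, the createAtom name, and the route signature are illustrative assumptions, not our actual code):

// atomService is injected; req/res follow the usual Express signature.
function createAtom(req, res, next) {
  // Pull the entity off the request body and delegate to the service,
  // which returns a promise for the inserted entity.
  atomService.insert(req.body)
    .then(function (atom) {
      res.status(201).json(atom);
    })
    .catch(next);
}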

Service Layer

A service layer returns entities of the shape specified by the JSON schema in the API’s RAML. Services make use of a simple mapping subsystem that knows how to reshape database rows into entities.
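
In sketch form, a service might look like this (the repository and mapper APIs shown are assumptions, not the actual subsystem):

function AtomService(atomRepository, atomMapper) {
  this.atomRepository = atomRepository;
  this.atomMapper = atomMapper;
}

// Fetch flat rows through the repository, then let the mapper reshape
// them into the hierarchical entity declared by the JSON schema.
AtomService.prototype.get = function (id) {
  var mapper = this.atomMapper;
  return this.atomRepository.findById(id)
    .then(function (rows) {
      return mapper.toEntity(rows);
    });
};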

More sophisticated services pull data from multiple data sources and handle filtering, validation, and other tasks.

Repository Layer

A repository layer provides the APIs that access the database: it builds queries, performs CRUD operations, and provides data integrity services such as transactions.
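
An insert, for instance, might look roughly like this (a sketch; the DAL’s table method is an assumption, expanded on below):

function AtomRepository(dal) {
  this.dal = dal;
}

AtomRepository.prototype.insert = function (atom) {
  // The DAL hands back a knex query builder that is already scoped to
  // the current tenant and enlisted in any active transaction.
  return this.dal.table('atoms')
    .insert({ name: atom.name })
    .returning('id')
    .then(function (ids) {
      atom.id = ids[0];
      return atom;
    });
};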

The DAL object in the code above is the deepest layer of the data system; it provides query creation and multi-tenancy services to the repositories.
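
In sketch form (the domain lookup here anticipates the transaction mechanism described below; the tenant handling is omitted, and the names are ours for illustration, not the original code):

function Dal(knex) {
  this.knex = knex;
}

Dal.prototype.table = function (name) {
  // Build a query for the table; if the active domain carries a live
  // transaction, enlist the query in it so callers never need to know.
  var query = this.knex(name);
  var trx = process.domain && process.domain.transaction;
  return trx ? query.transacting(trx) : query;
};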

Here is where it got interesting: we make use of knex for data access, and knex provides a reasonable implementation of transactions. Transactions need to be started by services: a service may want to insert two entities as a unit, for example a parent object and a child. It will make calls to two different repositories to do so, and it needs both repositories to enlist in the same transaction. However, to maintain a proper level of abstraction, only repositories may call knex code.

Domains

Domains are an extremely powerful, yet little-known and poorly understood, feature of Node.js. They provide a context in which code can execute and from which errors can be captured. We decided to use a domain, attached to each HTTP-level API call, to store our transactions. This resulted in service-layer transactional code that looks like this:

// Insert the parent and child as one unit: both repository calls made
// inside run() enlist in the same domain-held transaction. (.spread is
// Bluebird's destructuring variant of .then.)
transactor.run(function () {
  return self.insert(atom)
    .then(function (atom) {
      var electron = particleService.insertElectron(atom);
      return Promise.all([atom, electron]);
    })
    .spread(function (atom, electron) {
      atom.electrons = [electron];
      return atom;
    });
});

The transactor’s run method pairs a domain with a knex transaction.
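
Here is a minimal sketch of the technique, assuming Bluebird and a knex instance (a sketch of the idea, not the original implementation):

var domain = require('domain');
var Promise = require('bluebird');

function Transactor(knex) {
  this.knex = knex;
}

Transactor.prototype.run = function (work) {
  var knex = this.knex;
  return Promise.resolve(knex.transaction(function (trx) {
    var d = domain.create();
    // Stash the live transaction on the domain, where the DAL looks
    // for it; any query issued by work() can now enlist transparently.
    d.transaction = trx;
    // An error that escapes the promise chain rolls the work back.
    d.on('error', function (err) {
      trx.rollback(err);
    });
    d.run(function () {
      Promise.try(work).then(
        function (result) { trx.commit(result); },
        function (err) { trx.rollback(err); }
      );
    });
  }));
};

The promise returned by run settles when the transaction commits or rolls back, which is why the service code above can simply chain off it.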

This pattern is similar to the notion of Transaction Contexts in multi-threaded environments such as Java. It makes it extremely easy to see where transactions begin and end, and it relieves services from having to know when a particular piece of code is running independently and when it is running within a transaction. Knex kindly takes care of the details of connection reuse and of issuing the proper SQL to the back end, and we use dependable to inject the required modules at every step along the way.

It looks straightforward, but making all of this actually work in practice was extremely difficult: multiple underlying systems, from knex to Node.js itself, required contributions from our team before they would work together correctly in the presence of domains. That’s a story we’ll tell in the next blog entry. Stay tuned.