Servers as Microservices with Node.js at Yieldbot

by Fatih Cetinkaya
July 14th, 2015

At Yieldbot, we handle billions of requests every month, and Node.js is a fundamental part of our platform. We use it to render ads and content, create and serve APIs, serve web applications, log and monitor activity, and power reporting and analytics. To craft a more well-oiled digital machine, we spent a year breaking down our monolithic server architecture (built primarily with Python and Node.js) and evolving it into a more flexible, service-oriented architecture. By doing so, we were able to streamline at scale, significantly increasing the frequency of our production deployments, decreasing the time it takes to fix bugs, and shortening the time it takes to get new products off the ground.

The first, and likely best, decision we made was choosing the Hapi application and service framework as our underlying service architecture. It allowed us to redesign our application as a collection of cooperative, individually testable microservices. We used Hapi to implement the new application, running our legacy application as a plugin within it so we could continue supporting existing stakeholders while we worked on the new design. It also gave us a way to carve shared service features out into their own plugins.

Plugins as Components

Much as middleware libraries are used to extend other server frameworks, Yieldbot’s new architecture uses Hapi plugins as components for building services. We’ve also been able to use plugins to isolate key business logic into well-defined, easily testable, deployable modules, which we keep in a private npm registry hosted on JFrog’s Artifactory.

Structure Of A Plugin

Each plugin lives in its own npm package, which contains all configuration files at the top level along with `/lib` and `/test` directories. Under `/lib`, we typically create three files to hold business logic, data models, and routes.

At the top level, each plugin has a boilerplate `index.js` file which simply exposes `name` and `version` properties along with a `register` function. The `register` function checks dependencies and loads all of the files in the `/lib` directory; Hapi uses it to add the plugin to the server instance.
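
As a rough sketch, such a boilerplate `index.js` might look like the following under the Hapi 8-era plugin API. The plugin name, the dependencies, and the `/lib` file names here are illustrative, not our actual modules.

```javascript
// index.js - illustrative plugin boilerplate (Hapi 8-era API); names are hypothetical
var internals = {
  name: 'yb-example',
  version: '1.0.0'
};

exports.register = function (server, options, next) {

  // Refuse to start unless the plugins we depend on are also registered
  server.dependency(['yb-logger', 'yb-monitor']);

  // Load the /lib files that hold business logic, data models, and routes
  require('./lib/handlers')(server, options);
  require('./lib/models')(server, options);
  require('./lib/routes')(server, options);

  return next();
};

exports.register.attributes = {
  name: internals.name,
  version: internals.version
};
```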

Today, Yieldbot has more than thirty plugins providing low-level functionality, such as database access, proxying, caching, logging, and monitoring, as well as higher-level business logic. We keep each plugin as an npm package for ease of use in services and for automated deployment. When considering the addition of a new plugin, our primary guiding principle is that it must have a clear, logical set of responsibilities and must be independently buildable, testable, and deployable.

Servers as Microservices

Plugins are hosted within Hapi servers, and since plugins are simply npm packages, we can easily compose them in different ways based on the requirements of the service.

One of the key ideas of Hapi plugins is that they are composable: it is possible to write a bootstrapping module that loads and configures any number of plugins into a Hapi server instance. We could do this for each new service, but it would quickly become tedious. Instead, we decided to use Glue, the Hapi server composition library, to define our services with configuration instead of code. Glue takes a JSON manifest describing the Hapi server configuration, along with the plugins that should be loaded and how they should be configured. It then performs the bootstrapping and returns a Hapi server object ready to be started.
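
For illustration, a Glue manifest along those lines might look something like this; the port, labels, and plugin names are hypothetical:

```json
{
  "connections": [
    { "port": 8000, "labels": ["api"] }
  ],
  "plugins": {
    "yb-logger": {},
    "yb-monitor": { "statsd": { "host": "statsd.internal", "port": 8125 } },
    "yb-ad-server": { "cacheTtl": 60000 }
  }
}
```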

Structure Of A Service

We use a standard boilerplate to lay out services. Each service has its own npm package along with a `/config` directory that holds both development-time and run-time configuration files.

For example, our standard `default.json` configuration contains values shared among all of our services, such as server caching settings, as well as plugins that are generally used by every service, like our Logger, Monitor, and ElasticSearch plugins.
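
A hypothetical `config/default.json` sketched for illustration (the plugin names and values here are made up, not our actual settings) might hold those shared pieces:

```json
{
  "server": {
    "cache": { "engine": "catbox-memory", "partition": "yb-cache" }
  },
  "plugins": {
    "yb-logger": { "level": "info" },
    "yb-monitor": { "statsd": { "host": "statsd.internal", "port": 8125 } },
    "yb-elasticsearch": { "host": "elasticsearch.internal:9200" }
  }
}
```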

While the Glue library does a good job of eliminating tedious, repeated bootstrapping code, it doesn’t actually run the server; it builds a Hapi server object that is ready to be run. To actually run the server, we use a service runner application that starts the server Glue built for us. Because our services run in multiple environments (e.g. local, development, and production), the configuration for the services and their plugins can differ in each of them. To support this requirement, we wrote our own Hapi server runner named Homestar (similar to Rejoice) that uses Glue to compose the server.
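
A stripped-down runner along those lines might look like the sketch below. Homestar itself adds environment handling, configuration merging, and lifecycle management on top of this; the manifest file layout here is an assumption for illustration.

```javascript
// run.js - minimal sketch of a Glue-based service runner
var Glue = require('glue');

// Assumption: each environment has a manifest-shaped JSON file under /config;
// a real runner would merge it with config/default.json
var env = process.env.NODE_ENV || 'development';
var manifest = require('./config/' + env + '.json');

Glue.compose(manifest, { relativeTo: __dirname }, function (err, server) {

  if (err) {
    throw err;
  }

  server.start(function () {
    console.log('Service started in ' + env + ' mode');
  });
});
```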

All of our services are independently deployable, isolated, and stand-alone and can be modeled around business domains.

Logging and Monitoring

Our Logger plugin hooks into events raised by the Hapi server to provide a single logging API for all plugins within a server instance. This is a good example of how to address the “cross-cutting concern” issue that logging presents.
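
A simplified sketch of that idea, using Hapi’s server events, might look like the following; the `shipToLogstash` stub is hypothetical and stands in for our real transport.

```javascript
// Stand-in for the real transport: emit a structured JSON line that Logstash can pick up
var shipToLogstash = function (entry) {
  console.log(JSON.stringify(entry));
};

exports.register = function (server, options, next) {

  // server.log() and request.log() calls made anywhere (including other plugins)
  // raise these events, so every plugin shares a single logging API
  server.on('log', function (event, tags) {
    shipToLogstash({ scope: 'server', tags: tags, data: event.data, timestamp: event.timestamp });
  });

  server.on('request', function (request, event, tags) {
    shipToLogstash({ scope: 'request', path: request.path, tags: tags, data: event.data });
  });

  // Uncaught errors raised while handling a request
  server.on('request-error', function (request, err) {
    shipToLogstash({ scope: 'error', path: request.path, error: err.message });
  });

  return next();
};

exports.register.attributes = { name: 'yb-logger', version: '1.0.0' };
```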

Our Monitor plugin hooks Hapi’s request interface to provide real-time access to usage metrics. We’ve designed this plugin to push the information to a central “statsd” server that collates and summarizes access information. The Monitor plugin also provides a collection of routes that can be used to monitor the health of the server instance and its plugins. (See the example below.) We use these routes during our automated deploy process to “smoke-test” server instances, as well as for server heartbeat/uptime tests and failure alerts.
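
The sketch below shows the general shape of such a plugin; the metric names, the route path, and the `pushMetric` stub are hypothetical (the real plugin pushes metrics to statsd).

```javascript
// Stand-in for a statsd client call
var pushMetric = function (name, value) {
  console.log('metric %s=%d', name, value);
};

exports.register = function (server, options, next) {

  // Hook the request lifecycle to record per-request response times and status codes
  server.on('response', function (request) {
    var elapsed = Date.now() - request.info.received;
    pushMetric('response_time_ms', elapsed);
    pushMetric('status.' + request.raw.res.statusCode, 1);
  });

  // Health-check route used by automated deploys ("smoke tests") and uptime monitors
  server.route({
    method: 'GET',
    path: '/status/health',
    handler: function (request, reply) {
      return reply({
        status: 'ok',
        uptime: process.uptime(),
        memory: process.memoryUsage().rss
      });
    }
  });

  return next();
};

exports.register.attributes = { name: 'yb-monitor', version: '1.0.0' };
```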

At the service level, we rely on our Logger plugin to produce well-structured log messages and on Logstash to push them into ElasticSearch in real time. This lets us monitor any data point our applications provide, such as service response times with running averages. Each service also has status routes that provide information to the Monitor plugin.
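
For illustration only, a structured log line emitted by such a setup might look like the following; the field names and values are hypothetical rather than our actual schema.

```json
{
  "@timestamp": "2015-07-14T12:00:00.000Z",
  "service": "yb-ad-server",
  "host": "prod-ads-01",
  "tags": ["response", "info"],
  "path": "/v1/serve",
  "statusCode": 200,
  "responseTimeMs": 42
}
```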

(Example dashboards: Average Response and Server Load Time per Host (ms); Number of Requests Grouped by Status Code.)

Infrastructure

Each service has different business logic and thus different requirements. Most of the time we use AWS c3.2xlarge instances (15 GB RAM / 8 vCPUs) for our services. Each service runs under the strong-supervisor library in cluster mode, with one worker per CPU.

Depending on requirements and scale, we run services behind Nginx, HAProxy, or ELB. Some services are distributed across regions and availability zones, while others run on a single instance. Regardless of distribution, each service has functional production and development instances.

We use Chef and two internal tools (called Dr Teeth and Electric Mayhem) for managing continuous integration (CI) and continuous delivery (CD) of servers and services within the stack. By leveraging a set of custom Chef LWRPs and cookbooks, we are able to manage services from A to Z, with all of this orchestrated by Jenkins workflows.

Future Plans

While we’ve made significant improvements over the last year, we’re not done yet. We’re currently working on upgrading to Node v0.12, adopting Apache Mesos and Docker containers, and experimenting with other technologies like ReactiveX and Seneca. For more information, take a look at Yieldbot’s open source projects.

Fatih Cetinkaya is a Yieldbot Engineer on the Platform team. Located in our Boston office, Fatih works to build, manage, evolve, and innovate Yieldbot Technology to make it better and smarter on behalf of our advertisers and publishers.
