Microservice Musings : Databases

Most microservice applications depend on a DBMS (Database Management System) for persistence. Greater DBMS performance is achieved by scaling vertically, or by scaling horizontally to create a DDBMS (Distributed DBMS).

Each type of DDBMS supports a different set of consistency, latency, availability and partition tolerance guarantees. As a DDBMS cannot simultaneously support them all tradeoffs are required. These influence the design, performance and availability of the dependent microservice application.

DDBMS Constraints

A DDBMS is limited in the set of the guarantees it can support.

CAP and PACELC Theorems

CAP theorem states that, in the event of a partition which isolates one or more nodes in a DDBMS, it is impossible to simultaneously support more than two of the following three guarantees:

Consistency
In the context of CAP theorem, consistency refers to strong consistency - reads after writes always return the written values
Availability
Every request is serviced successfully without error
Partition Tolerance
Able to operate when an arbitrary number of network messages between nodes are dropped or delayed

In a distributed environment partition tolerance must be supported, as partitions will occur even in the most resilient environments; the choice is therefore between the strong consistency and the availability guarantees.

PACELC theorem builds on CAP theorem to consider the constraints that apply when a DDBMS is operating normally. It states that it is impossible to simultaneously support more than one of the following two guarantees:

Consistency
As per CAP theorem above
Minimum Latency
The time taken to service a request. Also see Microservices > Service Time.

PACELC is an acronym for: if there is a partition (P), how does the system trade off availability (A) and consistency (C); else (E), when the system is running normally in the absence of partitions, how does it trade off latency (L) and consistency (C)?

Typically the guarantee chosen for normal operation drives the choice between availability and consistency when there is a partition: either PAL or PCC. In a production worthy system partitions are extremely rare, so it is sensible to optimize for normal operation, even at the cost of satisfying partition guarantees less efficiently.

A PAL system uses an eventual consistency strategy which propagates changes to distributed nodes asynchronously. When the system is running normally the time taken to achieve consistency can be short due to minimal replication latency; when a partition occurs the system remains available and consistency is still achieved eventually, albeit with no guarantee of how long that takes.
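The asynchronous propagation described above can be sketched in a few lines of Python. This is a toy model, not a real product's API: the class and method names are invented, and the fixed sleep stands in for network delay.

```python
import queue
import threading
import time

class Replica:
    """A node holding its own copy of the data."""
    def __init__(self):
        self.data = {}

    def read(self, key):
        return self.data.get(key)

class PALStore:
    """Toy PAL store: writes acknowledge immediately at the primary and
    propagate to replicas asynchronously, so replica reads may be stale
    until replication catches up."""
    def __init__(self, replica_count=2):
        self.primary = Replica()
        self.replicas = [Replica() for _ in range(replica_count)]
        self.log = queue.Queue()
        threading.Thread(target=self._replicate, daemon=True).start()

    def write(self, key, value):
        self.primary.data[key] = value   # acknowledged immediately: minimal latency
        self.log.put((key, value))       # replication happens in the background

    def _replicate(self):
        while True:
            key, value = self.log.get()
            time.sleep(0.05)             # simulated network delay
            for replica in self.replicas:
                replica.data[key] = value

store = PALStore()
store.write("balance", 100)
print(store.primary.read("balance"))     # 100: the primary is always current
print(store.replicas[0].read("balance")) # may still be None: replica not yet consistent
time.sleep(0.2)
print(store.replicas[0].read("balance")) # 100: eventually consistent
```

A partition simply delays the background propagation further; the primary keeps accepting writes (availability) and the replicas converge once messages flow again.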

A DDBMS supporting ACID guarantees (Microservices > Transactions > ACID Transactions) requires PCC. When a partition occurs the loss of availability is mitigated by the last letter of the ACID acronym - durability ensures consistency is preserved across outages.

Some systems choose PAC, favouring availability over consistency when a partition occurs, which ensures uptime but breaks the consistency guarantee supported when the system is running normally. This is inappropriate for solutions that rely on strong consistency, such as a DDBMS offering strict ACID guarantees.

While possible, PCL offers no practical benefits.


Scalability

The ability of a DDBMS to deliver greater performance by adding resources is constrained by the chosen PACELC guarantees, their implementation and additional guarantees, such as ACID.

There are two scalability dimensions:

Scaling up by allocating more performant resources to nodes
Scaling out by adding more nodes

As stated by the PACELC theorem and discussed in Microservices > Transactions > ACID Transactions, a DDBMS which guarantees strong consistency scales poorly horizontally. Vertical scaling is fine, but requires ever more esoteric processors at ever more esoteric prices until all options are exhausted.

An eventually consistent solution scales well in both vertical and horizontal dimensions, but given the wide availability of low cost commodity hardware, horizontal scaling is the preferred route. Provisioning more nodes to satisfy increasing demand is simple and cost effective. Distribution is achieved by sharding data across the node set. Either the underlying distributed file system or replica nodes deliver high availability.
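Sharding can be sketched with a naive hash-based router that assigns each key to exactly one node. The node count and key format here are purely illustrative.

```python
import hashlib

class ShardedStore:
    """Toy hash-sharded key value store: each key lives on exactly one
    of a fixed set of nodes, so capacity grows by adding nodes."""
    def __init__(self, node_count):
        self.nodes = [dict() for _ in range(node_count)]

    def _node_for(self, key):
        # A stable hash, so every client routes a given key to the same node.
        digest = hashlib.sha256(key.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key)[key] = value

    def get(self, key):
        return self._node_for(key).get(key)

store = ShardedStore(node_count=4)
for i in range(1000):
    store.put(f"user:{i}", {"id": i})
print([len(n) for n in store.nodes])  # keys spread across the four nodes
print(store.get("user:42"))           # {'id': 42}
```

Note that mod-N routing as above forces most keys to move when the node count changes; real systems typically use consistent hashing so that adding or removing a node relocates only a fraction of the keys.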


SQL and NoSQL

It has become common to classify a DBMS by the acronyms SQL (Structured Query Language) or NoSQL (Non SQL).


SQL

To be fully SQL compliant, a DBMS is required to support the SQL standard, the latest of which is ISO/IEC 9075-1:2016. SQL expresses queries using relational algebra, which requires the DBMS to expose data through strongly consistent relational tables. A relational table consists of zero or more rows, each containing a set of column values. The name, position, and type of each column is defined by a static schema. The relations between rows in tables are captured by keys that reference one or more column values. Primary keys uniquely identify a row in a table to which foreign keys in rows in other tables refer.
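These relational concepts can be demonstrated with SQLite, which ships with Python. The table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when asked
conn.execute("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,         -- primary key: uniquely identifies a row
        name TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),  -- foreign key
        total       REAL NOT NULL
    )""")
conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders VALUES (10, 1, 9.99)")

# Relational algebra in practice: a join follows the foreign key.
row = conn.execute("""
    SELECT c.name, o.total
    FROM customer c JOIN orders o ON o.customer_id = c.id
""").fetchone()
print(row)  # ('Ada', 9.99)

# The schema is enforced: a foreign key must reference an existing row.
try:
    conn.execute("INSERT INTO orders VALUES (11, 999, 1.0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```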

An emerging breed of NewSQL solutions are designed to scale horizontally using commodity hardware with architectures inspired by NoSQL solutions. Internally they often operate with PACELC theorem PAL guarantees to achieve performant horizontal scalability, but ultimately queries must coalesce to deliver the strong consistency SQL requires.

Most RDBMSes support explicit ACID transaction boundaries - e.g. BEGIN TRANSACTION ... COMMIT - though the exact syntax varies by dialect; the SQL standard itself defines the transaction control statements START TRANSACTION, COMMIT and ROLLBACK.
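A minimal sketch of explicit transaction boundaries using SQLite: the transfer either commits both updates or neither. (Passing isolation_level=None stops the Python driver managing transactions implicitly, so the boundaries can be issued by hand; the account schema is invented.)

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO account VALUES (1, 100.0), (2, 0.0)")

def transfer(amount):
    conn.execute("BEGIN")      # explicit transaction boundary
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE id = 1", (amount,))
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = 2", (amount,))
        conn.execute("COMMIT")    # both updates become visible atomically
    except Exception:
        conn.execute("ROLLBACK")  # on failure, neither update survives
        raise

transfer(25.0)
print(conn.execute("SELECT id, balance FROM account ORDER BY id").fetchall())
# [(1, 75.0), (2, 25.0)]
```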


NoSQL

The NoSQL acronym is generally attached to a DBMS that does not fully support the requirements for SQL compliance. Examples include:

Key Value
An associative array (aka Map or Dictionary) in which each unique key and associated value, referred to as a row, are arbitrarily complex. This is the most basic kind of data store on which many others are built.
Column Family (aka Wide Column)
An extension of a key value store where the associated value is a set of column families each containing a set of key value pairs each referred to as a column. Each row can contain zero or more column families and each column family can contain zero or more columns. Google Bigtable is an archetype.
Document Oriented
Optimized for storing semi-structured data encoded using a common serialization format such as XML, YAML or JSON. Typically implemented as an extension of a key value store.
Graph
Uses graph theory to capture relationships, allowing the rapid navigation of complex hierarchical models. Typically implemented as an extension to a document oriented store.
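The layering described above - richer stores built on a basic key value store - can be sketched with plain Python dictionaries. The keys and values are invented for illustration.

```python
# A key value store is just an associative array; each value can be
# arbitrarily complex.
kv = {}
kv["user:1"] = {"name": "Ada", "email": "ada@example.com"}

# A column family (wide column) store layers structure on top,
# Bigtable-style: row key -> column family -> columns.
wide = {}
wide.setdefault("user:1", {}).setdefault("profile", {})["name"] = "Ada"
wide["user:1"].setdefault("stats", {})["logins"] = 3

print(kv["user:1"]["name"])       # 'Ada'
print(wide["user:1"]["profile"])  # {'name': 'Ada'}
```

A document oriented store follows the same pattern, with the value constrained to a serialization format such as JSON that the store can index and query.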

In a microservice architecture each service should depend on its own DBMS. This allows the life cycle of the application and the DBMS to be congruent and the selection of a DBMS which best fits the requirements of the application.

In addition to the range of possible data store organizations, NoSQL solutions vary in their PACELC theorem guarantees, PCC or PAL. The strong consistency provided by PCC solutions is constrained by the supported atomicity guarantee, which for many NoSQL solutions is limited to elements referenced by the same key - for example a row in Key Value and Column Family data stores. As explained in Microservices > Transactions > ACIDless Transactions, this behavior can be extended by a custom resource manager.


The chosen PACELC theorem guarantees determine many aspects of a microservice architecture - how operations are implemented, their atomicity and the DDBMS used. Above all, these tradeoffs determine scalability and performance. This can only be fully evaluated using prototypes driven by workloads that reflect actual or projected use. However well intentioned, supplier benchmarks can never tell the full story. Some claims are at best spurious.

A great benefit of a microservice architecture is that "a one size fits all" solution is unnecessary as each microservice is free to use the set of solutions which "best fits". This freedom should be constrained to avoid overloading development and operations with too many discrete solutions.

A pragmatic approach is to characterize each microservice by its requirements. At a minimum, these will include service time and availability objectives, the minimum acceptable consistency guarantee and data volumes. The resultant matrix can then be mapped to a range of candidate solutions which can be further reduced to the minimum set of solutions that satisfies all requirements.
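A sketch of such a requirements matrix in Python. The services, thresholds and candidate mappings are entirely hypothetical; the point is the shape of the exercise, not the specific answers.

```python
# Hypothetical requirement profiles for three microservices.
services = {
    "catalog":  {"consistency": "eventual", "data_volume_gb": 5_000},
    "payments": {"consistency": "strong",   "data_volume_gb": 50},
    "sessions": {"consistency": "eventual", "data_volume_gb": 20},
}

def candidates(req):
    """Map a requirement profile to the classes of store that satisfy it."""
    if req["consistency"] == "strong":
        return {"RDBMS", "NewSQL"}
    out = {"Key Value", "Column Family", "Document"}
    if req["data_volume_gb"] > 1_000:
        out -= {"Key Value"}  # assumed volume threshold for this sketch
    return out

for name, req in services.items():
    print(name, "->", sorted(candidates(req)))
```

Intersecting the candidate sets across services then yields the minimum set of discrete solutions the organization must operate.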

This may reveal some unwelcome truths: for example, that when processing petabytes of data with ACID compliance, service time objectives are unattainable; or worse, for those wishing to work with esoteric DDBMS technologies, that all requirements can be met by a plain old RDBMS.