DASH: How to evaluate Kubernetes-native databases

Cloud-native application architectures help developers deliver amazing experiences to their customers around the world. They do this by taking advantage of billions in cloud provider investments, which provide nearly unlimited and on-demand resources spread across hundreds of data centers globally.

Kubernetes – the Google-built open source container orchestration system – has become a ubiquitous tool for deploying, running, and scaling these applications. Kubernetes simplifies and supercharges application delivery if (and this is a big “if”) those applications are architected to take advantage of the resources available in cloud environments. Yet certain issues remain that can make managing stateful applications difficult. Simply put: storage on Kubernetes is not a solved problem.

How difficult, however, depends on what kind of database you use alongside Kubernetes, and whether you select a database that is Kubernetes-native. Traditional relational databases – the systems that give life, or state, to any application – were not architected to take full advantage of the resources available in the cloud. NoSQL databases, on the other side, thrive in Kubernetes-orchestrated environments but give up the strong consistency that relational applications depend on. A new breed of relational databases – Distributed SQL databases – has emerged over the past few years, able to take full advantage of the cloud and the operational benefits provided by Kubernetes.

Unless your database is compatible with Kubernetes, you will not be able to take full advantage of the orchestration magic it makes possible. In this post, I will introduce the DASH Framework to help you answer the question: what makes a database Kubernetes-native? We will walk through the basic architectural principles of Kubernetes, examine how they interact with the database, and cover the four properties you should expect from a Kubernetes-native database: Disposability, API Symmetry, Shared-Nothing, and Horizontal Scaling. Along the way, we will evaluate NoSQL, traditional relational, and Distributed SQL solutions against this new rubric, and provide a framework for deciding which are the best fit for cloud-native architectures.

What is the DASH Framework for Kubernetes?

DASH is a framework for thinking about Kubernetes-native operations and evaluating how Kubernetes-native any database is. DASH stands for the four properties a distributed database must provide in order to truly work with Kubernetes: Disposability, API Symmetry, Shared-Nothing, and Horizontal Scaling. The rest of this guide will explain what these principles mean, and how well traditional relational databases, NoSQL databases, and Distributed SQL databases deliver them.

DASH Principles 

Disposability: Losing things should be a non-event. Because a Kubernetes-native database can handle failures, any disruptions are entirely hidden from the client.

API Symmetry: Every server in the network provides the same answer to the same question. Though they are distributed, Kubernetes-native databases act as a single logical database with the consistency guarantees of a single-machine system.

Shared-Nothing: A true Kubernetes-native database can operate without any centralized coordinator or single point of failure.

Horizontal Scaling: Scale out, not up.

To make these concepts concrete, I’ll use examples from my favorite DASH database, CockroachDB (Google Evangelist Kelsey Hightower also thinks it is a great example), but keep in mind that these concepts extend, to varying degrees, to other Distributed SQL databases.

D: Disposability: Failure Must Be a Non-Event

Everything fails eventually. No matter how much abstraction you layer on top of it, we’re working with machines at the end of the day, and machines crash.

Disposability is the ability of a database to handle failures – processes stopping, starting, or crashing – with little-to-no notice. Disposability makes a database resilient: it can survive the failure of any piece of hardware and still serve queries with limited or no impact on performance. It is bulletproof, always on and always available, with no single point of failure.

This is particularly important in a Kubernetes environment, because Kubernetes pods are mortal by design. The Kubernetes Scheduler plays an important role in managing these disruptions.

What is the Kubernetes Scheduler?

The Kubernetes scheduler watches for newly created Pods that have no Node assigned. For every Pod that the scheduler discovers, the scheduler finds the best Node for that Pod to run on, based on configurable scheduling principles. 

The Kubernetes Scheduler uses a set of rules to determine which pods (small groupings of containers that are always scheduled as a unit) run on which machines. Once pods are scheduled, they remain on those machines until some sort of disruption occurs, whether voluntary (e.g. scaling in or upgrading) or involuntary (e.g. hardware failure or an operating system kernel panic).
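
Kubernetes also gives operators a declarative way to bound voluntary disruptions. As a minimal sketch (the resource name and pod label are illustrative assumptions, not from this article), a PodDisruptionBudget like the following tells Kubernetes never to voluntarily evict more than one database pod at a time:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: cockroachdb-budget     # illustrative name
spec:
  maxUnavailable: 1            # voluntary disruptions may take down at most one pod at a time
  selector:
    matchLabels:
      app: cockroachdb         # applies to every pod carrying this label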

When disruptions occur, Kubernetes reschedules pods onto more suitable nodes. For NoSQL databases like MongoDB and Distributed SQL databases like CockroachDB, this is a non-event. But for traditional relational databases, a rescheduled pod can result in inconsistent data, because of the way those databases handle failover.

Disruptions like these are a significant problem for legacy relational databases, because they typically have a single machine powering them at any given time. For production deployments, these databases may send updates asynchronously to a second instance that takes the lead if the primary machine goes down. In practice, however, failing over is difficult to do well. GitHub recently described the perils of executing a MySQL failover during a major outage, which resulted in out-of-date and inconsistent data. If the systems powering legacy relational databases fail, users will likely know about it.

Both Distributed SQL and NoSQL databases typically have the Disposability property, as they were designed to thrive in ephemeral cloud environments where virtual machines could be restarted or rescheduled at a moment’s notice. Databases with this factor should be able to survive node failures with no data loss and no downtime.

For example, CockroachDB is able to survive machine loss by maintaining three consistent replicas of any piece of data among the nodes in a cluster; writes are confirmed to clients as soon as the majority of replicas acknowledge the action. In the event that a machine containing a given replica is lost, CockroachDB can still serve consistent reads and writes, while simultaneously creating a third replica of that data elsewhere in the cluster to ensure it can survive future machine failures.
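
In a Kubernetes deployment, this replication model maps naturally onto a StatefulSet with at least three replicas, so a quorum survives the loss of any one pod. The fragment below is a heavily trimmed sketch, loosely modeled on the public CockroachDB StatefulSet example; the names, image tag, and flags are illustrative rather than a production configuration:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb
spec:
  serviceName: cockroachdb         # headless service that gives each pod a stable DNS identity
  replicas: 3                      # three nodes: a quorum of two survives any single failure
  selector:
    matchLabels:
      app: cockroachdb
  template:
    metadata:
      labels:
        app: cockroachdb
    spec:
      containers:
      - name: cockroachdb
        image: cockroachdb/cockroach:latest   # pin a specific version in practice
        command: ["/cockroach/cockroach", "start",
                  "--join=cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb",
                  "--insecure"]               # insecure mode for illustration only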

Keep in mind that disposability isn’t just about surviving individual machine failures. DASH databases should be able to extend the concept of disposability to entire data centers or even data center regions. This type of real-world failure should be a non-event; these capabilities would help avoid issues like the ones encountered by Wells Fargo, where “smoke in the data center” resulted in a global outage.

A: API Symmetry: Every Server Should Provide the Same Answer to the Same Question

In a distributed system, each node serves as a point of entry to the data stored within. But depending on your database’s consistency model, those nodes do not always give the same answer. The API Symmetry Principle demands that when given the same request, every server in the system returns the same answer. A Kubernetes-native database must implement and enforce serializable isolation so that all transactions are guaranteed consistent.

This is especially important in Kubernetes because of the way Kubernetes Services are configured. Kubernetes uses Services to allow clients to address a group of identical processes as a whole through a convenient DNS entry. This way applications don’t need to know about the many instances that power a frontend; there could be one backing server, or there could be hundreds. Services are essentially defined as rules that say any pods with a given collection of labels should receive requests sent to this service (assuming their health checks and readiness probes say it is OK to send traffic).

What are Kubernetes Services?

In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). By using Services, you don’t need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
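
To make the abstraction concrete, here is a minimal sketch of a Service for a database cluster (the names are illustrative assumptions; 26257 is the port CockroachDB uses for SQL traffic). Any ready pod carrying the selector’s label receives a share of the traffic sent to the Service’s single DNS name:

apiVersion: v1
kind: Service
metadata:
  name: cockroachdb-public    # clients address the whole cluster through this one DNS name
spec:
  selector:
    app: cockroachdb          # any ready pod with this label receives traffic
  ports:
  - name: sql
    port: 26257
    targetPort: 26257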

Why does Kubernetes take this approach? By decoupling the pod from the address of the service with which it is associated, we can scale without disrupting existing application instances. This is possible because of API symmetry, meaning the pods included in the group all have the same API and provide consistent responses, regardless of which instance is chosen by the Kubernetes Service. For stateless services, this is simple: The logic in each service is the same, so sending a request to any service at a given time will always yield the same result.

Note that for a database to have API symmetry, the underlying data must also be strongly consistent. If you get different answers depending on which node you are routed to, the abstraction provided by Kubernetes is broken, and that leads to bug-causing complexity for application developers.

This is actually an area where traditional relational databases perform better than NoSQL systems.

When all queries are sent to one machine, you get the same result every time. In NoSQL systems, by contrast, you need to make a tradeoff between Disposability and API Symmetry. [This scenario closely mirrors the tradeoffs inherent in the CAP theorem: a database must choose between Consistency (API Symmetry) and Availability (Disposability).] When asynchronous replication comes into play (either for high availability or to support performance improvements like read replicas), API symmetry is violated: the master remains the source of truth while the replicas lag slightly behind. With NoSQL systems, different machines might give different responses to the same query.

In the Distributed SQL world, techniques like consensus replication allow databases to provide API symmetry without sacrificing Disposability.

By using API symmetry alongside Kubernetes Service objects, you can create a single logical database with the consistency guarantees of a single-machine database (even if there are actually dozens or hundreds of nodes at work behind the scenes powering it).

Let’s go back to our CockroachDB example: any CockroachDB node can handle any request. If the node that receives a load-balanced request happens to have the data it needs locally, it will respond immediately. If not, rather than sending an error back to the client (or providing a stale result), that node will become a gateway and will forward the request to the appropriate nodes behind the scenes to get the correct answer. From the requester’s point of view, every node is exactly the same. 
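
Because every node gives the same answer, an application only ever needs the Service’s DNS name. As a hedged sketch (the names and environment-variable convention are assumptions; CockroachDB speaks the PostgreSQL wire protocol), a client pod spec might carry nothing more than:

env:
- name: DATABASE_URL          # fragment of an application pod spec
  value: "postgresql://root@cockroachdb-public:26257/defaultdb?sslmode=disable"   # insecure mode, illustration only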

S: Shared-Nothing Architecture: Eliminate Single Points of Failure

The cloud should not have a maintenance window. True cloud-native services should have the capability to be always on. This means removing any single points of failure.

The Shared-Nothing property dictates that a database should be able to operate without any centralized coordinator or single point of failure. In the stateless world, this goes hand in hand with disposability. When state is involved, this concept becomes an additional consideration.

Traditional relational databases are notorious for having single points of failure. This extends even to modern RDBMS systems, like Amazon Aurora. Even some Distributed SQL databases rely on special coordinators to keep track of all the bookkeeping required to build a globally-distributed system. What this means is that you can have architectures that survive certain workers being disposed of, but if you take down the coordinator process, the entire system may go offline; even when it doesn’t, in many cases the configuration state will be frozen and certain types of critical operations – potentially those required to debug the issue – will fail.

Cloud-native databases should be able to survive in a world where any node can fail, not just “any node except for our special master node that coordinates everything.” For example, CockroachDB has no master process – this is what gives it its eponymous survivability characteristics.

Although each CockroachDB database server is stateful, it only relies on the state for which it is responsible (though it does cache some knowledge that it has gleaned about the cluster through communicating with other nodes via a peer-to-peer network). CockroachDB nodes do not rely on any authoritative source to say what they should be doing at any given point in time. Shared-nothing architectures for stateful systems allow both ultra-high availability and ease of operations.

[Author note: Interestingly enough, Kubernetes itself is not a shared-nothing system. It has a single-region control plane that, if destroyed, will compromise the cluster. Operators can create Distributed SQL database clusters that survive even the loss of Kubernetes’ control plane nodes by spanning multiple Kubernetes clusters across regions or even cloud providers.]

H: Horizontal Scaling: Scale-Out Rather Than Up

The last factor required of a Kubernetes-native database is horizontal scalability. Similar to the way this term is used in the Twelve-Factor App, it means that if you want more throughput, you simply add more processes. Kubernetes Controllers and Schedulers combine to make horizontal scaling an easy, declarative process. The cloud promises infinite scale, and a Kubernetes-native database needs to simplify utilization of those resources without adding operational overhead. It should automate scale and make it effortless.

While horizontal scaling is a fundamental benefit of many NoSQL systems, traditional relational databases do not do it well. They rely instead on sharding – with or without the help of systems like Vitess – to accomplish this. What this means for traditional RDBMS systems in cloud-native environments is that if you need more power, you either have to buy a more expensive machine and incur downtime, or you have to dramatically increase your operational overhead by splitting your database into many pieces that cannot easily talk to each other. For teams that rely on vertical scaling, there is a natural limit to how powerful a relational database can get – at the end of the day, it is limited to what a single server can power.

Distributed SQL databases take a page from NoSQL systems and scale by adding more machines. For example, a single Kubernetes command can scale out CockroachDB by provisioning new resources and spinning up additional pods with no downtime; the Kubernetes load balancer will recognize the new database capacity and automatically start routing requests to the new instances. Each node can independently process requests while also helping other nodes complete work such as complex queries, which can be broken up into smaller pieces and processed in parallel. Kubernetes has features to help scale out stateful services, such as giving pods predictable network identities to facilitate service discovery among the cluster instances.
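
As a sketch of how simple this is in practice (reusing the illustrative StatefulSet name from earlier), scaling out is a one-line declarative change:

# Either run:  kubectl scale statefulset cockroachdb --replicas=5
# ...or edit the manifest and re-apply:
spec:
  replicas: 5    # was 3; Kubernetes schedules two new pods, and the Service
                 # begins routing to them once they pass their readiness probes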

A bridge to the cloud for relational databases

DASH provides a necessary framework for evaluating whether a database delivers a truly Kubernetes-native architecture.

  • Disposability enables resilience, ensuring your stateful systems can survive when ephemeral cloud resources cease to exist. 
  • API symmetry enables consistency, allowing distributed databases to always provide the up-to-date answer, no matter which process is handling the client request.
  • Shared-Nothing properties enable your database to make forward progress without any centralized master or coordinator.
  • Horizontal scalability allows the database to take advantage of the unlimited and on-demand resources available in the cloud.

Kubernetes-native databases give IT teams an automated database that operates as an always-on, elastic data layer that adds the missing cloud-native foundation to their stacks.

Distributed SQL: DASH-Complete, Kubernetes-Native Database Architecture

NoSQL, Distributed SQL, and traditional relational databases make different architectural tradeoffs. But when it comes to Kubernetes compatibility, there’s a clear winner. Distributed SQL databases like CockroachDB were built from the ground up to work out of the box with Kubernetes and microservices architectures.

Running a traditional relational database on Kubernetes is a challenge, and most organizations will typically just run it alongside the platform to simplify operations. However, this often creates a bottleneck or, worse, a single point of failure for the application – a violation of the DASH framework. Running a NoSQL database is better aligned, but can still cause data consistency issues. Both traditional relational and NoSQL databases require complex operators to manage them in the environment, as they simply were not built with the same architectural primitives.

A Distributed SQL database like CockroachDB allows you to deploy a relational database seamlessly on top of Kubernetes so you can gain the advantage of all its benefits across your entire application.

Originally published at The New Stack.

About the author

Nate Stewart

Nate is the Chief Product Officer at Cockroach Labs. Prior to Cockroach Labs, he built the NYC product management organization for the venture-backed marketing SaaS company, Percolate. Outside of product, his experience spans engineering and business. Nate was a lead backend software developer at Bloomberg L.P. and received his MBA from MIT.