High availability without giving up consistency

If you’re reading this, you’re surely familiar with the arguments for high availability: services are only useful when they’re online. An unavailable service doesn’t just lose money; it also erodes your credibility with customers, a cost that keeps accruing long after the outage ends.

Given that CockroachDB got its name because of its ability to survive failures, we thought we would cover some architectural considerations when building high availability services on top of Cockroach.

Why Choose CockroachDB at All

Apps and web services have become deeply intertwined in our lives, so it’s natural that our expectations of them have dramatically increased. First and foremost, we want them to be on––always. At this point, that’s less a feature than a baseline requirement.

Second, we expect everything to “just work”. For an end user, this is a simple (albeit vague) requirement: data should remain consistent. In some places this matters less (the number of likes on a social media post), but in others it’s crucial to the user’s experience: shopping carts shouldn’t lose items, and reservations should be set in stone.

For an infrastructure team, though, accomplishing this can be elusive. Keeping services always on while guaranteeing that they behave exactly as users expect has historically been a difficult combination.

With CockroachDB, though, you can build on a reliable foundation. Using our Multi-Active Availability model, your cluster is guaranteed to remain consistent while still tolerating failures.

Multi-Active Availability: The tl;dr Version

CockroachDB starts with the premise that you’re deploying across multiple machines (probably in a cloud environment)––as every high availability model does. From there, you’ll need to run at least 3 machines.

Why three?

To replicate data between nodes, CockroachDB relies on the Raft consensus protocol––using it, we guarantee that data remains consistent by requiring that a majority (or quorum) of replicas agree on the data’s current state.

So, three is the smallest number of nodes that can lose a member and still reach a majority: with three replicas, any two form a quorum, so the cluster keeps serving reads and writes even if one node fails entirely. That third node doesn’t just power consensus (and therefore consistency); it also means you can lose a node without forcing the cluster to go down.

To tolerate more failures, you simply need to increase the number of replicas (as well as the number of machines they’re on).
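For example, here’s a rough sketch of how replica counts map to failure tolerance, along with raising the replication factor from the default of 3 to 5. The ALTER RANGE ... CONFIGURE ZONE statement is an assumption based on recent CockroachDB versions; older releases configure replication zones differently, so treat this as illustrative rather than copy-paste.

# Replicas : majority needed : failures tolerated
#    3     :        2        :        1
#    5     :        3        :        2
#    7     :        4        :        3

# A sketch: raise the default replication factor from 3 to 5
# (CONFIGURE ZONE syntax varies by CockroachDB version)
cockroach sql --insecure -e "ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;"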

CockroachDB’s architecture also lets any node serve data for the entire cluster, including data it doesn’t store. For those details, check out our Distribution Layer’s documentation.
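As a quick illustration (the node addresses and the kv table below are placeholders, not required setup), you can write through one node and read the same row back through another, even if that node doesn’t store the relevant range locally:

# Write through node 1 (address is a placeholder)
cockroach sql --insecure --host=<node1-address> -e "CREATE TABLE IF NOT EXISTS kv (k INT PRIMARY KEY, v STRING)" -e "INSERT INTO kv VALUES (1, 'hello')"

# Read the same row back through node 3
cockroach sql --insecure --host=<node3-address> -e "SELECT v FROM kv WHERE k = 1"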

The CockroachDB High Availability Recipe

With an understanding that CockroachDB must have a majority of nodes online to remain consistent and available, let’s look at what that means in practice.

Where is it OK to fail?

It’s important to first identify how large of a failure you want to tolerate. For example, you likely want to gracefully handle single machines failing––but what about an entire availability zone? Or an entire data center?

For some teams, the likelihood of an entire data center failing is low enough that they’re OK with their service going offline in that case. So, the largest element whose failure they want to handle is simply an availability zone.

Building a Robust Deployment

To survive the failure of the element you identified in the last section, you’ll need your CockroachDB cluster to be deployed across 3 availability zones. This way, if one AZ goes down, you still have 2 that are operational and your cluster remains active.

To make sure that your data gets evenly distributed across these 3 availability zones, though, you’ll need to use CockroachDB’s --locality flag to identify which node is in which availability zone. Here’s a quick example:

# Start a node in availability zone us-east-1a
cockroach start --locality=az=us-east-1a ...

# Start a node in availability zone us-east-1b
cockroach start --locality=az=us-east-1b ...

# Start a node in availability zone us-east-1c
cockroach start --locality=az=us-east-1c ...

Once these nodes are started, CockroachDB automatically ensures that data is evenly distributed across availability zones, maximizing your ability to survive a failure.
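If you want to spot-check where replicas actually landed, you can inspect range distribution from any node. The statement below is a sketch (the kv table is a placeholder, and SHOW RANGES syntax varies across CockroachDB versions; older releases used SHOW EXPERIMENTAL_RANGES):

# See which nodes hold the replicas of a table's ranges
cockroach sql --insecure -e "SHOW RANGES FROM TABLE kv"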

...And Other Considerations

Unsurprisingly, there are many other factors to weigh when building a high availability service. While CockroachDB makes the job easier than other databases do, the list is still a long one.

To make the task less daunting, we’ve created a guide: Building Highly Available & Consistent Services with CockroachDB. In it, we cover availability models, as well as tactical guidance to ensure your deployments can survive outages of any size––keeping your customers happy. You can check out the guide here.

If building a distributed SQL system from the ground up puts a spring in your step, then good news — we're hiring! Check out our open positions here.

Illustration by Christina Chung
