Follow-the-Workload Topology

Warning:
CockroachDB v22.1 is no longer supported. For more details, see the Release Support Policy.

In a multi-region deployment, follow-the-workload is the default behavior for tables that do not have a table locality. In general, this is a good choice only for tables with the following requirements:

  • The table is active mostly in one region at a time, e.g., following the sun.
  • In the active region, read latency must be low, but write latency can be higher.
  • In non-active regions, both read and write latency can be higher.
  • Table data must remain available during a region failure.
Tip:

If read performance is your main focus for a table, but you want low-latency reads everywhere instead of just in the most active region, consider global tables or follower reads.
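
For example, a follower read can serve a slightly stale but local result from the nearest replica, with no schema changes required. The table name below is only a placeholder, and the command is a minimal sketch of the idea:

# A stale-but-local read that can be served by a nearby replica instead of the leaseholder:
$ cockroach sql --certs-dir=certs --host=<address of a node in this region> \
--execute="SELECT * FROM postal_codes AS OF SYSTEM TIME follower_read_timestamp();"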

Note that if you start using the multi-region SQL abstractions for a database, CockroachDB will no longer provide the follow-the-workload behavior described on this page for that database.
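
For reference, a database opts into the multi-region abstractions as soon as a primary region is set for it. The database name below is a placeholder, and the region name is assumed to match one of the regions used in the nodes' --locality flags:

$ cockroach sql --certs-dir=certs --host=<address of any node> \
--execute="ALTER DATABASE movr SET PRIMARY REGION \"us-east\";"

After this, table localities (for example, GLOBAL or REGIONAL BY ROW) control placement for tables in that database instead of follow-the-workload.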

Before you begin

Fundamentals

  • Multi-region topology patterns are almost always table-specific. If you haven't already, review the full range of patterns to ensure you choose the right one for each of your tables.
  • Review how data is replicated and distributed across a cluster, and how this affects performance. It is especially important to understand the concept of the "leaseholder". For a summary, see Reads and Writes in CockroachDB. For a deeper dive, see the CockroachDB Architecture Overview.
  • Review the concept of locality, which CockroachDB uses to place and balance data based on how you define replication controls.
  • Review the recommendations and requirements in our Production Checklist.
  • This topology doesn't account for hardware specifications, so be sure to follow our hardware recommendations and perform a POC to size hardware for your use case. For optimal cluster performance, Cockroach Labs recommends that all nodes use the same hardware and operating system.
  • Adopt relevant SQL Best Practices to ensure optimal performance.

Cluster setup

Each multi-region pattern assumes the following setup:

Multi-region hardware setup

Hardware

  • 3 regions
  • Per region, 3+ AZs with 3+ VMs evenly distributed across them
  • Region-specific app instances and load balancers
    • Each load balancer redirects to CockroachDB nodes in its region.
    • When CockroachDB nodes are unavailable in a region, the load balancer redirects to nodes in other regions.
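
One way to bootstrap the per-region load balancers is to start from the configuration produced by the cockroach gen haproxy command and trim it per region; the host placeholder below is any node the command can reach, and this is only one possible approach:

# Generates haproxy.cfg in the current directory; edit the server list so that each
# region's load balancer targets only the CockroachDB nodes in its own region.
$ cockroach gen haproxy \
--certs-dir=certs \
--host=<address of any node>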

Cluster startup

Start each node with the --locality flag specifying its region and AZ combination. For example, the following command starts a node in the west1 AZ of the us-west region:

$ cockroach start \
--locality=region=us-west,zone=west1 \
--certs-dir=certs \
--advertise-addr=<node1 internal address> \
--join=<node1 internal address>:26257,<node2 internal address>:26257,<node3 internal address>:26257 \
--cache=.25 \
--max-sql-memory=.25 \
--background
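
After the nodes in each region are started this way, perform the one-time cluster initialization from any machine that can reach one of them:

# Initializes the cluster; run this once, against any node.
$ cockroach init \
--certs-dir=certs \
--host=<address of any node>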

Configuration

Aside from deploying the cluster across three regions, with each node started with the --locality flag specifying its region and zone combination, this behavior requires no extra configuration. CockroachDB balances the replicas for a table across the three regions and assigns the range lease to the replica in the region with the greatest demand at any given time (the follow-the-workload feature). As a result, read latency in the active region is low, while read latency in other regions is higher because those reads must leave the region to reach the leaseholder. Write latency is higher in all regions because writes always involve replicas in multiple regions.

Follow-the-workload table replication
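
To spot-check this behavior, you can look at where a table's leaseholders and replicas currently sit. The table name below is a placeholder, and the exact output columns vary by version:

# lease_holder_locality shows the region currently holding each range's lease;
# replica_localities should span all three regions.
$ cockroach sql --certs-dir=certs --host=<address of any node> \
--execute="SHOW RANGES FROM TABLE vehicles;"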

Note:

Follow-the-workload is also used by system ranges containing important internal data.

Characteristics

Latency

Reads

Reads in the region with the most demand will access the local leaseholder and, therefore, never leave the region. This makes read latency very low in the currently most active region. Reads in other regions, however, will be routed to the leaseholder in a different region and, thus, read latency will be higher.

For example, in the animation below, the most active region is us-east and, thus, the table's leaseholder is in that region:

  1. The read request in us-east reaches the regional load balancer.
  2. The load balancer routes the request to a gateway node.
  3. The gateway node routes the request to the leaseholder replica.
  4. The leaseholder retrieves the results and returns them to the gateway node.
  5. The gateway node returns the results to the client. In this case, reads in us-east remain in the region and have lower latency than reads in other regions.

Follow-the-workload topology reads
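
One rough way to observe this difference is to run the same read through EXPLAIN ANALYZE from a gateway node in the active region and again from a gateway node in another region, then compare the reported execution time. The table name below is a placeholder:

$ cockroach sql --certs-dir=certs --host=<address of a node in this region> \
--execute="EXPLAIN ANALYZE SELECT * FROM vehicles LIMIT 1;"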

Writes

The replicas for the table are spread across all 3 regions, so writes involve multiple network hops across regions to achieve consensus. This increases write latency significantly.

For example, in the animation below, assuming the most active region is still us-east:

  1. The write request in us-east reaches the regional load balancer.
  2. The load balancer routes the request to a gateway node.
  3. The gateway node routes the request to the leaseholder replica.
  4. While the leaseholder appends the write to its Raft log, it notifies its follower replicas.
  5. As soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leaseholder and the write is committed on the agreeing replicas.
  6. The leaseholder then returns acknowledgement of the commit to the gateway node.
  7. The gateway node returns the acknowledgement to the client.

Follow-the-workload topology writes
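
The size of that consensus group is visible in the table's zone configuration: with the default num_replicas = 3 spread across three regions, a commit needs acknowledgement from 2 of the 3 replicas, so every write pays at least one cross-region round trip. The table name below is a placeholder:

$ cockroach sql --certs-dir=certs --host=<address of any node> \
--execute="SHOW ZONE CONFIGURATION FROM TABLE vehicles;"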

Resiliency

Because this pattern balances the replicas for the table across regions, one entire region can fail without interrupting access to the table:

Follow-the-workload topology region failure
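
To verify this in a test environment, stop the nodes in one region and confirm that the remaining nodes still report as live and that the table continues to serve reads and writes. For example:

# The is_available and is_live columns indicate whether each node can participate in quorum.
$ cockroach node status \
--certs-dir=certs \
--host=<address of a node in a surviving region>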
