Physical Cluster Replication


This feature is in preview. This feature is subject to change. To share feedback and/or issues, contact Support.

Refer to the Known Limitations section for further detail.

New in v23.2: CockroachDB physical cluster replication continuously sends all data at the byte level from a primary cluster to an independent standby cluster. Existing data and ongoing changes on the active primary cluster, which is serving application data, replicate asynchronously to the passive standby cluster.

In a disaster recovery scenario, you can cut over from the unavailable primary cluster to the standby cluster. This will stop the replication stream, reset the standby cluster to a point in time where all ingested data is consistent, and mark the standby as ready to accept application traffic.
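Cutover is initiated with a single SQL statement from the standby cluster's system interface. As a sketch (the virtual cluster name application is a placeholder for illustration):

```sql
-- Run from the standby cluster's system interface;
-- 'application' is a placeholder virtual cluster name.
ALTER VIRTUAL CLUSTER application COMPLETE REPLICATION TO LATEST;
```

Once cutover completes, the standby's virtual cluster can be started and begin serving application traffic.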


This is an enterprise-only feature. Request a 30-day trial license to try it out.

Use cases

You can use physical cluster replication in a disaster recovery plan to:

  • Meet your RTO (Recovery Time Objective) and RPO (Recovery Point Objective) requirements. Physical cluster replication provides lower RTO and RPO than backup and restore.
  • Automatically replicate everything in your primary cluster to recover quickly from a control plane or full cluster failure.
  • Protect against region failure when you cannot use individual multi-region clusters. For example, you may have a two-datacenter architecture without access to three regions, or you may need low write latency in a single region. Physical cluster replication allows for an active-passive (primary-standby) structure across two clusters, with the passive cluster in a different region.
  • Avoid conflicts in data after recovery; the replication completes to a transactionally consistent state as of a certain point in time.

Features

  • Asynchronous byte-level replication: When you initiate a replication stream, it will replicate byte-for-byte all of the primary cluster's existing user data and associated metadata to the standby cluster asynchronously. From then on, it will continuously replicate the primary cluster's data and metadata to the standby cluster. Physical cluster replication will automatically replicate changes related to operations such as schema changes, user and privilege modifications, and zone configuration updates without any manual work.
  • Transactional consistency: You can cut over to the standby cluster at the LATEST timestamp or a point of time in the past or the future. When the cutover process completes, the standby cluster will be in a transactionally consistent state as of the point in time you specified.
  • Maintained/improved RPO and RTO: Depending on workload and deployment configuration, replication lag between the primary and standby is generally in the tens-of-seconds range. Cutover from the primary cluster to the standby typically completes within five minutes when cutting over to the latest replicated time using LATEST.
  • Cutover to a timestamp in the past or the future: In the case of logical disasters or mistakes, you can cut over from the primary to the standby cluster to a timestamp in the past. This means that you can return the standby to a timestamp before the mistake was replicated to the standby. You can also configure the WITH RETENTION option to control how far in the past you can cut over to. Furthermore, you can plan a cutover by specifying a timestamp in the future.
  • Monitoring: To monitor the replication's initial progress, current status, and performance, you can use metrics available in the DB Console and Prometheus. For more detail, refer to Physical Cluster Replication Monitoring.
  • Data verification on standby: You can verify that the data on the standby cluster matches that on the primary by comparing the fingerprint of the data on each cluster. We recommend running data verification checks regularly as part of your monitoring processes. Refer to the Data verification page for a guide and considerations.

Cutting over to a timestamp in the past involves reverting data on the standby cluster. As a result, this type of cutover takes longer to complete than a cutover to the latest replicated time. The increase in cutover time correlates with how much data you are reverting from the standby. For more detail, refer to the Technical Overview page for physical cluster replication.
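A past-timestamp cutover might look like the following sketch, assuming the stream was created with a retention window (WITH RETENTION) and using application as a placeholder virtual cluster name:

```sql
-- Cut over to a point 30 minutes in the past. The timestamp must fall
-- within the retention window configured when the stream was created.
ALTER VIRTUAL CLUSTER application COMPLETE REPLICATION TO SYSTEM TIME '-30m';
```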

Known limitations

  • Physical cluster replication is supported only on CockroachDB Self-Hosted in new v23.2 clusters. That is, clusters that have been upgraded from a previous version of CockroachDB will not support physical cluster replication.
  • Cockroach Labs supports physical cluster replication up to the following scale:
    • Initial data load: 2TB
    • Read maximum: 1000 reads per second
    • Write maximum: 850 writes per second
  • Read queries are not supported on the standby cluster before cutover.
  • The primary and standby clusters cannot have different region topologies. For example, replicating a multi-region primary cluster to a single-region standby cluster is not supported. Mismatched regions between a multi-region primary and standby cluster are also not supported.
  • Cutting back to the primary cluster after a cutover is a manual process. Refer to Cut back to the primary cluster. In addition, after cutover, to continue using physical cluster replication, you must configure it again.
  • Before cutover to the standby, the standby cluster does not support running backups or changefeeds.
  • After a cutover, there is no mechanism to stop applications from connecting to the original primary cluster. It is necessary to redirect application traffic manually, such as by using a network load balancer or adjusting DNS records.
  • Large data imports, such as those produced by RESTORE or IMPORT, may dramatically increase replication lag.

Get started

This section is a quick overview of the initial requirements to start a replication stream.

For more comprehensive guides, refer to:

Start clusters

To initiate physical cluster replication, you must start both the primary and standby CockroachDB clusters with the --config-profile flag. This enables cluster virtualization and prepares each cluster for replication.

The active primary cluster that serves application traffic:

cockroach start ... --config-profile replication-source

The passive standby cluster that will ingest the replicated data:

cockroach start ... --config-profile replication-target

The node topology of the two clusters does not need to be the same. For example, you can provision the standby cluster with fewer nodes. However, consider that:

  • The standby cluster requires enough storage to contain the primary cluster's data.
  • During a failover scenario, the standby will need to handle the full production load. Note, however, that the clusters cannot have different region topologies (refer to Limitations).

Every node in the standby cluster must be able to make a network connection to every node in the primary cluster to start a replication stream successfully. Refer to Copy certificates for detail.

Connect to the system interface and virtual cluster

A cluster with physical cluster replication enabled is a virtualized cluster; the primary and standby clusters each contain:

  • A system interface, which manages the cluster's control plane and the replication of the virtual cluster.
  • A virtual cluster, which manages its own data plane. Users connect to the virtual cluster, which contains the application user data.

To connect to a cluster using the SQL shell:

  • For the system interface, include the options=-ccluster=system parameter in the postgresql connection URL:

    cockroach sql --url "postgresql://root@{your IP or hostname}:26257/?options=-ccluster=system&sslmode=verify-full" --certs-dir "certs"
  • For the application virtual cluster, include the options=-ccluster=application parameter in the postgresql connection URL:

    cockroach sql --url "postgresql://root@{your IP or hostname}:26257/?options=-ccluster=application&sslmode=verify-full" --certs-dir "certs"

Physical cluster replication requires an Enterprise license on the primary and standby clusters. You must set Enterprise licenses from the system interface.
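For example, licenses can be applied from the system interface SQL shell on each cluster (the organization name and key below are placeholders):

```sql
-- Run on the system interface of both the primary and the standby.
SET CLUSTER SETTING cluster.organization = 'Acme Company';
SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx';
```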

To connect to the DB Console and view the Physical Cluster Replication dashboard, the user must have the correct privileges. Refer to Create a user for the standby cluster.

Manage replication in the SQL shell

To start, manage, and observe physical cluster replication, you can use the following SQL statements:

  • CREATE VIRTUAL CLUSTER ... FROM REPLICATION OF ...: Start a replication stream.
  • ALTER VIRTUAL CLUSTER ... PAUSE REPLICATION: Pause a running replication stream.
  • ALTER VIRTUAL CLUSTER ... RESUME REPLICATION: Resume a paused replication stream.
  • SHOW VIRTUAL CLUSTER: Show the virtual clusters.
  • DROP VIRTUAL CLUSTER: Remove a virtual cluster.
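For example, a minimal sequence to start and then inspect a stream from the standby's system interface might look like the following sketch (the virtual cluster name is a placeholder, and the connection string is elided):

```sql
-- Start replicating the primary's virtual cluster into the standby.
CREATE VIRTUAL CLUSTER application
  FROM REPLICATION OF application ON 'postgresql://{connection string to primary}';

-- Inspect the stream, including the latest fully replicated time.
SHOW VIRTUAL CLUSTER application WITH REPLICATION STATUS;
```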

Cluster versions and upgrades

The standby cluster host will need to be at the same version as, or one version ahead of, the primary's application virtual cluster at the time of cutover.

To upgrade the primary and standby clusters, you must apply the upgrade manually and with care. We recommend upgrading the standby cluster first, to avoid a situation in which the replicated application virtual cluster is at a higher version than the standby cluster could serve if you were to cut over.
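Before upgrading, you can confirm the active version on each side from a SQL shell, for example:

```sql
-- Run on both clusters to compare active cluster versions before upgrading.
SHOW CLUSTER SETTING version;
```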

Demo video

Learn how to harness Physical Cluster Replication to meet your RTO and RPO requirements with the following demo:
