Backup and Restore Overview


CockroachDB is built to be fault-tolerant with automatic recovery, but sometimes disasters happen. Backup and restore is an important part of a robust disaster recovery plan. CockroachDB Self-Hosted clusters provide a range of backup and restore features.

You can create full or incremental backups of a cluster, database, or table. Taking regular backups of your data is an operational best practice.
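
For example, a minimal sketch of a full backup at each level. The storage URL, database, and table names here are placeholders; substitute your own bucket and objects:

    -- Full backup of the entire cluster.
    BACKUP INTO 's3://your-bucket/backups?AUTH=implicit';

    -- Full backup of a single database.
    BACKUP DATABASE movr INTO 's3://your-bucket/backups?AUTH=implicit';

    -- Full backup of a single table.
    BACKUP TABLE movr.users INTO 's3://your-bucket/backups?AUTH=implicit';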

For a technical explanation of how a backup works, refer to the Backup Architecture page.

Backup and restore support

This table outlines the level of product support for backup and restore features in CockroachDB. See each of the pages linked in the table for usage examples:

Backup / Restore | Description | Self-hosted support
Full backup | An un-replicated copy of your cluster, database, or table's data. A full backup is the base for any further backups. | Enterprise license not required
Incremental backup | A copy of the changes in your data since the specified base backup (either a full backup or a full backup plus an incremental backup).
Scheduled backup | A schedule for periodic backups.
Backups with revision history | Back up every change made within the garbage collection period leading up to and including the given timestamp.
Point-in-time restore | A restore from an arbitrary point in time within the revision history of a backup.
Encrypted backup and restore | An encrypted backup using a KMS or passphrase.
Locality-aware backup and restore | A backup where each node writes files to the backup destination that matches the node locality configured at node startup.
Locality-restricted backup execution | A backup that uses the EXECUTION LOCALITY option to restrict the nodes that can execute the backup job to a defined locality filter.
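
As a sketch of how several of the features in this table combine, assuming a full backup already exists in the placeholder collection s3://your-bucket/backups and that the movr database is the target:

    -- Incremental backup with revision history, appended to the most recent full backup.
    BACKUP INTO LATEST IN 's3://your-bucket/backups?AUTH=implicit'
        WITH revision_history;

    -- Point-in-time restore from the backup's revision history; the timestamp is a
    -- placeholder and must fall within the interval the backup covers.
    RESTORE DATABASE movr FROM LATEST IN 's3://your-bucket/backups?AUTH=implicit'
        AS OF SYSTEM TIME '2024-01-01 10:00:00';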

Additional backup and restore features

Scheduled backups

Tip:

We recommend using scheduled backups to automate daily backups of your cluster.

CockroachDB supports creating schedules for periodic backups. Scheduled backups ensure that the data to be backed up is protected from garbage collection until it has been successfully backed up. This active management of protected timestamps means that you can run scheduled backups at a cadence independent from the GC TTL of the data.
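
For example, a minimal schedule sketch. The label, collection URL, and cadence are placeholders to adapt to your own requirements:

    -- Hourly incremental backups layered on a daily full backup, starting immediately.
    CREATE SCHEDULE daily_backup
        FOR BACKUP INTO 's3://your-bucket/backups?AUTH=implicit'
        WITH revision_history
        RECURRING '@hourly'
        FULL BACKUP '@daily'
        WITH SCHEDULE OPTIONS first_run = 'now';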

For detail on the scheduled backup features CockroachDB supports, refer to the CREATE SCHEDULE FOR BACKUP documentation.

Backup jobs with locality requirements

CockroachDB supports two backup features that use a node's locality to determine how a backup job runs or where the backup data is stored:

  • Locality-restricted backup execution: Specify a set of locality filters for a backup job in order to restrict the nodes that can participate in the backup process to that locality. This ensures that the backup job is executed by nodes that meet certain requirements, such as being located in a specific region or having access to a certain storage bucket.
  • Locality-aware backup: Partition and store backup data in a way that is optimized for locality. When you run a locality-aware backup, nodes write backup data to the cloud storage bucket that is closest to the node locality configured at node startup.
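
A sketch of both options follows; the locality tiers and bucket names are hypothetical:

    -- Locality-restricted backup execution: only nodes matching the filter
    -- can execute the backup job.
    BACKUP INTO 's3://your-bucket/backups?AUTH=implicit'
        WITH EXECUTION LOCALITY = 'region=us-west-2';

    -- Locality-aware backup: each node writes to the URI whose COCKROACH_LOCALITY
    -- matches its own locality; one URI must be designated as the default.
    BACKUP INTO
        ('s3://us-east-bucket/backups?COCKROACH_LOCALITY=default',
         's3://us-west-bucket/backups?COCKROACH_LOCALITY=region%3Dus-west-2');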

Backup and restore SQL statements

The following table outlines SQL statements you can use to create, configure, pause, and show backup and restore jobs:

SQL Statement | Description
BACKUP | Create full and incremental backups.
SHOW JOBS | Show a list of all running jobs or show the details of a specific job by its job ID.
PAUSE JOB | Pause a backup or restore job with its job ID.
RESUME JOB | Resume a backup or restore job with its job ID.
CANCEL JOB | Cancel a backup or restore job with its job ID.
SHOW BACKUP | Show a backup's details at the backup collection's storage location.
RESTORE | Restore full and incremental backups.
ALTER BACKUP | Add a new KMS encryption key to an encrypted backup.
CREATE SCHEDULE FOR BACKUP | Create a schedule for periodic backups.
ALTER BACKUP SCHEDULE | Alter an existing backup schedule.
SHOW SCHEDULES | View information on backup schedules.
PAUSE SCHEDULES | Pause backup schedules.
RESUME SCHEDULES | Resume paused backup schedules.
DROP SCHEDULES | Drop backup schedules.
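
For example, a sketch of a typical job-management flow using these statements; the job ID and collection URL are placeholders:

    -- List jobs, then pause, resume, or cancel one by its job ID.
    SHOW JOBS;
    PAUSE JOB 123456789012345678;
    RESUME JOB 123456789012345678;
    CANCEL JOB 123456789012345678;

    -- List the backups in a collection, then inspect the most recent one.
    SHOW BACKUPS IN 's3://your-bucket/backups?AUTH=implicit';
    SHOW BACKUP FROM LATEST IN 's3://your-bucket/backups?AUTH=implicit';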

Backup storage

We recommend taking backups to cloud storage and enabling object locking to protect the validity of your backups. CockroachDB supports Amazon S3, Azure Storage, and Google Cloud Storage for backups. Read the following usage information:

  • Example file URLs: how to form the URL that you pass to BACKUP and RESTORE statements.
  • Authentication: how to set up authentication to a cloud storage bucket and include those credentials in the URL.
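
For example, the two common authentication patterns for an Amazon S3 URL; all credential values here are placeholders:

    -- Specified authentication: credentials are passed directly in the URL.
    BACKUP INTO 's3://your-bucket/backups?AWS_ACCESS_KEY_ID={ACCESS KEY}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';

    -- Implicit authentication: credentials come from the node's environment (for example, an attached IAM role).
    BACKUP INTO 's3://your-bucket/backups?AUTH=implicit';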

For detail on additional cloud storage features CockroachDB supports, refer to the cloud storage documentation.

Backup and restore observability

You can verify that your stored backups are restorable with backup validation. While a successful restore completely validates a backup, the validation tools offer a faster alternative and return an error message if a backup is not valid. There are three "levels" of backup validation that provide increasing coverage depending on the amount of runtime you want to invest.

See the Backup Validation page for detail and examples.
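
As a sketch of these validation levels, assuming the check_files, schema_only, and new_db_name options available in recent CockroachDB versions and the placeholder collection URL used above:

    -- Lighter-weight check: confirm the backup's metadata and data files are present and readable.
    SHOW BACKUP FROM LATEST IN 's3://your-bucket/backups?AUTH=implicit' WITH check_files;

    -- Heavier-weight check: restore only the schema (no table data) under a temporary database name.
    RESTORE DATABASE movr FROM LATEST IN 's3://your-bucket/backups?AUTH=implicit'
        WITH schema_only, new_db_name = 'movr_validate';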

You can track backup jobs using metrics that cover scheduled backups, status of running jobs, and details on completed or failed jobs. You can alert on these metrics via the Prometheus endpoint or the Datadog integration.

See the Backup and Restore Monitoring page for product availability and a list of the available metrics.
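
For example, a quick SQL-level check that complements these metrics, using SHOW JOBS as a query source:

    -- Surface backup and restore jobs that did not complete successfully.
    SELECT job_id, job_type, status, error
    FROM [SHOW JOBS]
    WHERE job_type IN ('BACKUP', 'RESTORE')
      AND status IN ('failed', 'canceled');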

Video demo

[Video: practical examples of running backup and restore jobs.]
