Upgrade to CockroachDB v23.2


Because of CockroachDB's multi-active availability design, you can perform a "rolling upgrade" of your CockroachDB cluster. This means that you can upgrade nodes one at a time without interrupting the cluster's overall health and operations.

This page describes how to upgrade to the latest v23.2 release, v23.2.4. To upgrade CockroachDB on Kubernetes, refer to single-cluster or multi-cluster instead.

Terminology

Before upgrading, review the CockroachDB release terminology:

  • A new major release is performed every 6 months. The major version number indicates the year of release followed by the release number, which will be either 1 or 2. For example, the latest major release is v23.2 (also written as v23.2.0).
  • Each supported major release is maintained across patch releases that fix crashes, security issues, and data correctness issues. Each patch release appends a patch number to the major version; for example, patch releases of v23.2 use the format v23.2.x.
  • All major and patch releases are suitable for production usage, and are therefore considered "production releases". For example, the latest production release is v23.2.4.
  • Prior to an upcoming major release, alpha and beta releases and release candidates are made available. These "testing releases" are not suitable for production usage. They are intended for users who need early access to a feature before it is available in a production release. These releases append the terms alpha, beta, or rc to the version number.
Note:

There are no "minor releases" of CockroachDB.

Step 1. Verify that you can upgrade

Warning:

In CockroachDB v22.2.x and above, a cluster that is upgraded to an alpha binary of CockroachDB or a binary that was manually built from the master branch cannot subsequently be upgraded to a production release.

Run cockroach sql against any node in the cluster to open the SQL shell. Then check your current cluster version:

> SHOW CLUSTER SETTING version;

To upgrade to v23.2.4, you must be running either:

  • Any earlier v23.2 release: v23.2.0-alpha.1 to v23.2.3.

  • A v23.1 production release: v23.1.0 to v23.1.19.

If you are running any other version, take the following steps before continuing on this page:

  • Pre-v23.2 testing release: Upgrade to a corresponding production release, then upgrade through each subsequent major release, ending with a v23.1 production release.
  • Pre-v23.1 production release: Upgrade through each subsequent major release, ending with a v23.1 production release.
  • v23.1 testing release: Upgrade to a v23.1 production release.

When you are ready to upgrade to v23.2.4, continue to step 2.

Step 2. Prepare to upgrade

Before starting the upgrade, complete the following steps.

Review breaking changes

Review the backward-incompatible changes, deprecated features, and key cluster setting changes in v23.2. If any affect your deployment, make the necessary changes before starting the rolling upgrade to v23.2.

Check load balancing

Make sure your cluster is behind a load balancer, or your clients are configured to talk to multiple nodes. If your application communicates with a single node, stopping that node to upgrade its CockroachDB binary will cause your application to fail.
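If you use HAProxy, one way to produce a suitable configuration is the cockroach gen haproxy command. This is a sketch only, assuming a secure cluster with client certs in a local certs directory; review the generated haproxy.cfg before using it:

    # Generate an haproxy.cfg that routes to every node in the cluster.
    cockroach gen haproxy --certs-dir=certs --host={address of any node}

    # After reviewing the generated file, start HAProxy with it.
    haproxy -f haproxy.cfg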

Check cluster health

Verify the overall health of your cluster using the DB Console (a command-line spot check is sketched after this list):

  • Under Node Status, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as SUSPECT or DEAD, identify why the nodes are offline and either restart them or decommission them before beginning your upgrade. If there are DEAD and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually).

  • Under Replication Status, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to identify and resolve the cause of range under-replication and/or unavailability before beginning your upgrade.

  • In the Node List:

    • Make sure all nodes are on the same version. If any nodes are behind, upgrade them to the cluster's current version first, and then start this process over.
  • In the Metrics dashboards:

    • Make sure CPU, memory, and storage capacity are within acceptable values for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. If any of these metrics is above healthy limits, consider adding nodes to your cluster before beginning your upgrade.
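As a command-line complement to these DB Console checks, the following sketch uses cockroach node status; the connection flags are placeholders for your own parameters:

    # The build column should show the same version on every node, and
    # is_available and is_live should be true for every expected node.
    cockroach node status --certs-dir=certs --host={address of any node}

    # With --ranges, ranges_unavailable and ranges_underreplicated should
    # both be 0 on every node before you begin the upgrade.
    cockroach node status --ranges --certs-dir=certs --host={address of any node}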

Check decommissioned nodes

If your cluster contains partially-decommissioned nodes, they will block an upgrade attempt.

  1. To check the status of decommissioned nodes, run the cockroach node status --decommission command:

    cockroach node status --decommission
    

    In the output, verify that the value of the membership field of each node is decommissioned. If any node's membership value is decommissioning, that node is not fully decommissioned.

  2. If any node is not fully decommissioned, try the following:

    1. First, reissue the decommission command. This second attempt typically succeeds within a few minutes.
    2. If the second decommission command does not succeed, recommission the node and then decommission it again (example commands are sketched after this list). Before continuing the upgrade, the node must be marked as decommissioned.
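A minimal sketch of those commands, assuming a secure cluster with client certs in a local certs directory; replace {node ID} with the ID reported by cockroach node status --decommission:

    # Re-issue the decommission for the stuck node.
    cockroach node decommission {node ID} --certs-dir=certs --host={address of any live node}

    # If it still does not complete, recommission the node, then decommission it again.
    cockroach node recommission {node ID} --certs-dir=certs --host={address of any live node}
    cockroach node decommission {node ID} --certs-dir=certs --host={address of any live node}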

Back up cluster

Because CockroachDB is designed with high fault tolerance, backups are primarily needed for disaster recovery. However, taking regular backups of your data is an operational best practice. When upgrading to a major release, we recommend taking a backup of your cluster. See our support policy for restoring backups across versions.
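For example, a full cluster backup can be taken from any node. This is a sketch only; the storage URI is a placeholder for your own cloud storage location (or a previously created external connection), and the connection flags are placeholders for your own parameters:

    # Take a full cluster backup into external storage before upgrading.
    cockroach sql --certs-dir=certs --host={address of any node} \
      -e "BACKUP INTO 'external://backup_storage' AS OF SYSTEM TIME '-10s';"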

Step 3. Decide how the upgrade will be finalized

Note:

This step is relevant only when upgrading from v23.1.x to v23.2. For upgrades within the v23.2.x series, skip this step.

By default, after all nodes are running the new version, the upgrade process will be auto-finalized. This will enable certain features and performance improvements introduced in v23.2. However, once the upgrade is finalized, it will no longer be possible to roll back to v23.1. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the previous binary and then restore from one of the backups created prior to performing the upgrade. For this reason, we recommend disabling auto-finalization so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade. Note that you will then need to follow all of the subsequent directions, including the manual finalization in step 6:

  1. Upgrade to v23.1, if you haven't already.

  2. Start the cockroach sql shell against any node in the cluster.

  3. Set the cluster.preserve_downgrade_option cluster setting:

    SET CLUSTER SETTING cluster.preserve_downgrade_option = '23.1';
    

    It is only possible to set this setting to the current cluster version.
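    To confirm that the setting took effect, you can read it back; this is a sketch only, with placeholder connection flags:

    # The output should be 23.1 until you finalize the upgrade in step 6.
    cockroach sql --certs-dir=certs --host={address of any node} \
      -e "SHOW CLUSTER SETTING cluster.preserve_downgrade_option;"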

Features that require upgrade finalization

When upgrading from v23.1 to v23.2, certain features and performance improvements will be enabled only after finalizing the upgrade, including but not limited to:

  • The coalescing of storage ranges for each table, index, or partition (collectively referred to as "schema objects") into a single range when individual schema objects are smaller than the default configured maximum range size (controlled using zone configs, specifically the range_max_bytes parameter). This change improves scalability with respect to the number of schema objects, since the underlying range count is no longer a potential performance bottleneck. After finalizing the upgrade to v23.2, you may observe a round of range merges and snapshot transfers. To disable this optimization, before finalizing the upgrade, set the spanconfig.storage_coalesce_adjacent.enabled cluster setting to false (see the example after this list). See the v23.1 release notes for SHOW RANGES for more details. #102961
  • The new output log format, which allows configuration of a time zone in log output. Before configuring a time zone, the cluster must be finalized on v23.2. #104265
  • Performance improvements when a node reclaims disk space. #106177
  • The following admission control mechanisms, which help to maintain cluster performance and availability when some nodes experience high load (#98308):
    • Delete operations
    • Replication
  • Collecting a statement diagnostic bundle for a particular plan. The existing fingerprint-based matching has been extended to also include plan-gist-based matching and "anti-matching" (collecting a bundle for any plan other than the provided plan gist). #105477
  • A new system table, system.region_liveness, that tracks the availability and the timestamp of the latest unavailability for each cluster region. #107903
  • The ability of a WaitPolicy_Error request to push the timestamp of a transaction with a lower priority. #108190
  • Configuring a changefeed with the lagging_ranges_threshold or lagging_ranges_polling_interval changefeed options. #110649
  • Removal of the upgrade step grantExecuteToPublicOnAllFunctions, which is no longer required because post-serialization changes now grant EXECUTE on functions to the public role. #114203
  • A fix to a bug that could allow a user to execute a user-defined function without the EXECUTE privilege on the function. If a user does not have the privilege, the user-defined function does not run and an error is logged. #114203
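As referenced in the first item above, a minimal sketch of disabling range coalescing before finalization; the connection flags are placeholders for your own parameters:

    # Optional: run before finalizing the upgrade to keep ranges from being coalesced.
    cockroach sql --certs-dir=certs --host={address of any node} \
      -e "SET CLUSTER SETTING spanconfig.storage_coalesce_adjacent.enabled = false;"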

For more details about a given feature, refer to the CockroachDB v23.2.0 release notes.

Step 4. Perform the rolling upgrade

Tip:

Cockroach Labs recommends creating scripts to perform these steps instead of performing them manually. A minimal per-node sketch is provided after the steps below.

Follow these steps to perform the rolling upgrade. To upgrade CockroachDB on Kubernetes, refer to single-cluster or multi-cluster instead.

For each node in your cluster, complete the following steps. Be sure to upgrade only one node at a time, and wait at least one minute after a node rejoins the cluster to upgrade the next node. Simultaneously upgrading more than one node increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability.

Warning:

After beginning a major-version upgrade, Cockroach Labs recommends upgrading all nodes as quickly as possible. In a cluster with nodes running different major versions of CockroachDB, a query that is sent to an upgraded node can be distributed only among other upgraded nodes. Data accesses that would otherwise be local may become remote, and the performance of these queries can suffer.

These steps perform an upgrade to the latest v23.2 release, v23.2.4.

  1. Drain and shut down the node.

  2. Visit What's New in v23.2? and download the CockroachDB v23.2.4 full binary for your architecture.

  3. Extract the archive. In the following instructions, replace {COCKROACHDB_DIR} with the path to the extracted archive directory.

  4. If you have a previous version of the cockroach binary in your $PATH, rename the outdated cockroach binary, and then move the new one into its place.

    If you get a permission error because the cockroach binary is located in a system directory, add sudo before each command. The binary will be owned by the effective user, which is root if you use sudo.

    i="$(which cockroach)"; mv "$i" "$i"_old

    cp -i {COCKROACHDB_DIR}/cockroach /usr/local/bin/cockroach

  5. If a cluster has corrupt descriptors, a major-version upgrade cannot be finalized. In CockroachDB v23.2 and above, automatic descriptor repair is enabled by default. After restarting each cluster node on v23.2, monitor the cluster logs for errors. If a descriptor cannot be repaired automatically, contact support for assistance completing the upgrade. To disable automatic descriptor repair (not generally recommended), set the environment variable COCKROACH_RUN_FIRST_UPGRADE_PRECONDITION to false.

  6. Start the node so that it can rejoin the cluster.

    Without a process manager like systemd, re-run the cockroach start command that you used to start the node initially, for example:

    cockroach start \
        --certs-dir=certs \
        --advertise-addr={node address} \
        --join={node1 address},{node2 address},{node3 address}

    If you are using systemd as the process manager, run this command to start the node:

    systemctl start {systemd config filename}

  7. Verify the node has rejoined the cluster through its output to stdout or through the DB Console.

  8. If you use cockroach in your $PATH, you can remove the previous binary:

    rm /usr/local/bin/cockroach_old

    If you leave versioned binaries on your servers, you do not need to do anything.

  9. After the node has rejoined the cluster, ensure that the node is ready to accept a SQL connection.

    Unless there are tens of thousands of ranges on the node, it's usually sufficient to wait one minute. To be certain that the node is ready, run the following command:

    cockroach sql -e 'select 1'

    The command will automatically wait to complete until the node is ready.

  10. Repeat these steps for the next node.
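As noted in the Tip above, these steps are good candidates for scripting. The following is a minimal per-node sketch only; the systemd unit name, node addresses, binary staging path, and certs directory are assumptions to adapt to your own deployment:

    #!/usr/bin/env bash
    # Minimal rolling-upgrade sketch. Assumptions (adjust before use): a systemd
    # unit named "cockroach", SSH and sudo access to each node, the v23.2.4 binary
    # already staged at /tmp/cockroach-v23.2.4 on each node, and client certs in ./certs.
    set -euo pipefail

    NODES=("node1.example.com" "node2.example.com" "node3.example.com")  # placeholder addresses
    NEW_BINARY="/tmp/cockroach-v23.2.4"

    for node in "${NODES[@]}"; do
      echo "Upgrading ${node}..."

      # Drain the node, stop it, swap in the new binary, and restart it.
      cockroach node drain --certs-dir=certs --host="${node}"
      ssh "${node}" "sudo systemctl stop cockroach && \
                     sudo mv /usr/local/bin/cockroach /usr/local/bin/cockroach_old && \
                     sudo cp ${NEW_BINARY} /usr/local/bin/cockroach && \
                     sudo systemctl start cockroach"

      # Wait until the node accepts SQL connections again (see step 9 above).
      until cockroach sql --certs-dir=certs --host="${node}" -e 'SELECT 1' >/dev/null 2>&1; do
        sleep 5
      done

      # Wait at least one minute after the node rejoins before upgrading the next node.
      sleep 60
    done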

Step 5. Roll back the upgrade (optional)

If you decide to roll back to v23.1, you must do so before the upgrade has been finalized, as described in the next section. It is always possible to roll back to a previous v23.2 version.

To roll back an upgrade, do the following on each cluster node:

  1. Perform a rolling upgrade, as described in the previous section, but replace the upgraded cockroach binary on each node with the binary for the previous version.
  2. Restart the cockroach process on the node and verify that it has rejoined the cluster before rolling back the upgrade on the next node.
  3. After all nodes have been rolled back and rejoined the cluster, finalize the rollback in the same way as you would finalize an upgrade, as described in the next section.

Step 6. Finish the upgrade

Note:

This step is relevant only when upgrading from v23.1.x to v23.2. For upgrades within the v23.2.x series, skip this step.

If you disabled auto-finalization in step 3, monitor the stability and performance of your cluster for as long as you require to feel comfortable with the upgrade (generally at least a day). If during this time you decide to roll back the upgrade, repeat the rolling restart procedure with the previous binary.

Once you are satisfied with the new version:

  1. Run cockroach sql against any node in the cluster to open the SQL shell.

  2. Re-enable auto-finalization:

    > RESET CLUSTER SETTING cluster.preserve_downgrade_option;

    Note:

    All schema change jobs must reach a terminal state before finalization can complete. Finalization can therefore take as long as the longest-running schema change (a quick check for unfinished schema changes is sketched below). Otherwise, the amount of time required for finalization depends on the amount of data in the cluster, as it kicks off various internal maintenance and migration tasks. During this time, the cluster will experience a small amount of additional load.
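    A minimal sketch of checking for unfinished schema changes, with placeholder connection flags:

    # List schema change jobs that have not yet reached a terminal state.
    cockroach sql --certs-dir=certs --host={address of any node} \
      -e "SELECT job_id, job_type, status FROM [SHOW JOBS] WHERE job_type LIKE '%SCHEMA CHANGE%' AND status NOT IN ('succeeded', 'failed', 'canceled');"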

  3. Check the cluster version to confirm that the finalize step has completed:

    > SHOW CLUSTER SETTING version;

    When the upgrade has been finalized, the cluster will report that it is on the new version. If the cluster continues to report the previous version, finalization has not completed. While finalization is in progress, the output still shows the previous major version but may include additional details about the finalization progress. If auto-finalization is enabled but finalization has not completed, check for decommissioning nodes where decommission has not finished (a quick check is sketched below). If you have trouble upgrading, contact Support.
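    A minimal sketch of that check, using the same command shown earlier on this page, with placeholder connection flags:

    # Any node whose membership is still "decommissioning" can block finalization.
    cockroach node status --decommission --certs-dir=certs --host={address of any live node}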

After the upgrade to v23.2 is finalized, you may notice an increase in compaction activity due to a background migration within the storage engine. To observe the migration's progress, check the Compactions section of the Storage Dashboard in the DB Console or monitor the storage.marked-for-compaction-files time-series metric. When the metric's value nears or reaches 0, the migration is complete and compaction activity will return to normal levels.

Tip:

By default, the storage engine uses a compaction concurrency of 3. If you have sufficient IOPS and CPU headroom, you can consider increasing this setting via the COCKROACH_COMPACTION_CONCURRENCY environment variable. This may help to reshape the LSM more quickly in inverted LSM scenarios, and it can lead to increased overall performance for some workloads. Cockroach Labs strongly recommends testing your workload against non-default values of this setting.
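A minimal sketch of raising the concurrency on a single node; the value 6 is an arbitrary example, not a recommendation, and the variable must be set in the environment that launches the cockroach process (for example, in your systemd unit):

    # Example only: raise compaction concurrency from the default of 3 to 6,
    # then restart the node with its usual start command.
    export COCKROACH_COMPACTION_CONCURRENCY=6
    cockroach start \
        --certs-dir=certs \
        --advertise-addr={node address} \
        --join={node1 address},{node2 address},{node3 address}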

Troubleshooting

After the upgrade has finalized (whether manually or automatically), it is no longer possible to downgrade to the previous release. If you are experiencing problems, we therefore recommend that you:

  1. Run the cockroach debug zip command against any node in the cluster to capture your cluster's state (a sketch follows this list).

  2. Reach out for support from Cockroach Labs, sharing your debug zip.
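A minimal sketch of that command, assuming a secure cluster with client certs in a local certs directory; the host and output filename are placeholders:

    # Capture cluster state into a local archive to share with support.
    cockroach debug zip ./cockroach-debug.zip --certs-dir=certs --host={address of any live node}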

In the event of catastrophic failure or corruption, the only option will be to start a new cluster using the previous binary and then restore from one of the backups created prior to performing the upgrade.
