Operate CockroachDB on Kubernetes

Warning:
CockroachDB v21.1 is no longer supported. For more details, see the Release Support Policy.
Note:

This article assumes you have already deployed CockroachDB securely on a single Kubernetes cluster. However, it's possible to configure these settings before starting CockroachDB on Kubernetes.

You can configure, scale, and upgrade a CockroachDB deployment on Kubernetes by updating its StatefulSet values. This page describes how to perform these tasks.

Note:

All kubectl steps should be performed in the namespace where you installed the Operator. By default, this is cockroach-operator-system.
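For example, rather than passing --namespace to every command, you can point your current kubectl context at the Operator's namespace (a standard kubectl command, shown here with the default namespace):

$ kubectl config set-context --current --namespace=cockroach-operator-system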

Tip:

If you deployed CockroachDB on Red Hat OpenShift, substitute kubectl with oc in the following commands.

Apply settings

Cluster parameters are configured in a CrdbCluster custom resource object. This tells the Operator how to configure the Kubernetes cluster. We provide a custom resource template called example.yaml:

# Copyright 2024 The Cockroach Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Generated, do not edit. Please edit this file instead: config/templates/example.yaml.in
#

apiVersion: crdb.cockroachlabs.com/v1alpha1
kind: CrdbCluster
metadata:
  # this translates to the name of the statefulset that is created
  name: cockroachdb
spec:
  dataStore:
    pvc:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: "60Gi"
        volumeMode: Filesystem
  resources:
    requests:
      # This is intentionally low to make it work on local k3d clusters.
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: 2
      memory: 8Gi
  tlsEnabled: true
# You can set either a version of the db or a specific image name
# cockroachDBVersion: v23.2.3
  image:
    name: cockroachdb/cockroach:v23.2.3
  # nodes refers to the number of crdb pods that are created
  # via the statefulset
  nodes: 3
  additionalLabels:
    crdb: is-cool
  # affinity is a new API field that is behind a feature gate that is
  # disabled by default.  To enable please see the operator.yaml file.

  # The affinity field will accept any podSpec affinity rule.
  # affinity:
  #   podAntiAffinity:
  #      preferredDuringSchedulingIgnoredDuringExecution:
  #      - weight: 100
  #        podAffinityTerm:
  #          labelSelector:
  #            matchExpressions:
  #            - key: app.kubernetes.io/instance
  #              operator: In
  #              values:
  #              - cockroachdb
  #          topologyKey: kubernetes.io/hostname

  # nodeSelectors used to match against
  # nodeSelector:
  #   worker-pool-name: crdb-workers

It's simplest to download and customize a local copy of the custom resource manifest. After you modify its parameters, run this command to apply the new values to the cluster:

$ kubectl apply -f example.yaml

You will see:

crdbcluster.crdb.cockroachlabs.com/{cluster-name} configured

The Operator will trigger a rolling restart of the pods to effect the change, if necessary. You can observe its progress by running kubectl get pods.
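If you prefer a single blocking command, standard kubectl can also track the rollout of the underlying StatefulSet (shown with the default cluster name cockroachdb):

$ kubectl rollout status statefulset/cockroachdb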

Apply settings

Cluster parameters are configured in the StatefulSet manifest. We provide a StatefulSet template for use in our deployment tutorial.

It's simplest to download and customize a local copy of the manifest file. After you modify its parameters, run this command to apply the new values to the cluster:

$ kubectl apply -f {manifest-filename}.yaml

You will see:

crdbcluster.crdb.cockroachlabs.com/{cluster-name} configured

Allocate resources

On a production cluster, the resources you allocate to CockroachDB should be proportionate to your machine types and workload. We recommend that you determine and set these values before deploying the cluster, but you can also update the values on a running cluster.

Tip:

Run kubectl describe nodes to see the available resources on the instances that you have provisioned.
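For a more compact view, the following standard kubectl invocation (not part of the original tutorial) lists each worker node's allocatable CPU and memory in one table:

$ kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'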

Memory and CPU

You can set the CPU and memory resources allocated to the CockroachDB container on each pod.

Note:

1 CPU in Kubernetes is equivalent to 1 vCPU or 1 hyperthread. For best practices on provisioning CPU and memory for CockroachDB, see the Production Checklist.

Specify CPU and memory values in resources.requests and resources.limits in the custom resource:

spec:
  resources:
    requests:
      cpu: "4"
      memory: "16Gi"
    limits:
      cpu: "4"
      memory: "16Gi"

Then apply the new values to the cluster.

Specify CPU and memory values in resources.requests and resources.limits in the StatefulSet manifest:

spec:
  template:
    containers:
    - name: cockroachdb
      resources:
        requests:
          cpu: "4"
          memory: "16Gi"
        limits:
          cpu: "4"
          memory: "16Gi"

Then apply the new values to the cluster.

Specify CPU and memory values in resources.requests and resources.limits in a custom values file:

statefulset:
  resources:
    limits:
      cpu: "4"
      memory: "16Gi"
    requests:
      cpu: "4"
      memory: "16Gi"

Apply the custom values to override the default Helm chart values:

$ helm upgrade {release-name} --values {custom-values}.yaml cockroachdb/cockroachdb

We recommend using identical values for resources.requests and resources.limits. When setting the new values, note that not all of a pod's resources will be available to the CockroachDB container. This is because a fraction of the CPU and memory is reserved for Kubernetes. For more information on how Kubernetes handles resource requests and limits, see the Kubernetes documentation.

Note:

If no resource limits are specified, the pods will be able to consume the maximum available CPUs and memory. However, to avoid overallocating resources when another memory-intensive workload is on the same instance, always set resource requests and limits explicitly.

Cache and SQL memory size

Each CockroachDB node reserves a portion of its available memory for its cache and for storing temporary data for SQL queries. For more information on these settings, see the Production Checklist.

Our Kubernetes manifests dynamically set cache size and SQL memory size each to 1/4 (the recommended fraction) of the available memory, which depends on the memory request and limit you specified for your configuration. If you want to customize these values, set them explicitly.
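For example, with a 16Gi memory limit, each of these would default to 4Gi (1/4 of 16Gi), which matches the explicit values shown below.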

Specify cache and maxSQLMemory in the custom resource:

spec:
  cache: "4Gi"
  maxSQLMemory: "4Gi"

Then apply the new values to the cluster.

Note:

Specifying these values is equivalent to using the --cache and --max-sql-memory flags with cockroach start.

For more information on resources, see the Kubernetes documentation.
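For reference, outside of Kubernetes the same configuration would be expressed directly on the cockroach start command line (a sketch; {other flags} stands in for the rest of your startup options):

$ cockroach start --cache=4GiB --max-sql-memory=4GiB {other flags}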

Provision volumes

When you start your cluster, Kubernetes dynamically provisions and mounts a persistent volume into each pod. For more information on persistent volumes, see the Kubernetes documentation.

The storage capacity of each volume is set in pvc.spec.resources in the custom resource:

spec:
  dataStore:
    pvc:
      spec:
        resources:
          limits:
            storage: "60Gi"
          requests:
            storage: "60Gi"

The storage capacity of each volume is initially set in volumeClaimTemplates.spec.resources in the StatefulSet manifest:

volumeClaimTemplates:
  spec:
    resources:
      requests:
        storage: 100Gi

The storage capacity of each volume is initially set in the Helm chart's values.yaml:

persistentVolume:
  size: 100Gi

You should provision an appropriate amount of disk storage for your workload. For recommendations on this, see the Production Checklist.

Expand disk size

If you discover that you need more capacity, you can expand the persistent volumes on a running cluster. Increasing disk size is often beneficial for CockroachDB performance.

Specify a new volume size in resources.requests and resources.limits in the custom resource:

spec:
  dataStore:
    pvc:
      spec:
        resources:
          limits:
            storage: "100Gi"
          requests:
            storage: "100Gi"

Then apply the new values to the cluster. The Operator updates the StatefulSet and triggers a rolling restart of the pods with the new storage capacity.

To verify that the storage capacity has been updated, run kubectl get pvc to view the persistent volume claims (PVCs). It will take a few minutes before the PVCs are completely updated.

You can expand certain types of persistent volumes (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims.

Note:

These steps assume you followed the tutorial Deploy CockroachDB on Kubernetes.

  1. Get the persistent volume claims for the volumes:

    $ kubectl get pvc
    
    NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    datadir-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-cockroachdb-1   Bound    pvc-75e143ca-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-cockroachdb-2   Bound    pvc-75ef409a-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    
  2. In order to expand a persistent volume claim, AllowVolumeExpansion in its storage class must be true. Examine the storage class:

    $ kubectl describe storageclass standard
    
    Name:                  standard
    IsDefaultClass:        Yes
    Annotations:           storageclass.kubernetes.io/is-default-class=true
    Provisioner:           kubernetes.io/gce-pd
    Parameters:            type=pd-standard
    AllowVolumeExpansion:  False
    MountOptions:          <none>
    ReclaimPolicy:         Delete
    VolumeBindingMode:     Immediate
    Events:                <none>
    

    If necessary, edit the storage class:

    $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
    
    storageclass.storage.k8s.io/standard patched
    
  3. Edit one of the persistent volume claims to request more space:

    Note:

    The requested storage value must be larger than the previous value. You cannot use this method to decrease the disk size.

    $ kubectl patch pvc datadir-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
    
    persistentvolumeclaim/datadir-cockroachdb-0 patched
    
  4. Check the capacity of the persistent volume claim:

    $ kubectl get pvc datadir-cockroachdb-0
    
    NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    datadir-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       18m
    

    If the PVC capacity has not changed, this may be because AllowVolumeExpansion was initially set to false or because the volume has a file system that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity.

    Tip:

    Running kubectl get pv will display the persistent volumes with their requested capacity and not their actual capacity. This can be misleading, so it's best to use kubectl get pvc.

  5. Examine the persistent volume claim. If the volume has a file system, you will see a FileSystemResizePending condition with an accompanying message:

    $ kubectl describe pvc datadir-cockroachdb-0
    
    Waiting for user to (re-)start a pod to finish file system resize of volume on node.
    
  6. Delete the corresponding pod to restart it:

    $ kubectl delete pod cockroachdb-0
    

    The FileSystemResizePending condition and message will be removed.

  7. View the updated persistent volume claim:

    $ kubectl get pvc datadir-cockroachdb-0
    
    NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    datadir-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   200Gi      RWO            standard       20m
    
  8. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 through 6 to increase the capacities of the remaining volumes by the same amount, as sketched below.
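To repeat steps 3 through 6 for the remaining volumes, a minimal shell loop over the PVC names above might look like this (a sketch; adjust the names and size for your cluster, and restart each corresponding pod afterward):

$ for i in 1 2; do kubectl patch pvc datadir-cockroachdb-$i -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'; done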

You can expand certain types of persistent volumes (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims.

Note:

These steps assume you followed the tutorial Deploy CockroachDB on Kubernetes.

  1. Get the persistent volume claims for the volumes:

    $ kubectl get pvc
    
    NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    datadir-my-release-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-my-release-cockroachdb-1   Bound    pvc-75e143ca-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-my-release-cockroachdb-2   Bound    pvc-75ef409a-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    
  2. In order to expand a persistent volume claim, AllowVolumeExpansion in its storage class must be true. Examine the storage class:

    $ kubectl describe storageclass standard
    
    Name:                  standard
    IsDefaultClass:        Yes
    Annotations:           storageclass.kubernetes.io/is-default-class=true
    Provisioner:           kubernetes.io/gce-pd
    Parameters:            type=pd-standard
    AllowVolumeExpansion:  False
    MountOptions:          <none>
    ReclaimPolicy:         Delete
    VolumeBindingMode:     Immediate
    Events:                <none>
    

    If necessary, edit the storage class:

    $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
    
    storageclass.storage.k8s.io/standard patched
    
  3. Edit one of the persistent volume claims to request more space:

    Note:

    The requested storage value must be larger than the previous value. You cannot use this method to decrease the disk size.

    $ kubectl patch pvc datadir-my-release-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
    
    persistentvolumeclaim/datadir-my-release-cockroachdb-0 patched
    
  4. Check the capacity of the persistent volume claim:

    $ kubectl get pvc datadir-my-release-cockroachdb-0
    
    NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    datadir-my-release-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       18m
    

    If the PVC capacity has not changed, this may be because AllowVolumeExpansion was initially set to false or because the volume has a file system that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity.

    Tip:

    Running kubectl get pv will display the persistent volumes with their requested capacity and not their actual capacity. This can be misleading, so it's best to use kubectl get pvc.

  5. Examine the persistent volume claim. If the volume has a file system, you will see a FileSystemResizePending condition with an accompanying message:

    $ kubectl describe pvc datadir-my-release-cockroachdb-0
    
    Waiting for user to (re-)start a pod to finish file system resize of volume on node.
    
  6. Delete the corresponding pod to restart it:

    $ kubectl delete pod my-release-cockroachdb-0
    

    The FileSystemResizePending condition and message will be removed.

  7. View the updated persistent volume claim:

    $ kubectl get pvc datadir-my-release-cockroachdb-0
    
    NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    datadir-my-release-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   200Gi      RWO            standard       20m
    
  8. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 through 6 to increase the capacities of the remaining volumes by the same amount.

Use a custom CA

By default, the Operator will generate and sign 1 client and 1 node certificate to secure the cluster.

To use your own certificate authority instead, add clientTLSSecret and nodeTLSSecret to the custom resource. These should specify the names of Kubernetes secrets that contain your generated certificates and keys. For details on creating Kubernetes secrets, see the Kubernetes documentation.

Note:

Currently, the Operator requires that the client and node secrets each contain the filenames tls.crt and tls.key. For an example of working with this, see Authenticating with cockroach cert.

spec:
  nodeTLSSecret: {node secret name}
  clientTLSSecret: {client secret name}

Then apply the new values to the cluster.

Example: Authenticating with cockroach cert

These steps demonstrate how certificates and keys generated by cockroach cert can be used by the Operator.

  1. Create two directories:

    $ mkdir certs my-safe-directory
    
    Directory           Description
    certs               You'll generate your CA certificate and all node and client certificates and keys in this directory.
    my-safe-directory   You'll generate your CA key in this directory and then reference the key when generating node and client certificates.
  2. Create the CA certificate and key pair:

    $ cockroach cert create-ca \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  3. Create a client certificate and key pair for the root user:

    $ cockroach cert create-client \
    root \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  4. Upload the client certificate and key to the Kubernetes cluster as a secret, renaming them to the filenames required by the Operator:

    $ kubectl create secret \
    generic cockroachdb.client.root \
    --from-file=tls.key=certs/client.root.key \
    --from-file=tls.crt=certs/client.root.crt \
    --from-file=ca.crt=certs/ca.crt
    
    secret/cockroachdb.client.root created
    
  5. Create the certificate and key pair for your CockroachDB nodes:

    $ cockroach cert create-node \
    localhost 127.0.0.1 \
    cockroachdb-public \
    cockroachdb-public.default \
    cockroachdb-public.default.svc.cluster.local \
    *.cockroachdb \
    *.cockroachdb.default \
    *.cockroachdb.default.svc.cluster.local \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  6. Upload the node certificate and key to the Kubernetes cluster as a secret, renaming them to the filenames required by the Operator:

    $ kubectl create secret \
    generic cockroachdb.node \
    --from-file=tls.key=certs/node.key \
    --from-file=tls.crt=certs/node.crt \
    --from-file=ca.crt=certs/ca.crt
    
    secret/cockroachdb.node created
    
  7. Check that the secrets were created on the cluster:

    $ kubectl get secrets
    
    NAME                      TYPE                                  DATA   AGE
    cockroachdb.client.root   Opaque                                3      13s
    cockroachdb.node          Opaque                                3      3s
    default-token-6js7b       kubernetes.io/service-account-token   3      9h
    
  8. Add nodeTLSSecret and clientTLSSecret to the custom resource, specifying the generated secret names:

    spec:
      clientTLSSecret: cockroachdb.client.root
      nodeTLSSecret: cockroachdb.node
    

    Then apply the new values to the cluster.

Rotate security certificates

You may need to rotate the node, client, or CA certificates in the following scenarios:

  • The node, client, or CA certificates are expiring soon.
  • Your organization's compliance policy requires periodic certificate rotation.
  • The key (for a node, client, or CA) is compromised.
  • You need to modify the contents of a certificate, for example, to add another DNS name or the IP address of a load balancer through which a node can be reached. In this case, you would need to rotate only the node certificates.

Example: Rotating certificates signed with cockroach cert

If you previously authenticated with cockroach cert, follow these steps to rotate the certificates using the same CA:

  1. Create a new client certificate and key pair for the root user, overwriting the previous certificate and key:

    $ cockroach cert create-client \
    root \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key \
    --overwrite
    
  2. Delete the existing client secret:

    $ kubectl delete secret cockroachdb.client.root
    
    secret "cockroachdb.client.root" deleted
    
  3. Upload the new client certificate and key to the Kubernetes cluster as a secret, renaming them to the filenames required by the Operator:

    $ kubectl create secret \
    generic cockroachdb.client.root \
    --from-file=tls.key=certs/client.root.key \
    --from-file=tls.crt=certs/client.root.crt \
    --from-file=ca.crt=certs/ca.crt
    
    secret/cockroachdb.client.root created
    
  4. Create a new certificate and key pair for your CockroachDB nodes, overwriting the previous certificate and key:

    $ cockroach cert create-node \
    localhost 127.0.0.1 \
    cockroachdb-public \
    cockroachdb-public.default \
    cockroachdb-public.default.svc.cluster.local \
    *.cockroachdb \
    *.cockroachdb.default \
    *.cockroachdb.default.svc.cluster.local \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key \
    --overwrite
    
  5. Delete the existing node secret:

    $ kubectl delete secret cockroachdb.node
    
    secret "cockroachdb.node" deleted
    
  6. Upload the new node certificate and key to the Kubernetes cluster as a secret, renaming them to the filenames required by the Operator:

    $ kubectl create secret \
    generic cockroachdb.node \
    --from-file=tls.key=certs/node.key \
    --from-file=tls.crt=certs/node.crt \
    --from-file=ca.crt=certs/ca.crt
    
    secret/cockroachdb.node created
    
  7. Check that the secrets were created on the cluster:

    $ kubectl get secrets
    
    NAME                      TYPE                                  DATA   AGE
    cockroachdb.client.root   Opaque                                3      4s
    cockroachdb.node          Opaque                                3      1s
    default-token-6js7b       kubernetes.io/service-account-token   3      9h
    
    Note:

    Remember that nodeTLSSecret and clientTLSSecret in the custom resource must specify these secret names. For details, see Use a custom CA.

  8. Trigger a rolling restart of the pods by annotating the cluster (named cockroachdb in our example):

    $ kubectl annotate crdbcluster cockroachdb crdb.io/restarttype='rolling'
    
    Tip:

    If you used a different CA to sign the new certificates, trigger a full restart of the cluster instead: kubectl annotate crdbcluster cockroachdb crdb.io/restarttype='fullcluster'.

    Note: A full restart will cause a temporary database outage.

    crdbcluster.crdb.cockroachlabs.com/cockroachdb annotated
    

    The pods will terminate and restart one at a time, using the new certificates.

  9. You can observe this process:

    $ kubectl get pods
    
    NAME                                  READY   STATUS        RESTARTS   AGE
    cockroach-operator-655fbf7847-lvz6x   1/1     Running       0          4h29m
    cockroachdb-0                         1/1     Running       0          4h16m
    cockroachdb-1                         1/1     Terminating   0          4h16m
    cockroachdb-2                         1/1     Running       0          43s
    

Secure the webhooks

The Operator ships with both mutating and validating webhooks. Communication between the Kubernetes API server and the webhook service must be secured with TLS.

By default, the Operator searches for the TLS secret cockroach-operator-webhook-ca, which contains a CA certificate. If the secret is not found, the Operator auto-generates cockroach-operator-webhook-ca with a CA certificate for future runs.

The Operator then generates a one-time server certificate for the webhook server that is signed with cockroach-operator-webhook-ca. Finally, the CA bundle for both mutating and validating webhook configurations is patched with the CA certificate.
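The webhook configurations that receive this patched CA bundle can be listed with standard kubectl (their exact names vary by Operator version):

$ kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations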

You can also use your own certificate authority rather than cockroach-operator-webhook-ca. Both the certificate and key files you generate must be PEM-encoded. See the following example.

Example: Using OpenSSL to secure the webhooks

These steps demonstrate how to use the openssl genrsa and openssl req subcommands to secure the webhooks on a running Kubernetes cluster:

  1. Generate a 4096-bit RSA private key:

    $ openssl genrsa -out tls.key 4096
    
  2. Generate an X.509 certificate, valid for 10 years. You will be prompted for the certificate field values.

    $ openssl req -x509 -new -nodes -key tls.key -sha256 -days 3650 -out tls.crt
    
  3. Create the secret, making sure that you are in the correct namespace:

    $ kubectl create secret tls cockroach-operator-webhook-ca --cert=tls.crt --key=tls.key
    
    secret/cockroach-operator-webhook-ca created
    
  4. Remove the certificate and key from your local environment:

    $ rm tls.crt tls.key
    
  5. Roll the Operator deployment to ensure a new server certificate is generated:

    $ kubectl rollout restart deploy/cockroach-operator-manager
    
    deployment.apps/cockroach-operator-manager restarted
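To confirm the secret is in place and inspect the certificate it holds, one option (standard kubectl and openssl, not part of the original steps) is:

$ kubectl get secret cockroach-operator-webhook-ca -o jsonpath='{.data.tls\.crt}' | base64 --decode | openssl x509 -noout -subject -enddate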
    

Use a custom CA

By default on secure deployments, the Helm chart will generate and sign 1 client and 1 node certificate to secure the cluster.

Warning:

If you are running a secure Helm deployment on Kubernetes 1.22 and later, you must migrate away from using the Kubernetes CA for cluster authentication. For details, see Migration to self-signer.

To use your own certificate authority instead, specify the following in a custom values file:

tls:
  enabled: true
  certs:
    provided: true
    clientRootSecret: {client secret name}
    nodeSecret: {node secret name}

clientRootSecret and nodeSecret should specify the names of Kubernetes secrets that contain your generated certificates and keys:

  • clientRootSecret specifies the client secret name.
  • nodeSecret specifies the node secret name.

Apply the custom values to override the default Helm chart values:

$ helm upgrade {release-name} --values {custom-values}.yaml cockroachdb/cockroachdb

Example: Authenticating with cockroach cert

Note:

The steps below use cockroach cert commands to quickly generate and sign the CockroachDB node and client certificates. Read our Authentication docs to learn about other methods of signing certificates.

  1. Create two directories:

    $ mkdir certs my-safe-directory
    
    Directory           Description
    certs               You'll generate your CA certificate and all node and client certificates and keys in this directory.
    my-safe-directory   You'll generate your CA key in this directory and then reference the key when generating node and client certificates.
  2. Create the CA certificate and key pair:

    $ cockroach cert create-ca \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  3. Create a client certificate and key pair for the root user:

    $ cockroach cert create-client \
    root \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  4. Upload the client certificate and key to the Kubernetes cluster as a secret:

    $ kubectl create secret \
    generic cockroachdb.client.root \
    --from-file=certs
    
    secret/cockroachdb.client.root created
    
  5. Create the certificate and key pair for your CockroachDB nodes:

    $ cockroach cert create-node \
    localhost 127.0.0.1 \
    my-release-cockroachdb-public \
    my-release-cockroachdb-public.default \
    my-release-cockroachdb-public.default.svc.cluster.local \
    *.my-release-cockroachdb \
    *.my-release-cockroachdb.default \
    *.my-release-cockroachdb.default.svc.cluster.local \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
    Note:

    This example assumes that you followed our deployment example, which uses my-release as the release name. If you used a different value, be sure to adjust the release name in this command.

  6. Upload the node certificate and key to the Kubernetes cluster as a secret:

    $ kubectl create secret \
    generic cockroachdb.node \
    --from-file=certs
    
    secret/cockroachdb.node created
    
  7. Check that the secrets were created on the cluster:

    $ kubectl get secrets
    
    NAME                      TYPE                                  DATA   AGE
    cockroachdb.client.root   Opaque                                3      41m
    cockroachdb.node          Opaque                                5      14s
    default-token-6qjdb       kubernetes.io/service-account-token   3      4m
    
  8. Specify the following in a custom values file, using the generated secret names:

    tls:
      enabled: true
      certs:
        provided: true
        clientRootSecret: cockroachdb.client.root
        nodeSecret: cockroachdb.node
    
  9. Apply the custom values to override the default Helm chart values:

    $ helm upgrade {release-name} --values {custom-values}.yaml cockroachdb/cockroachdb
    

Rotate security certificates

You may need to rotate the node, client, or CA certificates in the following scenarios:

  • The node, client, or CA certificates are expiring soon.
  • Your organization's compliance policy requires periodic certificate rotation.
  • The key (for a node, client, or CA) is compromised.
  • You need to modify the contents of a certificate, for example, to add another DNS name or the IP address of a load balancer through which a node can be reached. In this case, you would need to rotate only the node certificates.

Example: Rotating certificates signed with cockroach cert

The Helm chart includes values to configure a Kubernetes cron job that regularly rotates certificates before they expire.
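Because the rotation schedule is implemented as Kubernetes cron jobs, you can list the jobs the chart creates with standard kubectl (job names depend on your release name):

$ kubectl get cronjobs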

If you previously authenticated with cockroach cert, follow these steps to ensure the certificates are rotated:

  1. Upload the CA certificate that you previously created to the Kubernetes cluster as a secret:

    $ kubectl create secret \
    generic cockroachdb.ca \
    --from-file=certs/ca.crt
    
    secret/cockroachdb.ca created
    
  2. Specify the following in a custom values file, using the generated secret name:

    selfSigner:
      enabled: true
      caProvided: true
      caSecret: cockroachdb.ca
      rotateCerts: true
    
    Note:

    selfSigner.enabled and selfSigner.rotateCerts are true by default in the Helm chart values.

  3. Customize the following selfSigner fields to set the frequency of certificate rotation. These should correspond to the durations of the CA, client, and node certificates.

    selfSigner:
      minimumCertDuration: 624h
      caCertDuration: 43800h
      caCertExpiryWindow: 648h
      clientCertDuration: 672h
      clientCertExpiryWindow: 48h
      nodeCertDuration: 8760h
      nodeCertExpiryWindow: 168h
    
    • caCertDuration, clientCertDuration, and nodeCertDuration specify the duration in hours of the CA, client, and node certificates, respectively.
    • caCertExpiryWindow, clientCertExpiryWindow, and nodeCertExpiryWindow specify the timeframe in hours during which the CA, client, and node certificates, respectively, should be rotated before they expire.
    • minimumCertDuration specifies the minimum duration in hours for all certificates. This is to ensure that the client and node certificates are rotated within the duration of the CA certificate. This value must be less than:
      • caCertExpiryWindow
      • The difference of clientCertDuration and clientCertExpiryWindow
      • The difference of nodeCertDuration and nodeCertExpiryWindow
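    With the example values above, these bounds work out to: caCertExpiryWindow is 648h; clientCertDuration minus clientCertExpiryWindow is 672h - 48h = 624h; and nodeCertDuration minus nodeCertExpiryWindow is 8760h - 168h = 8592h. The example's minimumCertDuration: 624h therefore sits exactly at the limit implied by the client certificate settings.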

    Certificate duration is configured when running cockroach cert. You can check the expiration dates of the cockroach cert certificates by running:

    $ cockroach cert list --certs-dir=certs
    
    Certificate directory: certs
      Usage  | Certificate File |    Key File     |  Expires   |                                                                                                                                  Notes                                                                                                                                  | Error
    ---------+------------------+-----------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------
      CA     | ca.crt           |                 | 2031/09/07 | num certs: 1                                                                                                                                                                                                                                                            |
      Node   | node.crt         | node.key        | 2026/09/03 | addresses: localhost,my-release-cockroachdb-public,my-release-cockroachdb-public.default,my-release-cockroachdb-public.default.svc.cluster.local,*.my-release-cockroachdb,*.my-release-cockroachdb.default,*.my-release-cockroachdb.default.svc.cluster.local,127.0.0.1 |
      Client | client.root.crt  | client.root.key | 2026/09/03 | user: root                                                                                                                                                                                                                                                              |
    
  4. Apply the custom values to override the default Helm chart values:

    $ helm upgrade {release-name} --values {custom-values}.yaml cockroachdb/cockroachdb
    

    The certificates will be rotated during the specified expiry windows.

Migration to self-signer

Previous versions of the Helm chart used the Kubernetes CA to sign certificates. However, the Kubernetes CA is deprecated in Kubernetes 1.22 and later. The Helm chart now uses a self-signer for cluster authentication.

To migrate your Helm deployment to use the self-signer:

  1. Set the cluster's upgrade strategy to OnDelete, which specifies that only pods deleted by the user will be upgraded.

    $ helm upgrade {release-name} cockroachdb/cockroachdb --set statefulset.updateStrategy.type="OnDelete"
    
  2. Confirm that the init pod has been created:

    $ kubectl get pods
    
    NAME                                READY     STATUS     RESTARTS   AGE
    my-release-cockroachdb-0            1/1       Running    0          6m
    my-release-cockroachdb-1            1/1       Running    0          6m
    my-release-cockroachdb-2            1/1       Running    0          6m
    my-release-cockroachdb-init-59ndf   0/1       Completed  0          8s
    
  3. Delete the cluster pods to start the upgrade process.

    $ kubectl delete pods -l app.kubernetes.io/component=cockroachdb
    
    pod "my-release-cockroachdb-0" deleted
    pod "my-release-cockroachdb-1" deleted
    pod "my-release-cockroachdb-2" deleted
    

    All pods will be restarted with new certificates generated by the self-signer. Note that this is not a rolling upgrade, so the cluster will experience some downtime. You can monitor this process:

    $ kubectl get pods
    
    NAME                       READY   STATUS              RESTARTS   AGE
    my-release-cockroachdb-0   0/1     ContainerCreating   0          14s
    my-release-cockroachdb-1   0/1     ContainerCreating   0          4s
    my-release-cockroachdb-2   0/1     Terminating         0          7m16s
    

Configure ports

The Operator separates traffic into three ports:

Protocol   Default   Description                     Custom Resource Field
gRPC       26258     Used for node connections       grpcPort
HTTP       8080      Used to access the DB Console   httpPort
SQL        26257     Used for SQL shell access       sqlPort

Specify alternate port numbers in the custom resource (for example, to match the default port 5432 on PostgreSQL):

spec:
  sqlPort: 5432

Then apply the new values to the cluster. The Operator updates the StatefulSet and triggers a rolling restart of the pods with the new port settings.

Warning:

Currently, only the pods are updated with new ports. To connect to the cluster, you need to ensure that the public service is also updated to use the new port. You can do this by deleting the service with kubectl delete service {cluster-name}-public. When the service is recreated by the Operator, it will use the new port. This is a known limitation that will be fixed in an Operator update.
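For example, with the default cluster name used throughout this page, the workaround looks like this (the Operator then recreates the service with the new port):

$ kubectl delete service cockroachdb-public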

Scale the cluster

Add nodes

Before scaling up CockroachDB, note the following topology recommendations:

  • Each CockroachDB node (running in its own pod) should run on a separate Kubernetes worker node.
  • Each availability zone should have the same number of CockroachDB nodes.

If your cluster has 3 CockroachDB nodes distributed across 3 availability zones (as in our deployment example), we recommend scaling up by a multiple of 3 to retain an even distribution of nodes. You should therefore scale up to a minimum of 6 CockroachDB nodes, with 2 nodes in each zone.

  1. Run kubectl get nodes to list the worker nodes in your Kubernetes cluster. There should be at least as many worker nodes as pods you plan to add. This ensures that no more than one pod will be placed on each worker node.

  2. If you need to add worker nodes, resize your GKE cluster by specifying the desired number of worker nodes in each zone:

    $ gcloud container clusters resize {cluster-name} --region {region-name} --num-nodes 2
    

    This example distributes 2 worker nodes across the default 3 zones, raising the total to 6 worker nodes.

    1. If you are adding nodes after previously scaling down, and have not enabled automatic PVC pruning, you must first manually delete any persistent volumes that were orphaned by node removal.

      Note:

      Due to a known issue, automatic pruning of PVCs is currently disabled by default. This means that after decommissioning and removing a node, the Operator will not remove the persistent volume that was mounted to its pod.

      View the PVCs on the cluster:

      $ kubectl get pvc
      
      NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      datadir-cockroachdb-0   Bound    pvc-f1ce6ed2-ceda-40d2-8149-9e5b59faa9df   60Gi       RWO            standard       24m
      datadir-cockroachdb-1   Bound    pvc-308da33c-ec77-46c7-bcdf-c6e610ad4fea   60Gi       RWO            standard       24m
      datadir-cockroachdb-2   Bound    pvc-6816123f-29a9-4b86-a4e2-b67f7bb1a52c   60Gi       RWO            standard       24m
      datadir-cockroachdb-3   Bound    pvc-63ce836a-1258-4c58-8b37-d966ed12d50a   60Gi       RWO            standard       24m
      datadir-cockroachdb-4   Bound    pvc-1ylabv86-6512-6n12-bw3g-i0dh2zxvfhd0   60Gi       RWO            standard       24m
      datadir-cockroachdb-5   Bound    pvc-2vka2c9x-7824-41m5-jk45-mt7dzq90q97x   60Gi       RWO            standard       24m
      
    2. The PVC names correspond to the pods they are bound to. For example, if the pods cockroachdb-3, cockroachdb-4, and cockroachdb-5 had been removed by scaling the cluster down from 6 to 3 nodes, datadir-cockroachdb-3, datadir-cockroachdb-4, and datadir-cockroachdb-5 would be the PVCs for the orphaned persistent volumes. To verify that a PVC is not currently bound to a pod:

      $ kubectl describe pvc datadir-cockroachdb-5
      

      The output will include the following line:

      Mounted By:    <none>
      

      If the PVC is bound to a pod, it will specify the pod name.

    3. Remove the orphaned persistent volumes by deleting their PVCs:

      Warning:

      Before deleting any persistent volumes, be sure you have a backup copy of your data. Data cannot be recovered once the persistent volumes are deleted. For more information, see the Kubernetes documentation.

      $ kubectl delete pvc datadir-cockroachdb-3 datadir-cockroachdb-4 datadir-cockroachdb-5
      
      persistentvolumeclaim "datadir-cockroachdb-3" deleted
      persistentvolumeclaim "datadir-cockroachdb-4" deleted
      persistentvolumeclaim "datadir-cockroachdb-5" deleted
      
  3. Update nodes in the custom resource with the target size of the CockroachDB cluster. This value refers to the number of CockroachDB nodes, each running in one pod:

    nodes: 6
    
    Note:

    You must scale by updating the nodes value in the custom resource. Using kubectl scale statefulset {cluster-name} --replicas=4 instead will result in the new pods being terminated immediately.

  4. Apply the new value.

  5. Verify that the new pods were successfully started:

    $ kubectl get pods
    
    NAME                                  READY   STATUS    RESTARTS   AGE
    cockroach-operator-655fbf7847-zn9v8   1/1     Running   0          30m
    cockroachdb-0                         1/1     Running   0          24m
    cockroachdb-1                         1/1     Running   0          24m
    cockroachdb-2                         1/1     Running   0          24m
    cockroachdb-3                         1/1     Running   0          30s
    cockroachdb-4                         1/1     Running   0          30s
    cockroachdb-5                         1/1     Running   0          30s
    

    Each pod should be running in one of the 6 worker nodes.
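    To confirm the pod-to-node assignments, you can add the standard -o wide flag, which includes a NODE column for each pod:

    $ kubectl get pods -o wide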

Before scaling up CockroachDB, note the following topology recommendations:

  • Each CockroachDB node (running in its own pod) should run on a separate Kubernetes worker node.
  • Each availability zone should have the same number of CockroachDB nodes.

If your cluster has 3 CockroachDB nodes distributed across 3 availability zones (as in our deployment example), we recommend scaling up by a multiple of 3 to retain an even distribution of nodes. You should therefore scale up to a minimum of 6 CockroachDB nodes, with 2 nodes in each zone.

  1. Run kubectl get nodes to list the worker nodes in your Kubernetes cluster. There should be at least as many worker nodes as pods you plan to add. This ensures that no more than one pod will be placed on each worker node.

  2. If you need to add worker nodes, resize your Kubernetes cluster (for example, with gcloud container clusters resize on GKE, as shown earlier on this page).

  3. Edit your StatefulSet configuration to add pods for each new CockroachDB node:

    $ kubectl scale statefulset cockroachdb --replicas=6
    
    statefulset.apps/cockroachdb scaled
    
  4. Verify that the new pods started successfully:

    $ kubectl get pods
    
    NAME                        READY     STATUS    RESTARTS   AGE
    cockroachdb-0               1/1       Running   0          51m
    cockroachdb-1               1/1       Running   0          47m
    cockroachdb-2               1/1       Running   0          3m
    cockroachdb-3               1/1       Running   0          1m
    cockroachdb-4               1/1       Running   0          1m
    cockroachdb-5               1/1       Running   0          1m
    cockroachdb-client-secure   1/1       Running   0          15m
    ...
    
  5. You can also open the Node List in the DB Console to ensure that the new nodes successfully joined the cluster.

Before scaling CockroachDB, ensure that your Kubernetes cluster has enough worker nodes to host the number of pods you want to add. This is to ensure that two pods are not placed on the same worker node, as recommended in our production guidance.

For example, if you want to scale from 3 CockroachDB nodes to 4, your Kubernetes cluster should have at least 4 worker nodes. You can verify the size of your Kubernetes cluster by running kubectl get nodes.

  1. Edit your StatefulSet configuration to add another pod for the new CockroachDB node:

    $ helm upgrade \
    my-release \
    cockroachdb/cockroachdb \
    --set statefulset.replicas=4 \
    --reuse-values
    
    Release "my-release" has been upgraded. Happy Helming!
    LAST DEPLOYED: Tue May 14 14:06:43 2019
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1beta1/PodDisruptionBudget
    NAME                           AGE
    my-release-cockroachdb-budget  51m
    
    ==> v1/Pod(related)
    
    NAME                               READY  STATUS     RESTARTS  AGE
    my-release-cockroachdb-0           1/1    Running    0         38m
    my-release-cockroachdb-1           1/1    Running    0         39m
    my-release-cockroachdb-2           1/1    Running    0         39m
    my-release-cockroachdb-3           0/1    Pending    0         0s
    my-release-cockroachdb-init-nwjkh  0/1    Completed  0         39m
    
    ...
    
  2. Get the name of the Pending CSR for the new pod:

    $ kubectl get csr
    
    NAME                                                   AGE       REQUESTOR                               CONDITION
    default.client.root                                    1h        system:serviceaccount:default:default   Approved,Issued
    default.node.my-release-cockroachdb-0                  1h        system:serviceaccount:default:default   Approved,Issued
    default.node.my-release-cockroachdb-1                  1h        system:serviceaccount:default:default   Approved,Issued
    default.node.my-release-cockroachdb-2                  1h        system:serviceaccount:default:default   Approved,Issued
    default.node.my-release-cockroachdb-3                  2m        system:serviceaccount:default:default   Pending
    node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4   1h        kubelet                                 Approved,Issued
    node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY   1h        kubelet                                 Approved,Issued
    node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o   1h        kubelet                                 Approved,Issued
    ...
    

    If you do not see a Pending CSR, wait a minute and try again.

  3. Examine the CSR for the new pod:

    $ kubectl describe csr default.node.my-release-cockroachdb-3
    
    Name:               default.node.my-release-cockroachdb-3
    Labels:             <none>
    Annotations:        <none>
    CreationTimestamp:  Thu, 09 Nov 2017 13:39:37 -0500
    Requesting User:    system:serviceaccount:default:default
    Status:             Pending
    Subject:
      Common Name:    node
      Serial Number:
      Organization:   Cockroach
    Subject Alternative Names:
             DNS Names:     localhost
                            my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local
                            my-release-cockroachdb-1.my-release-cockroachdb
                            my-release-cockroachdb-public
                            my-release-cockroachdb-public.default.svc.cluster.local
             IP Addresses:  127.0.0.1
                            10.48.1.6
    Events:  <none>
    
  4. If everything looks correct, approve the CSR for the new pod:

    $ kubectl certificate approve default.node.my-release-cockroachdb-3
    
    certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-3 approved
    
  5. Verify that the new pod started successfully:

    $ kubectl get pods
    
    NAME                        READY     STATUS    RESTARTS   AGE
    my-release-cockroachdb-0    1/1       Running   0          51m
    my-release-cockroachdb-1    1/1       Running   0          47m
    my-release-cockroachdb-2    1/1       Running   0          3m
    my-release-cockroachdb-3    1/1       Running   0          1m
    cockroachdb-client-secure   1/1       Running   0          15m
    ...
    
  6. You can also open the Node List in the DB Console to ensure that the fourth node successfully joined the cluster.

Remove nodes

Warning:

Do not scale down to fewer than 3 nodes. This is considered an anti-pattern on CockroachDB and will cause errors.

Warning:

Due to a known issue, automatic pruning of PVCs is currently disabled by default. This means that after decommissioning and removing a node, the Operator will not remove the persistent volume that was mounted to its pod.

If you plan to eventually scale up the cluster after scaling down, you will need to manually delete any PVCs that were orphaned by node removal before scaling up. For more information, see Add nodes.

Note:

If you want to enable the Operator to automatically prune PVCs when scaling down, see Automatic PVC pruning. However, note that this workflow is currently unsupported.

Before scaling down CockroachDB, note the following topology recommendation:

  • Each availability zone should have the same number of CockroachDB nodes.

If your nodes are distributed across 3 availability zones (as in our deployment example), we recommend scaling down by a multiple of 3 to retain an even distribution. If your cluster has 6 CockroachDB nodes, you should therefore scale down to 3, with 1 node in each zone.

  1. Update nodes in the custom resource with the target size of the CockroachDB cluster. For instance, to scale down to 3 nodes:

    nodes: 3
    
    Note:

    Before removing a node, the Operator first decommissions the node. This lets the node finish in-flight requests, reject any new requests, and transfer all range replicas and range leases off the node.

  2. Apply the new value.

    The Operator will remove nodes from the cluster one at a time, starting from the pod with the highest number in its address.

  3. Verify that the pods were successfully removed:

    $ kubectl get pods
    
    NAME                                  READY   STATUS    RESTARTS   AGE
    cockroach-operator-655fbf7847-zn9v8   1/1     Running   0          32m
    cockroachdb-0                         1/1     Running   0          26m
    cockroachdb-1                         1/1     Running   0          26m
    cockroachdb-2                         1/1     Running   0          26m
    

Automatic PVC pruning

To enable the Operator to automatically remove persistent volumes when scaling down a cluster, turn on automatic PVC pruning through a feature gate.

Warning:

This workflow is unsupported and should be enabled at your own risk.

  1. Download the Operator manifest:

    $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v2.14.0/install/operator.yaml
    
  2. Uncomment the following lines in the Operator manifest:

    - feature-gates
    - AutoPrunePVC=true
    
  3. Reapply the Operator manifest:

    $ kubectl apply -f operator.yaml
    
  4. Validate that the Operator is running:

    $ kubectl get pods
    
    NAME                                  READY   STATUS    RESTARTS   AGE
    cockroach-operator-6f7b86ffc4-9ppkv   1/1     Running   0          22s
    ...
    

Before removing a node from your cluster, you must first decommission the node. This lets the node finish in-flight requests, reject any new requests, and transfer all range replicas and range leases off the node.

Warning:

If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see Decommission Nodes.

  1. Use the cockroach node status command to get the internal IDs of nodes. For example, if you followed the steps in Deploy CockroachDB with Kubernetes to launch a secure client pod, get a shell into the cockroachdb-client-secure pod:

    $ kubectl exec -it cockroachdb-client-secure \
    -- ./cockroach node status \
    --certs-dir=/cockroach-certs \
    --host=cockroachdb-public
    
      id |                          address                           |  build   |            started_at            |            updated_at            | is_available | is_live
    +----+------------------------------------------------------------+----------+----------------------------------+----------------------------------+--------------+---------+
       1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | v21.1.21 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true         | true
       2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | v21.1.21 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true         | true
       3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | v21.1.21 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true         | true
       4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | v21.1.21 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true         | true
    (4 rows)
    

    The pod uses the root client certificate created earlier to initialize the cluster, so there's no CSR approval required.

  2. Use the cockroach node decommission command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID 4 because its address is cockroachdb-3):

    Note:

    You must decommission the node with the highest number in its address. Kubernetes will remove the pod for the node with the highest number in its address when you reduce the replica count.

    $ kubectl exec -it cockroachdb-client-secure \
    -- ./cockroach node decommission 4 \
    --certs-dir=/cockroach-certs \
    --host=cockroachdb-public
    

    You'll then see the decommissioning status print to stderr as it changes:

     id | is_live | replicas | is_decommissioning | is_draining  
    +---+---------+----------+--------------------+-------------+
      4 |  true   |       73 |        true        |    false     
    (1 row)
    

    Once the node has been fully decommissioned and stopped, you'll see a confirmation:

     id | is_live | replicas | is_decommissioning | is_draining  
    +---+---------+----------+--------------------+-------------+
      4 |  true   |        0 |        true        |    false     
    (1 row)
    
    No more data reported on target nodes. Please verify cluster health before removing the nodes.
    
  3. Once the node has been decommissioned, scale down your StatefulSet:

    $ kubectl scale statefulset cockroachdb --replicas=3
    
    statefulset.apps/cockroachdb scaled
    
  4. Verify that the pod was successfully removed:

    $ kubectl get pods
    
    NAME                        READY     STATUS    RESTARTS   AGE
    cockroachdb-0               1/1       Running   0          51m
    cockroachdb-1               1/1       Running   0          47m
    cockroachdb-2               1/1       Running   0          3m
    cockroachdb-client-secure   1/1       Running   0          15m
    ...
    
  5. You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes:

    icon/buttons/copy
    $ kubectl get pvc
    
    NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    datadir-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-cockroachdb-1   Bound    pvc-75e143ca-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-cockroachdb-2   Bound    pvc-75ef409a-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-cockroachdb-3   Bound    pvc-75e561ba-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    
  6. Verify that the PVC with the highest number in its name is no longer mounted to a pod:

    icon/buttons/copy
    $ kubectl describe pvc datadir-cockroachdb-3
    
    Name:          datadir-cockroachdb-3
    ...
    Mounted By:    <none>
    
  7. Remove the persistent volume by deleting the PVC:

    icon/buttons/copy
    $ kubectl delete pvc datadir-cockroachdb-3
    
    persistentvolumeclaim "datadir-cockroachdb-3" deleted
    
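The decommission-and-scale sequence above can also be run end to end. The following is a minimal sketch, assuming the same pod, certificate, and StatefulSet names used in the steps above; the node ID (4) and target replica count (3) are illustrative:

# Decommission node 4; the command blocks until all of its range
# replicas and leases have been transferred to other nodes.
kubectl exec -it cockroachdb-client-secure \
  -- ./cockroach node decommission 4 \
  --certs-dir=/cockroach-certs \
  --host=cockroachdb-public

# Double-check that the node reports zero replicas.
kubectl exec -it cockroachdb-client-secure \
  -- ./cockroach node status --decommission \
  --certs-dir=/cockroach-certs \
  --host=cockroachdb-public

# Shrink the StatefulSet so Kubernetes deletes the highest-numbered pod.
kubectl scale statefulset cockroachdb --replicas=3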

Before removing a node from your cluster, you must first decommission the node. Decommissioning lets the node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.

Warning:

If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see Decommission Nodes.

  1. Use the cockroach node status command to get the internal IDs of nodes. For example, if you followed the steps in Deploy CockroachDB with Kubernetes to launch a secure client pod, get a shell into the cockroachdb-client-secure pod:

    icon/buttons/copy
    $ kubectl exec -it cockroachdb-client-secure \
    -- ./cockroach node status \
    --certs-dir=/cockroach-certs \
    --host=my-release-cockroachdb-public
    
      id |                                     address                                     |  build   |            started_at            |            updated_at            | is_available | is_live
    +----+---------------------------------------------------------------------------------+----------+----------------------------------+----------------------------------+--------------+---------+
       1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | v21.1.21 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true         | true
       2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | v21.1.21 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true         | true
       3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | v21.1.21 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true         | true
       4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | v21.1.21 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true         | true
    (4 rows)
    

    The pod uses the root client certificate created earlier to initialize the cluster, so there's no CSR approval required.

  2. Use the cockroach node decommission command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID 4 because its address is my-release-cockroachdb-3):

    Note:

    You must decommission the node with the highest number in its address, because Kubernetes removes the pod with the highest ordinal when you reduce the replica count.

    icon/buttons/copy
    $ kubectl exec -it cockroachdb-client-secure \
    -- ./cockroach node decommission 4 \
    --certs-dir=/cockroach-certs \
    --host=my-release-cockroachdb-public
    

    You'll then see the decommissioning status print to stderr as it changes:

     id | is_live | replicas | is_decommissioning | is_draining  
    +---+---------+----------+--------------------+-------------+
      4 |  true   |       73 |        true        |    false     
    (1 row)
    

    Once the node has been fully decommissioned and stopped, you'll see a confirmation:

     id | is_live | replicas | is_decommissioning | is_draining  
    +---+---------+----------+--------------------+-------------+
      4 |  true   |        0 |        true        |    false     
    (1 row)
    
    No more data reported on target nodes. Please verify cluster health before removing the nodes.
    
  3. Once the node has been decommissioned, scale down your StatefulSet:

    icon/buttons/copy
    $ helm upgrade \
    my-release \
    cockroachdb/cockroachdb \
    --set statefulset.replicas=3 \
    --reuse-values
    
  4. Verify that the pod was successfully removed:

    icon/buttons/copy
    $ kubectl get pods
    
    NAME                        READY     STATUS    RESTARTS   AGE
    my-release-cockroachdb-0    1/1       Running   0          51m
    my-release-cockroachdb-1    1/1       Running   0          47m
    my-release-cockroachdb-2    1/1       Running   0          3m
    cockroachdb-client-secure   1/1       Running   0          15m
    ...
    
  5. You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes:

    icon/buttons/copy
    $ kubectl get pvc
    
    NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    datadir-my-release-cockroachdb-0   Bound    pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-my-release-cockroachdb-1   Bound    pvc-75e143ca-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-my-release-cockroachdb-2   Bound    pvc-75ef409a-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    datadir-my-release-cockroachdb-3   Bound    pvc-75e561ba-01a1-11ea-b065-42010a8e00cb   100Gi      RWO            standard       17m
    
  6. Verify that the PVC with the highest number in its name is no longer mounted to a pod:

    icon/buttons/copy
    $ kubectl describe pvc datadir-my-release-cockroachdb-3
    
    Name:          datadir-my-release-cockroachdb-3
    ...
    Mounted By:    <none>
    
  7. Remove the persistent volume by deleting the PVC (a guarded version of this cleanup is sketched after these steps):

    icon/buttons/copy
    $ kubectl delete pvc datadir-my-release-cockroachdb-3
    
    persistentvolumeclaim "datadir-my-release-cockroachdb-3" deleted
    
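If you script this cleanup, it is safer to gate the deletion on the claim actually being unmounted. The following is a minimal sketch, assuming the Helm release name my-release used above; the PVC name is illustrative:

# Delete the orphaned claim only if no pod still mounts it.
pvc="datadir-my-release-cockroachdb-3"
if kubectl describe pvc "$pvc" | grep -q 'Mounted By:[[:space:]]*<none>'; then
  kubectl delete pvc "$pvc"
else
  echo "$pvc is still mounted; not deleting" >&2
fi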

Upgrade the cluster

We strongly recommend that you regularly upgrade your CockroachDB version in order to pick up bug fixes, performance improvements, and new features.

The upgrade process on Kubernetes is a staged update in which the Docker image is applied to the pods one at a time, with each pod being stopped and restarted in turn. This is to ensure that the cluster remains available during the upgrade.

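For example, you can follow the staged update from a separate terminal; each pod should terminate and return to Running before the next one restarts:

$ kubectl get pods --watch
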
  1. Verify that you can upgrade.

    To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production release and not a testing release (alpha/beta).

    Therefore, in order to upgrade to v21.1, you must be on a production release of v20.2.

    1. If you are upgrading to v21.1 from a production release earlier than v20.2, or from a testing release (alpha/beta), first upgrade to a production release of v20.2. Be sure to complete all the steps.
    2. Then return to this page and perform a second upgrade to v21.1.
    3. If you are upgrading from a production release of v20.2, or from any earlier v21.1 patch release, you do not have to go through intermediate releases; continue to step 2.
  2. Verify the overall health of your cluster using the DB Console. On the Overview:

    • Under Node Status, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or decommission them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually).
    • Under Replication Status, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to identify and resolve the cause of range under-replication and/or unavailability before beginning your upgrade.
    • In the Node List:
      • Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over.
      • Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to Metrics > Dashboard: Hardware and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider adding nodes to your cluster before beginning your upgrade.
  3. Review the backward-incompatible changes in v21.1 and deprecated features. If any affect your deployment, make the necessary changes before starting the rolling upgrade to v21.1.

  4. Change the desired Docker image in the custom resource:

    image:
      name: cockroachdb/cockroach:v21.1.21
    
  5. Apply the new value. The Operator will perform the staged update. (A patch-based alternative to steps 4-5 is sketched after these steps.)

    Note:

    The Operator automatically sets the cluster.preserve_downgrade_option cluster setting to the version you are upgrading from. This disables auto-finalization of the upgrade so that you can monitor the stability and performance of the upgraded cluster before manually finalizing the upgrade. Finalizing the upgrade enables certain features and performance improvements introduced in v21.1.

    Note that after finalization, it will no longer be possible to perform a downgrade to v20.2. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from a backup created prior to performing the upgrade.

    Finalization only applies when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) can always be downgraded.

  6. To check the status of the rolling upgrade, run kubectl get pods. The pods are restarted one at a time with the new image.

  7. Verify that all pods have been upgraded by running:

    icon/buttons/copy
    $ kubectl get pods \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
    

    You can also check the CockroachDB version of each node in the DB Console.

  8. Monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day).

    If you decide to roll back the upgrade, revert the image name in the custom resource and apply the new value.

    Note:

    This is only possible when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) are auto-finalized.

    To finalize the upgrade, re-enable auto-finalization:

    1. Start the CockroachDB built-in SQL client. For example, if you followed the steps in Deploy CockroachDB with Kubernetes to launch a secure client pod, get a shell into the cockroachdb-client-secure pod:

      icon/buttons/copy
      $ kubectl exec -it cockroachdb-client-secure \
      -- ./cockroach sql \
      --certs-dir=/cockroach/cockroach-certs \
      --host={cluster-name}-public
      
    2. Re-enable auto-finalization:

      icon/buttons/copy
      > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
      
    3. Exit the SQL shell and pod:

      icon/buttons/copy
      > \q
      
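Alternatively, steps 4-5 can be combined by patching the custom resource in place, rather than editing and re-applying the manifest. This is a minimal sketch, assuming the CrdbCluster object is named cockroachdb (as in example.yaml):

# Update spec.image.name in place; the Operator notices the change and
# performs the same staged update described above.
kubectl patch crdbcluster cockroachdb \
  --type=merge \
  -p '{"spec":{"image":{"name":"cockroachdb/cockroach:v21.1.21"}}}'
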
  1. Verify that you can upgrade.

    To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production release and not a testing release (alpha/beta).

    Therefore, in order to upgrade to v21.1, you must be on a production release of v20.2.

    1. If you are upgrading to v21.1 from a production release earlier than v20.2, or from a testing release (alpha/beta), first upgrade to a production release of v20.2. Be sure to complete all the steps.
    2. Then return to this page and perform a second upgrade to v21.1.
    3. If you are upgrading from any production release of v20.2, or from any earlier v21.1 release, you do not have to go through intermediate releases; continue to step 2.
  2. Verify the overall health of your cluster using the DB Console. On the Overview:

    • Under Node Status, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or decommission them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually).
    • Under Replication Status, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to identify and resolve the cause of range under-replication and/or unavailability before beginning your upgrade.
    • In the Node List:
      • Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over.
      • Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to Metrics > Dashboard: Hardware and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider adding nodes to your cluster before beginning your upgrade.
  3. Review the backward-incompatible changes in v21.1 and deprecated features. If any affect your deployment, make the necessary changes before starting the rolling upgrade to v21.1.

  4. Decide how the upgrade will be finalized.

    By default, after all nodes are running the new version, the upgrade process will be auto-finalized. This will enable certain features and performance improvements introduced in v21.1. After finalization, however, it will no longer be possible to perform a downgrade to v20.2. In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a backup created prior to the upgrade. For this reason, we recommend disabling auto-finalization so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step.

    Note:

    Finalization only applies when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) can always be downgraded.

    1. Start the CockroachDB built-in SQL client. For example, if you followed the steps in Deploy CockroachDB with Kubernetes to launch a secure client pod, get a shell into the cockroachdb-client-secure pod:

      icon/buttons/copy
      $ kubectl exec -it cockroachdb-client-secure \
      -- ./cockroach sql \
      --certs-dir=/cockroach-certs \
      --host=cockroachdb-public
      
    2. Set the cluster.preserve_downgrade_option cluster setting to the version you are upgrading from:

      icon/buttons/copy
      > SET CLUSTER SETTING cluster.preserve_downgrade_option = '20.2';
      
    3. Exit the SQL shell and delete the temporary pod:

      icon/buttons/copy
      > \q
      
  5. Add a partition to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. For a cluster with 3 pods (e.g., cockroachdb-0, cockroachdb-1, cockroachdb-2), the partition value should be 2:

    icon/buttons/copy
    $ kubectl patch statefulset cockroachdb \
    -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
    
    statefulset.apps/cockroachdb patched
    
  6. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet:

    icon/buttons/copy
    $ kubectl patch statefulset cockroachdb \
    --type='json' \
    -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:v21.1.21"}]'
    
    statefulset.apps/cockroachdb patched
    
  7. Check the status of your cluster's pods. You should see one of them being restarted:

    icon/buttons/copy
    $ kubectl get pods
    
    NAME            READY     STATUS        RESTARTS   AGE
    cockroachdb-0   1/1       Running       0          2m
    cockroachdb-1   1/1       Running       0          2m
    cockroachdb-2   0/1       Terminating   0          1m
    ...
    
  8. After the pod has been restarted with the new image, start the CockroachDB built-in SQL client:

    icon/buttons/copy
    $ kubectl exec -it cockroachdb-client-secure \
    -- ./cockroach sql \
    --certs-dir=/cockroach-certs \
    --host=cockroachdb-public
    
  9. Run the following SQL query to verify that the number of under-replicated ranges is zero:

    icon/buttons/copy
    SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status;
    
      ranges_underreplicated
    --------------------------
                           0
    (1 row)        
    

    This indicates that it is safe to proceed to the next pod.

  10. Exit the SQL shell:

    icon/buttons/copy
    > \q
    
  11. Decrement the partition value by 1 to allow the next pod in the cluster to update:

    icon/buttons/copy
    $ kubectl patch statefulset cockroachdb \
    -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'
    
    statefulset.apps/cockroachdb patched
    
  12. Repeat steps 7-11 until all pods have been restarted and are running the new image (the final partition value should be 0). A scripted version of this partition walk-down is sketched after these steps.

  13. Check the image of each pod to confirm that all have been upgraded:

    icon/buttons/copy
    $ kubectl get pods \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
    
    cockroachdb-0   cockroachdb/cockroach:v21.1.21
    cockroachdb-1   cockroachdb/cockroach:v21.1.21
    cockroachdb-2   cockroachdb/cockroach:v21.1.21
    ...
    

    You can also check the CockroachDB version of each node in the DB Console.

  14. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day).

    If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.

    Note:

    This is only possible when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) are auto-finalized.

    To finalize the upgrade, re-enable auto-finalization:

    1. Start the CockroachDB built-in SQL client:

      icon/buttons/copy
      $ kubectl exec -it cockroachdb-client-secure \
      -- ./cockroach sql \
      --certs-dir=/cockroach-certs \
      --host=cockroachdb-public
      
    2. Re-enable auto-finalization:

      icon/buttons/copy
      > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
      
    3. Exit the SQL shell and delete the temporary pod:

      icon/buttons/copy
      > \q
      
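The partition walk-down in steps 5-12 lends itself to scripting. The following is a minimal sketch, assuming the pod, certificate, and StatefulSet names used above and a 3-pod cluster; it prints the under-replicated count for you to inspect rather than gating on it automatically, so it is not a substitute for watching cluster health during a production upgrade:

# After the image has been updated (step 6), walk the partition from 2 to 0.
# At each step, wait for the affected pod to restart, then print the number
# of under-replicated ranges; proceed only once it reads zero.
for p in 2 1 0; do
  kubectl patch statefulset cockroachdb \
    -p "{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":$p}}}}"
  kubectl rollout status statefulset cockroachdb --timeout=10m
  kubectl exec cockroachdb-client-secure \
    -- ./cockroach sql \
    --certs-dir=/cockroach-certs \
    --host=cockroachdb-public \
    --execute "SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status;"
done
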
  1. Verify that you can upgrade.

    To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production release and not a testing release (alpha/beta).

    Therefore, in order to upgrade to v21.1, you must be on a production release of v20.2.

    1. If you are upgrading to v21.1 from a production release earlier than v20.2, or from a testing release (alpha/beta), first upgrade to a production release of v20.2. Be sure to complete all the steps.
    2. Then return to this page and perform a second upgrade to v21.1.
    3. If you are upgrading from any production release of v20.2, or from any earlier v21.1 release, you do not have to go through intermediate releases; continue to step 2.
  2. Verify the overall health of your cluster using the DB Console. On the Overview:

    • Under Node Status, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or decommission them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually).
    • Under Replication Status, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to identify and resolve the cause of range under-replication and/or unavailability before beginning your upgrade.
    • In the Node List:
      • Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over.
      • Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to Metrics > Dashboard: Hardware and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider adding nodes to your cluster before beginning your upgrade.
  3. Review the backward-incompatible changes in v21.1 and deprecated features. If any affect your deployment, make the necessary changes before starting the rolling upgrade to v21.1.

  4. Decide how the upgrade will be finalized.

    By default, after all nodes are running the new version, the upgrade process will be auto-finalized. This will enable certain features and performance improvements introduced in v21.1. After finalization, however, it will no longer be possible to perform a downgrade to v20.2. In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a backup created prior to the upgrade. For this reason, we recommend disabling auto-finalization so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step.

    Note:

    Finalization only applies when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) can always be downgraded.

    1. Get a shell into the pod with the cockroach binary created earlier and start the CockroachDB built-in SQL client:

      icon/buttons/copy
      $ kubectl exec -it cockroachdb-client-secure \
      -- ./cockroach sql \
      --certs-dir=/cockroach-certs \
      --host=my-release-cockroachdb-public
      
    2. Set the cluster.preserve_downgrade_option cluster setting to the version you are upgrading from:

      icon/buttons/copy
      > SET CLUSTER SETTING cluster.preserve_downgrade_option = '20.2';
      
    3. Exit the SQL shell and delete the temporary pod:

      icon/buttons/copy
      > \q
      
  5. Add a partition to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. For a cluster with 3 pods (e.g., my-release-cockroachdb-0, my-release-cockroachdb-1, my-release-cockroachdb-2), the partition value should be 2:

    icon/buttons/copy
    $ helm upgrade \
    my-release \
    cockroachdb/cockroachdb \
    --set statefulset.updateStrategy.rollingUpdate.partition=2 \
    --reuse-values
    
  6. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet:

    Note:

    For Helm, you must delete the cluster initialization job that was created when the cluster was deployed before the cluster version can be changed.

    icon/buttons/copy
    $ kubectl delete job my-release-cockroachdb-init
    
    icon/buttons/copy
    $ helm upgrade \
    my-release \
    cockroachdb/cockroachdb \
    --set image.tag=v21.1.21 \
    --reuse-values
    
  7. Check the status of your cluster's pods. You should see one of them being restarted:

    icon/buttons/copy
    $ kubectl get pods
    
    NAME                                READY     STATUS              RESTARTS   AGE
    my-release-cockroachdb-0            1/1       Running             0          2m
    my-release-cockroachdb-1            1/1       Running             0          3m
    my-release-cockroachdb-2            0/1       ContainerCreating   0          25s
    my-release-cockroachdb-init-nwjkh   0/1       ContainerCreating   0          6s
    ...
    
    Note:

    Ignore the pod for cluster initialization. It is re-created as a byproduct of the StatefulSet configuration but does not impact your existing cluster.

  8. After the pod has been restarted with the new image, start the CockroachDB built-in SQL client:

    icon/buttons/copy
    $ kubectl exec -it cockroachdb-client-secure \
    -- ./cockroach sql \
    --certs-dir=/cockroach-certs \
    --host=my-release-cockroachdb-public
    
  9. Run the following SQL query to verify that the number of under-replicated ranges is zero:

    icon/buttons/copy
    SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status;
    
      ranges_underreplicated
    --------------------------
                           0
    (1 row)        
    

    This indicates that it is safe to proceed to the next pod.

  10. Exit the SQL shell:

    icon/buttons/copy
    > \q
    
  11. Decrement the partition value by 1 to allow the next pod in the cluster to update:

    icon/buttons/copy
    $ helm upgrade \
    my-release \
    cockroachdb/cockroachdb \
    --set statefulset.updateStrategy.rollingUpdate.partition=1 \
    --reuse-values
    
  12. Repeat steps 7-11 until all pods have been restarted and are running the new image (the final partition value should be 0).

  13. Check the image of each pod to confirm that all have been upgraded:

    icon/buttons/copy
    $ kubectl get pods \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
    
    my-release-cockroachdb-0    cockroachdb/cockroach:v21.1.21
    my-release-cockroachdb-1    cockroachdb/cockroach:v21.1.21
    my-release-cockroachdb-2    cockroachdb/cockroach:v21.1.21
    ...
    

    You can also check the CockroachDB version of each node in the DB Console.

  14. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day).

    If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.

    Note:

    This is only possible when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) are auto-finalized.

    To finalize the upgrade, re-enable auto-finalization (a verification sketch follows these steps):

    1. Get a shell into the pod with the cockroach binary created earlier and start the CockroachDB built-in SQL client:

      icon/buttons/copy
      $ kubectl exec -it cockroachdb-client-secure \
      -- ./cockroach sql \
      --certs-dir=/cockroach-certs \
      --host=my-release-cockroachdb-public
      
    2. Re-enable auto-finalization:

      icon/buttons/copy
      > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
      
    3. Exit the SQL shell and delete the temporary pod:

      icon/buttons/copy
      > \q
      
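After resetting cluster.preserve_downgrade_option, you can confirm that finalization completed by checking the active cluster version. This is a minimal sketch, assuming the same client pod and service names used above:

# The version setting reports 21.1 once the upgrade has been finalized.
kubectl exec cockroachdb-client-secure \
  -- ./cockroach sql \
  --certs-dir=/cockroach-certs \
  --host=my-release-cockroachdb-public \
  --execute "SHOW CLUSTER SETTING version;"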
