Create a CockroachDB Dedicated Cluster

This page walks you through the process of creating a CockroachDB Dedicated cluster. Note that only CockroachDB Cloud Console Administrators can create clusters. If you are a Developer and need to create a cluster, contact your CockroachDB Cloud Administrator.

Tip:

To create and connect to a 30-day free CockroachDB Dedicated cluster and run your first query, see the Quickstart.

Step 1. Start the cluster creation process

  1. If you haven't already, sign up for a CockroachDB Cloud account.
  2. Log in to your CockroachDB Cloud account.
  3. If there are multiple organizations in your account, select the correct organization in the top right corner.
  4. On the Overview page, click Create Cluster.
  5. Select the Dedicated plan.

Step 2. Select the cloud provider

In the Cloud provider section, select either Google Cloud or AWS as your preferred cloud provider.

CockroachDB Cloud GCP clusters use N2 standard machine types and Persistent Disk storage. AWS clusters use M5 instance types and Elastic Block Store (EBS). The IOPS associated with each node size in GCP is equal to 30 times the storage size, and the IOPS for AWS nodes is equal to 15 times the storage size.
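
To make the IOPS relationship concrete, the following is a minimal sketch (a hypothetical helper, not part of any CockroachDB tooling) that applies the multipliers described above:

    # Illustrative sketch: provisioned IOPS per node, derived from the
    # multipliers described above (30x storage for GCP, 15x for AWS).
    def estimated_iops(storage_gib: int, provider: str) -> int:
        multiplier = {"GCP": 30, "AWS": 15}[provider]
        return storage_gib * multiplier

    print(estimated_iops(150, "GCP"))  # Option 2 on GCP: 4500
    print(estimated_iops(150, "AWS"))  # Option 2 on AWS: 2250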

Note:

If you created a CockroachDB Dedicated cluster before December 1, 2021, your cluster may have a different machine type, IOPS, and pricing. Your cluster will be transitioned to the current hardware configuration by the end of the month.

The choice of cloud provider determines the price per node. For a pricing comparison, refer to the following table:

Hardware configuration             GCP Pricing (per node, per hour)   AWS Pricing (per node, per hour)
Option 1 (2 vCPU, 60 GiB disk)     $0.50                              $0.55
Option 2 (4 vCPU, 150 GiB disk)    $0.89                              $1.02
Option 3 (8 vCPU, 500 GiB disk)    $1.78                              $2.00
Option 4 (16 vCPU, 900 GiB disk)   $3.83                              $3.83

CockroachDB Cloud does not charge you for data transfer costs.
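
To get a rough sense of what a configuration costs, multiply the per-node hourly rate from the table above by the number of nodes and the hours in a billing period. The sketch below is only an estimate (it assumes an average of about 730 hours per month and ignores taxes); the Cloud Console shows the authoritative figure.

    # Rough monthly cost estimate based on the per-node hourly rates above.
    # Assumes ~730 hours per month on average; taxes are not included.
    HOURLY_RATE = {
        ("GCP", "Option 2"): 0.89,
        ("AWS", "Option 2"): 1.02,
    }

    def estimated_monthly_cost(provider: str, option: str, nodes: int) -> float:
        return HOURLY_RATE[(provider, option)] * nodes * 730

    print(f"${estimated_monthly_cost('GCP', 'Option 2', 9):,.2f}")  # ~$5,847 for 9 nodes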

Step 3. Select the region(s)

In the Regions & nodes section, select a region. For optimal performance, select the cloud provider region in which you are running your application. For example, if your application is deployed in GCP's us-east1 region, select us-east1 for your CockroachDB Dedicated cluster.

To create a multi-region cluster, click Add regions until you have the desired number of regions.

Note:

Multi-region clusters must contain at least 3 regions to ensure that data spread across regions can survive the loss of one region. See Planning your cluster for more information about our requirements and recommendations for cluster configuration.

Known issue: We had to temporarily disable the following GCP regions due to GCP's quota restrictions:

  • Mumbai (asia-south1)
  • Osaka (asia-northeast2)
  • Hamina (europe-north1)
  • Frankfurt (europe-west3)
  • Zurich (europe-west6)

If you want to create a cluster in a disabled region, please contact Support.

Step 4. Select the number of nodes

In the Regions & nodes section, select the number of nodes.

  • For single-region application development and testing, you may create a 1-node cluster.
  • For single-region production deployments, we recommend a minimum of 3 nodes. The number of nodes also depends on your storage capacity and performance requirements. See Example for further guidance.
  • For multi-region deployments, we require a minimum of 3 nodes per region. For best performance and stability, you should use the same number of nodes in each region.
  • See Planning your cluster for more information about our requirements and recommendations for cluster configuration.

Note:

At this time, you cannot add nodes to a single-node cluster once it is created.

Currently, you can add a maximum of 150 nodes to your cluster. For larger configurations, contact us.
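
The region and node rules from this step and the previous one can be summarized as a simple sanity check. The function below is an illustrative sketch of those rules only (at least 3 regions and 3 nodes per region for multi-region clusters, the same node count in each region, and at most 150 nodes in total); it is not part of any CockroachDB API.

    # Illustrative sanity check for the region and node rules described in
    # Steps 3 and 4; this is not part of any CockroachDB tooling.
    def validate_topology(nodes_per_region: list[int]) -> list[str]:
        problems = []
        if len(nodes_per_region) > 1:  # multi-region cluster
            if len(nodes_per_region) < 3:
                problems.append("multi-region clusters need at least 3 regions")
            if any(n < 3 for n in nodes_per_region):
                problems.append("each region needs at least 3 nodes")
            if len(set(nodes_per_region)) > 1:
                problems.append("use the same number of nodes in each region")
        if sum(nodes_per_region) > 150:
            problems.append("clusters larger than 150 nodes require contacting us")
        return problems

    print(validate_topology([3, 3, 3]))  # [] -- a valid 3-region, 9-node cluster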

Step 5. Select the hardware per node

The choice of hardware per node determines the cost, throughput, and performance characteristics of your cluster. To select the hardware configuration, consider the following factors:

  • Capacity: Total raw data size you expect to store without replication.
  • Replication: The default replication factor for a CockroachDB Cloud cluster is 3.
  • Buffer: Additional buffer (overhead data, accounting for data growth, etc.). If you are importing an existing dataset, we recommend you provision at least 50% additional storage to account for the import functionality.
  • Compression: The percentage of savings you can expect to achieve with compression. With CockroachDB's default compression algorithm, we typically see about a 40% savings on raw data size.
  • Transactions per second: Each vCPU can handle around 1000 transactions per second. Hence an Option 1 node (2 vCPUs) can handle 2000 transactions per second and an Option 2 node (4 vCPUs) can handle 4000 transactions per second. If you need more than 4000 transactions per second per node, contact us.

Tip:

When scaling up your cluster, it is generally more effective to increase node size up to 16 vCPUs before adding more nodes. For most production applications, we recommend at least 4 to 8 vCPUs per node.

For more detailed disk performance numbers, see the relevant GCP and AWS documentation.

To change the hardware configuration after the cluster is created, contact Support.

See Example for further guidance.

Step 6. Name the cluster

The cluster name must be 6-20 characters in length, and can include lowercase letters, numbers, and dashes (but no leading or trailing dashes).
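
Expressed as a pattern, the naming rule above looks like the following sketch (an illustrative helper only, shown to make the constraints concrete):

    import re

    # The naming rule above as a regular expression: 6-20 characters,
    # lowercase letters, numbers, and dashes, with no leading or trailing dash.
    NAME_PATTERN = re.compile(r"^[a-z0-9][a-z0-9-]{4,18}[a-z0-9]$")

    print(bool(NAME_PATTERN.match("my-first-cluster")))  # True
    print(bool(NAME_PATTERN.match("-bad-name-")))        # False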

Click Next. Optionally, you can enable VPC peering for your cluster.

Step 7. Enable VPC Peering (optional)

VPC peering is only available for GCP clusters. For AWS clusters, you can set up AWS PrivateLink after creating your cluster.

Note:

If you have multiple clusters, you will have to create a new VPC Peering or AWS PrivateLink connection for each cluster.

You can use VPC peering to connect your GCP application to the CockroachDB Cloud cluster. To enable VPC peering:

  1. Under Additional Settings, toggle the VPC Peering switch to Yes.
  2. Configure the IP address range and size (in CIDR format) for the CockroachDB Cloud network based on the following considerations:

    • As per GCP's overlapping subnets restriction, configure an IP range that doesn't overlap with the IP ranges in your application network (see the overlap-check sketch after this list).
    • The IP range and size cannot be changed after the cluster is created. Configuring a smaller IP range size may limit your ability to expand into multiple regions in the future. We recommend configuring an IP range size of /16 or lower.

      Alternatively, you can use CockroachDB Cloud's default IP range and size (172.28.0.0/14) as long as it doesn't overlap with the IP ranges in your network.

      To use the default IP range, select Use the default IP range. To configure your own IP range, select Configure the IP range and enter the IP range and size in CIDR format.

      Note:

      Custom IP ranges are temporarily unavailable for multi-region clusters.

  3. Click Next.
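
A quick way to confirm that the range you plan to enter doesn't overlap with your application network is to check it programmatically. The sketch below uses Python's standard ipaddress module; the application subnets shown are placeholders for your own VPC ranges.

    import ipaddress

    # Placeholder application subnets; substitute your own VPC ranges.
    app_ranges = [ipaddress.ip_network("10.0.0.0/16"),
                  ipaddress.ip_network("192.168.1.0/24")]

    # Candidate range for the CockroachDB Cloud network
    # (172.28.0.0/14 is the default mentioned above).
    candidate = ipaddress.ip_network("172.28.0.0/14")

    overlapping = [str(r) for r in app_ranges if candidate.overlaps(r)]
    print("no overlap" if not overlapping else f"overlaps with: {overlapping}")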

Step 8. Enter billing details

  1. On the Summary page, verify your selections for the cloud provider, region(s), number of nodes, and the hardware configuration per node.
  2. Verify the hourly estimated cost for the cluster.
    Note:
    The cost displayed does not include taxes.
    You will be billed monthly.
  3. Add your preferred payment method.
  4. If applicable, the 30-day trial code is pre-applied to your cluster.
    Note:
    Make sure that you delete your trial cluster before the trial expires. Your credit card will be charged after the trial ends. You can check the validity of the code on the Billing page.
  5. Click Create cluster.

Your cluster will be created in approximately 20-30 minutes.

Example

Let's say we want to create a cluster for an application running on Google Cloud Platform in the us-east1 region that requires 2000 TPS.

Suppose the amount of raw data we expect to store without replication is 500 GB. At 40% compression, we can expect savings of 200 GB, so the amount of data we need to store is 300 GB.

Let's add a storage buffer of 50% to account for overhead and data growth. The net amount of data to be stored is then 450 GB.

With the default replication factor of 3, the total amount of data stored is (3 * 450 GB) = 1350 GB.

To determine the number of nodes and the hardware configuration to store 1350 GB of data, refer to the table in Step 2. We can see that the best option to store 1350 GB of data is 9 Option 2 nodes.

Let's verify that 9 Option 2 nodes meet our performance requirement of 2000 TPS. 9 Option 2 nodes have (9 * 4) = 36 vCPUs. Since each vCPU can handle around 1000 TPS, this configuration easily meets our performance requirement.

Thus our final configuration is as follows:

Component         Selection
Cloud provider    GCP
Region            us-east1
Number of nodes   9
Size              Option 2
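
The arithmetic in this example can be reproduced with a short script. This is an illustrative sketch of the sizing math described above (40% compression, 50% buffer, replication factor 3, roughly 1000 TPS per vCPU), not an official sizing tool.

    import math

    # Reproduce the sizing arithmetic from the example above.
    raw_gb        = 500     # raw data, without replication
    compression   = 0.40    # ~40% savings with the default compression
    growth_buffer = 0.50    # 50% buffer for overhead and data growth
    replication   = 3       # default replication factor
    node_disk_gb  = 150     # Option 2 disk size
    node_vcpus    = 4       # Option 2 vCPUs

    compressed  = raw_gb * (1 - compression)        # 300 GB
    with_buffer = compressed * (1 + growth_buffer)  # 450 GB
    total       = with_buffer * replication         # 1350 GB
    nodes       = math.ceil(total / node_disk_gb)   # 9 Option 2 nodes

    print(nodes, nodes * node_vcpus * 1000)  # 9 nodes, ~36000 TPS capacity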

What's next

To start using your CockroachDB Cloud cluster, see the following pages:

If you created a multi-region cluster, it is important to carefully choose:

  • The right survival goal for each database.
  • The right table locality for each of your tables.

Not doing so can result in unexpected latency and reduced resiliency. For more information, see the Multi-Region Capabilities Overview.
