This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Google Cloud Platform's Compute Engine (GCE), using Google's TCP Proxy Load Balancing service to distribute client traffic.

Warning:
If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.

Requirements

  • You must have SSH access to each machine. This is necessary for distributing and starting CockroachDB binaries.

  • Your network configuration must allow TCP communication on the following ports:

    • 26257 for intra-cluster and client-cluster communication
    • 8080 to expose your Admin UI

Recommendations

  • If you plan to use CockroachDB in production, carefully review the Production Checklist.

  • Consider using a secure cluster instead. Using an insecure cluster comes with risks:

    • Your cluster is open to any client that can access any node's IP addresses.
    • Any user, even root, can log in without providing a password.
    • Any user, connecting as root, can read or write any data in your cluster.
    • There is no network encryption or authentication, and thus no confidentiality.
  • Decide how you want to access your Admin UI:

    Access Level Description
    Partially open Set a firewall rule to allow only specific IP addresses to communicate on port 8080.
    Completely open Set a firewall rule to allow all IP addresses to communicate on port 8080.
    Completely closed Set a firewall rule to disallow all communication on port 8080. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI.
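For the completely closed option, the SSH tunnel can be sketched as follows. The username and node address are placeholders; substitute the user and host you use for SSH access:

```shell
# Forward local port 8080 to the node's Admin UI over SSH.
# "ubuntu" and <node address> are placeholders, not values from this tutorial.
ssh -L 8080:localhost:8080 ubuntu@<node address>
# Then browse to http://localhost:8080 on your local machine.
```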

Step 1. Configure your network

CockroachDB requires TCP communication on two ports:

  • 26257 (tcp:26257) for inter-node communication (i.e., working as a cluster)
  • 8080 (tcp:8080) for exposing your Admin UI

Inter-node communication works by default using your GCE instances' internal IP addresses, which allow communication with other instances on CockroachDB's default port 26257. However, to expose your admin UI and allow traffic from the TCP proxy load balancer and health checker to your instances, you need to create firewall rules for your project.

Creating Firewall Rules

When creating firewall rules, we recommend using Google Cloud Platform's tag feature, which lets you apply a rule only to instances that have a matching tag.

Admin UI

Field Recommended Value
Name cockroachadmin
Source filter IP ranges
Source IP ranges Your local network's IP ranges
Allowed protocols... tcp:8080
Target tags cockroachdb
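The fields above map onto a single gcloud invocation. A sketch, assuming the gcloud CLI is configured for your project; replace the placeholder with your local network's CIDR range:

```shell
# Create the Admin UI firewall rule described in the table above.
# <your IP range> is a placeholder for your local network's IP ranges.
gcloud compute firewall-rules create cockroachadmin \
  --allow tcp:8080 \
  --source-ranges <your IP range> \
  --target-tags cockroachdb
```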

Application Data

Applications will not connect directly to your CockroachDB nodes. Instead, they'll connect to GCE's TCP Proxy Load Balancing service, which automatically routes traffic to the instances that are closest to the user. Because this service is implemented at the edge of the Google Cloud, you'll need to create a firewall rule to allow traffic from the load balancer and health checker to your instances. This is covered in Step 4.

Warning:
When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using Network TCP Load Balancing instead, but note that it can't be used across regions. You might also consider using the HAProxy load balancer (see Manual Deployment for guidance).

Step 2. Create instances

Create an instance for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload.

  • Run at least 3 nodes to ensure survivability.

  • Use n1-standard or n1-highcpu predefined VMs, or custom VMs, with Local SSDs or SSD persistent disks. For example, Cockroach Labs has used custom VMs (8 vCPUs and 16 GiB of RAM per VM) for internal testing.

  • Do not use f1 or g1 shared-core machines, which limit the load on a single core.

  • If you used a tag for your firewall rules, when you create the instance, select Management, disk, networking, SSH keys. Then on the Networking tab, in the Network tags field, enter cockroachdb.

For more details, see Hardware Recommendations and Cluster Topology.

Step 3. Synchronize clocks

CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node.
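To make the shutdown threshold concrete: 80% of the default 500ms maximum offset works out to 400ms, as a quick sketch shows:

```shell
# Compute the clock-offset shutdown threshold: 80% of the default
# 500ms maximum offset. A node past this offset shuts itself down.
max_offset_ms=500
threshold_ms=$(( max_offset_ms * 80 / 100 ))
echo "${threshold_ms}ms"
```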

Compute Engine instances are preconfigured to use NTP, which should keep offsets in the single-digit milliseconds. However, Google can't predict how external NTP services, such as pool.ntp.org, will handle the leap second. Therefore, you should configure each machine to use Google's internal NTP service.

Step 4. Set up TCP Proxy Load Balancing

Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing:

  • Performance: Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).

  • Reliability: Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.

GCE offers fully-managed TCP Proxy Load Balancing. This service lets you use a single IP address for all users around the world, automatically routing traffic to the instances that are closest to the user.

Warning:
When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using Network TCP Load Balancing instead, but note that it can't be used across regions. You might also consider using the HAProxy load balancer (see the On-Premises tutorial for guidance).

To use GCE's TCP Proxy Load Balancing service:

  1. For each zone in which you're running an instance, create a distinct instance group.
    • To ensure that the load balancer knows where to direct traffic, specify a port name mapping, with tcp26257 as the Port name and 26257 as the Port number.
  2. Add the relevant instances to each instance group.
  3. Configure Proxy Load Balancing.
    • During backend configuration, create a health check, setting the Protocol to HTTP, the Port to 8080, and the Request path to /health?ready=1. This health endpoint ensures that load balancers don't direct traffic to nodes that are live but not ready to receive requests.
      • If you want to maintain long-lived SQL connections that may be idle for more than tens of seconds, increase the backend timeout setting accordingly.
    • During frontend configuration, reserve a static IP address and choose a port. Note this address/port combination, as you'll use it for all of your client connections.
  4. Create a firewall rule to allow traffic from the load balancer and health checker to your instances. This is necessary because TCP Proxy Load Balancing is implemented at the edge of the Google Cloud.
    • Be sure to set Source IP ranges to 130.211.0.0/22 and 35.191.0.0/16 and set Target tags to cockroachdb (not to the value specified in the linked instructions).
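The health check and firewall rule in this step can be sketched with the gcloud CLI. The rule and health check names here are assumptions; the source ranges are the load balancer and health checker ranges given above:

```shell
# Health check hitting CockroachDB's readiness endpoint.
gcloud compute health-checks create http cockroach-health \
  --port 8080 \
  --request-path "/health?ready=1"

# Allow traffic from the load balancer and health checker ranges
# to reach the tagged instances on the CockroachDB port.
gcloud compute firewall-rules create cockroachlb \
  --allow tcp:26257 \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --target-tags cockroachdb
```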

Step 5. Start nodes

For each initial node of your cluster, complete the following steps:

Note:
After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
  1. SSH to the machine where you want the node to run.

  2. Download the CockroachDB archive for Linux, and extract the binary:

    $ wget -qO- https://binaries.cockroachdb.com/cockroach-v2.0.3.linux-amd64.tgz \
    | tar  xvz
    
  3. Copy the binary into the PATH:

    $ cp -i cockroach-v2.0.3.linux-amd64/cockroach /usr/local/bin
    

    If you get a permissions error, prefix the command with sudo.

  4. Run the cockroach start command:

    $ cockroach start --insecure \
    --host=<node1 address> \
    --locality=<key-value pairs> \
    --cache=.25 \
    --max-sql-memory=.25 \
    --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
    --background
    

    This command primes the node to start, using the following flags:

    Flag Description
    --insecure Indicates that the cluster is insecure, with no network encryption or authentication.
    --host Specifies the hostname or IP address to listen on for intra-cluster and client communication, as well as to identify the node in the Admin UI. If it is a hostname, it must be resolvable from all nodes, and if it is an IP address, it must be routable from all nodes.

    If you want the node to listen on multiple interfaces, leave --host out.

    If you want the node to communicate with other nodes on an internal address (e.g., within a private network) while listening on all interfaces, leave --host out and set the --advertise-host flag to the internal address.
    --locality Key-value pairs that describe the location of the node, e.g., country, region, datacenter, rack, etc. It is recommended to set --locality when deploying across multiple datacenters or when there is otherwise high latency between nodes. It is also required to use certain enterprise features. For more details, see Locality.
    --cache
    --max-sql-memory
    Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing (see Recommended Production Settings for more details).
    --join Identifies the address and port of 3-5 of the initial nodes of the cluster.
    --background Starts the node in the background so you gain control of the terminal to issue more commands.

    For other flags not explicitly set, the command uses default values. For example, the node stores data in --store=cockroach-data, binds internal and client communication to --port=26257, and binds Admin UI HTTP requests to --http-port=8080. To set these options manually, see Start a Node.

  5. Repeat these steps for each additional node that you want in your cluster.
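The --join flag used in the start command lists the addresses of the initial nodes on port 26257. As a sketch, the string can be assembled from a list of addresses (the 10.0.0.x IPs here are hypothetical placeholders for your instances' internal addresses):

```shell
# Build a --join string from a whitespace-separated list of node
# addresses. The IPs below are placeholders, not real cluster values.
nodes="10.0.0.1 10.0.0.2 10.0.0.3"
join=""
for n in $nodes; do
  # Append ",<addr>:26257", omitting the leading comma on the first node.
  join="${join:+$join,}${n}:26257"
done
echo "--join=$join"
```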

Step 6. Initialize the cluster

On your local machine, complete the node startup process and have them join together as a cluster:

  1. Install CockroachDB on your local machine, if you haven't already.

  2. Run the cockroach init command, with the --host flag set to the address of any node:

    $ cockroach init --insecure --host=<address of any node>
    

    Each node then prints helpful details to the standard output, such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.
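To confirm that all nodes have joined, you can check node status from your local machine using the same binary. This is a sketch; substitute the placeholder for any node's address:

```shell
# List the nodes that have joined the cluster.
cockroach node status --insecure --host=<address of any node>
```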

Step 7. Test the cluster

CockroachDB replicates and distributes data for you behind-the-scenes and uses a Gossip protocol to enable each node to locate data across the cluster.

To test this, use the built-in SQL client locally as follows:

  1. On your local machine, launch the built-in SQL client, with the --host flag set to the address of any node:

    $ cockroach sql --insecure --host=<address of any node>
    
  2. Create an insecurenodetest database:

    > CREATE DATABASE insecurenodetest;
    
  3. Use \q or ctrl-d to exit the SQL shell.

  4. Launch the built-in SQL client, with the --host flag set to the address of a different node:

    $ cockroach sql --insecure --host=<address of different node>
    
  5. View the cluster's databases, which will include insecurenodetest:

    > SHOW DATABASES;
    
    +--------------------+
    |      Database      |
    +--------------------+
    | crdb_internal      |
    | information_schema |
    | insecurenodetest   |
    | pg_catalog         |
    | system             |
    +--------------------+
    (5 rows)
    
  6. Use \q or ctrl-d to exit the SQL shell.

Step 8. Run a sample workload

CockroachDB offers a pre-built workload binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the TPC-C workload.

Tip:
For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.
  1. SSH to the machine where you want to run the sample TPC-C workload.

    This should be a machine that is not running a CockroachDB node.

  2. Download workload and make it executable:

    $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST && chmod 755 workload.LATEST
    
  3. Rename and copy workload into the PATH:

    $ cp -i workload.LATEST /usr/local/bin/workload
    
  4. Start the TPC-C workload, pointing it at the IP address of the load balancer:

    $ workload run tpcc \
    --drop \
    --init \
    --duration=20m \
    --tolerate-errors \
    "postgresql://root@<IP ADDRESS OF LOAD BALANCER>:26257/tpcc?sslmode=disable"
    

    This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.

    Tip:
    For more tpcc options, use workload run tpcc --help. For details about other load generators included in workload, use workload run --help.

  5. To monitor the load generator's progress, open the Admin UI by pointing a browser to the address in the admin field in the standard output of any node on startup.

    Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click Metrics on the left, select the SQL dashboard, and then check the SQL Connections graph. You can use the Graph menu to filter the graph for specific nodes.

Step 9. Set up monitoring and alerting

Despite CockroachDB's various built-in safeguards against failure, it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.

For details about available monitoring options and the most important events and metrics to alert on, see Monitoring and Alerting.

Step 10. Scale the cluster

For each additional node you want to add to the cluster, complete the following steps:

  1. SSH to the machine where you want the node to run.

  2. Download the CockroachDB archive for Linux, and extract the binary:

    $ wget -qO- https://binaries.cockroachdb.com/cockroach-v2.0.3.linux-amd64.tgz \
    | tar  xvz
    
  3. Copy the binary into the PATH:

    $ cp -i cockroach-v2.0.3.linux-amd64/cockroach /usr/local/bin
    

    If you get a permissions error, prefix the command with sudo.

  4. Run the cockroach start command just like you did for the initial nodes:

    $ cockroach start --insecure \
    --host=<node4 address> \
    --locality=<key-value pairs> \
    --cache=.25 \
    --max-sql-memory=.25 \
    --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
    --background
    
  5. Update your load balancer to recognize the new node.

Step 11. Use the cluster

Now that your deployment is working, you can:

  1. Implement your data model.
  2. Create users and grant them privileges.
  3. Connect your application. Be sure to connect your application to the GCE load balancer, not to a CockroachDB node.
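For example, an application's connection string should point at the load balancer's reserved address and port from Step 4, never at an individual node. A sketch using the built-in SQL shell as a stand-in client, with placeholders for the address and database:

```shell
# Connect through the load balancer, not directly to a node.
# <load balancer address> and <database> are placeholders.
cockroach sql --insecure \
  --url "postgresql://root@<load balancer address>:26257/<database>?sslmode=disable"
```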
