This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Digital Ocean, using Digital Ocean's managed load balancing service to distribute client traffic.

If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can deploy an insecure cluster instead; see the insecure deployment instructions.

Requirements

  • Locally, you must have CockroachDB installed, which you’ll use to generate and manage your deployment’s certificates.

  • In Digital Ocean, you must have SSH access to each Droplet with root or sudo privileges. This is necessary for distributing binaries and starting CockroachDB.

Recommendations

  • For guidance on cluster topology, clock synchronization, and file descriptor limits, see Recommended Production Settings.

  • Set up your Droplets using private networking.

  • Decide how you want to access your Admin UI:

    • Only from specific IP addresses, which requires you to set firewall rules to allow communication on port 8080 (documented on this page).
    • Using an SSH tunnel, which requires you to use --http-host=localhost when starting your nodes.
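
    If you choose the SSH tunnel option, the tunnel itself can be opened with a command along these lines (a sketch; the username and node address are placeholders for your own values):

    ```shell
    # Forward local port 8080 to the Admin UI listening on the node itself.
    # <username> and <node external IP address> are placeholders.
    ssh -L 8080:localhost:8080 <username>@<node external IP address>
    ```

    With the tunnel open, the Admin UI is reachable on port 8080 of localhost on your local machine.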

Step 1. Create Droplets

Create Droplets with private networking for each node you plan to have in your cluster. We recommend:

  • Running at least 3 nodes to ensure survivability.
  • Selecting the same continent for all of your Droplets for best performance.
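
If you use Digital Ocean's doctl CLI, Droplet creation can be scripted. This is a sketch only: it assumes doctl is installed and authenticated, and the region, size, and image slugs shown are examples, not recommendations.

```shell
# Create three Droplets with private networking enabled (slugs are examples;
# check `doctl compute region list` and `doctl compute size list` for current values).
for i in 1 2 3; do
  doctl compute droplet create "cockroach-node-$i" \
    --region nyc1 \
    --size 4gb \
    --image ubuntu-16-04-x64 \
    --enable-private-networking
done
```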

Step 2. Set up load balancing

Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use TCP load balancing:

  • Performance: Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).

  • Reliability: Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.

Digital Ocean offers fully managed load balancers to distribute traffic between Droplets.

  1. Create a Digital Ocean Load Balancer. Be sure to:
    • Set forwarding rules to route TCP traffic from the load balancer's port 26257 to port 26257 on the node Droplets.
    • Configure health checks to use HTTP port 8080 and path /health.
  2. Note the provisioned IP Address for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster.
If you would prefer to use HAProxy instead of Digital Ocean's managed load balancing, see Manual Deployment for guidance.
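
You can check the endpoint that the load balancer's health checks will hit directly from a node. This is a quick manual check only; on a secure cluster the endpoint is served over HTTPS, so `-k` is used here to skip certificate verification.

```shell
# Query a node's health endpoint, the same path the load balancer polls.
# <node internal IP address> is a placeholder.
curl -k https://<node internal IP address>:8080/health
```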

Step 3. Configure your network

Set up a firewall for each of your Droplets, allowing TCP communication on the following two ports:

  • 26257 (tcp:26257) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes
  • 8080 (tcp:8080) for exposing your Admin UI
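
On Ubuntu Droplets, for example, the two rules can be added with ufw. This is a sketch; in production, restrict the source addresses for each rule to your private network and the specific IPs that need Admin UI access.

```shell
# Allow inter-node/client traffic and the Admin UI, then enable the firewall.
sudo ufw allow 26257/tcp
sudo ufw allow 8080/tcp
sudo ufw allow ssh   # keep SSH access before enabling the firewall
sudo ufw enable
```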

For guidance, see Digital Ocean's guide to configuring firewalls based on the Droplet's OS.

Step 4. Generate certificates

Locally, you'll need to create the following certificates and keys:

  • A certificate authority (CA) key pair (ca.crt and ca.key)
  • A client key pair for the root user
  • A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP address provisioned for the Digital Ocean Load Balancer.

Before beginning, it's useful to collect each machine's internal and external IP addresses, as well as any server names you want to issue certificates for.

  1. Create a certs directory and a safe directory to keep your CA key:

    $ mkdir certs
    $ mkdir my-safe-directory
    
  2. Create the CA key pair:

    $ cockroach cert create-ca \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  3. Create a client key pair for the root user:

    $ cockroach cert create-client \
    root \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to addresses provisioned for the Digital Ocean Load Balancer:

    • <node internal IP address>, which is the node Droplet's Private IP.
    • <node external IP address>, which is the node Droplet's ipv4 address.
    • <node hostname>, which is the node Droplet's Name.
    • <other common names for node>, which include any domain names you point to the node Droplet.
    • localhost and 127.0.0.1
    • <load balancer IP address>, which is the Digital Ocean Load Balancer's provisioned IP Address.
    • <load balancer hostname>, which is the Digital Ocean Load Balancer's Name.

    $ cockroach cert create-node \
    <node1 internal IP address> \
    <node1 external IP address> \
    <node1 hostname> \
    <other common names for node1> \
    localhost \
    127.0.0.1 \
    <load balancer IP address> \
    <load balancer hostname> \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key

  5. Upload the certificates to the first node:

    # Create the certs directory:
    $ ssh <username>@<node1 external IP address> "mkdir certs"
    
    # Upload the CA certificate, client (root) certificate and key, and node certificate and key:
    $ scp certs/ca.crt \
    certs/client.root.crt \
    certs/client.root.key \
    certs/node.crt \
    certs/node.key \
    <username>@<node1 external IP address>:~/certs
    
  6. Create the certificate and key for the second node, using the --overwrite flag to replace the files created for the first node:

    $ cockroach cert create-node --overwrite \
    <node2 internal IP address> \
    <node2 external IP address> \
    <node2 hostname>  \
    <other common names for node2> \
    localhost \
    127.0.0.1 \
    <load balancer IP address> \
    <load balancer hostname> \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  7. Upload the certificates to the second node:

    # Create the certs directory:
    $ ssh <username>@<node2 external IP address> "mkdir certs"
    
    # Upload the CA certificate, client (root) certificate and key, and node certificate and key:
    $ scp certs/ca.crt \
    certs/client.root.crt \
    certs/client.root.key \
    certs/node.crt \
    certs/node.key \
    <username>@<node2 external IP address>:~/certs
    
  8. Repeat steps 6 and 7 for each additional node.
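
Before uploading each node's files, you can confirm the certificate was issued to the right addresses by inspecting its Subject Alternative Names with openssl (assuming openssl is available on your local machine):

```shell
# List the names and IPs the node certificate covers; the load balancer's
# address should appear alongside the node's own addresses.
openssl x509 -in certs/node.crt -noout -text | grep -A 1 "Subject Alternative Name"
```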

Step 5. Start the first node

  1. SSH to your Droplet:

    $ ssh <username>@<node1 external IP address>
    
  2. Install the latest CockroachDB binary:

    # Get the latest CockroachDB tarball.
    $ wget https://binaries.cockroachdb.com/cockroach-v1.0.2.linux-amd64.tgz
    
    # Extract the binary.
    $ tar -xf cockroach-v1.0.2.linux-amd64.tgz  \
    --strip=1 cockroach-v1.0.2.linux-amd64/cockroach
    
    # Move the binary.
    $ sudo mv cockroach /usr/local/bin
    
  3. Start a new CockroachDB cluster with a single node, specifying the location of certificates and the address at which other nodes can reach it:

    $ cockroach start --background \
    --certs-dir=certs \
    --advertise-host=<node1 internal IP address>
    

Step 6. Add nodes to the cluster

At this point, your cluster is live and operational but contains only a single node. Next, scale your cluster by setting up additional nodes that will join the cluster.

  1. SSH to your Droplet:

    $ ssh <username>@<additional node external IP address>
    
  2. Install the latest CockroachDB binary:

    # Get the latest CockroachDB tarball.
    $ wget https://binaries.cockroachdb.com/cockroach-v1.0.2.linux-amd64.tgz
    
    # Extract the binary.
    $ tar -xf cockroach-v1.0.2.linux-amd64.tgz  \
    --strip=1 cockroach-v1.0.2.linux-amd64/cockroach
    
    # Move the binary.
    $ sudo mv cockroach /usr/local/bin
    
  3. Start a new node that joins the cluster using the first node's internal IP address:

    $ cockroach start --background  \
    --certs-dir=certs \
    --advertise-host=<node internal IP address> \
    --join=<node1 internal IP address>:26257
    
  4. Repeat these steps for each Droplet you want to use as a node.
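
Once the additional nodes are started, you can confirm from any one node that they all joined. This is a sketch, assuming the binary and certificates are in place on that node as described above:

```shell
# Each node you started should appear as a row in the output.
cockroach node status --certs-dir=certs
```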

Step 7. Test your cluster

CockroachDB replicates and distributes data behind the scenes and uses a gossip protocol to enable each node to locate data across the cluster.

To test this, use the built-in SQL client as follows:

  1. SSH to your first node:

    $ ssh <username>@<node1 external IP address>
    
  2. Launch the built-in SQL client and create a database:

    $ cockroach sql \
    --certs-dir=certs
    
    > CREATE DATABASE securenodetest;
    
  3. In another terminal window, SSH to another node:

    $ ssh <username>@<node3 external IP address>
    
  4. Launch the built-in SQL client:

    $ cockroach sql \
    --certs-dir=certs
    
  5. View the cluster's databases, which will include securenodetest:

    > SHOW DATABASES;
    
    +--------------------+
    |      Database      |
    +--------------------+
    | crdb_internal      |
    | information_schema |
    | securenodetest     |
    | pg_catalog         |
    | system             |
    +--------------------+
    (5 rows)
    
  6. Use CTRL + D, CTRL + C, or \q to exit the SQL shell.

Step 8. Test load balancing

The Digital Ocean Load Balancer created in step 2 can serve as the client gateway to the cluster. Instead of connecting directly to a CockroachDB node, clients can connect to the load balancer, which will then redirect the connection to a CockroachDB node.

To test this, use the built-in SQL client locally as follows:

  1. On your local machine, launch the built-in SQL client, with the --host flag set to the load balancer's IP address and security flags pointing to the CA cert and the client cert and key:

    $ cockroach sql \
    --certs-dir=certs \
    --host=<load balancer IP address>
    
  2. View the cluster's databases:

    > SHOW DATABASES;
    
    +--------------------+
    |      Database      |
    +--------------------+
    | crdb_internal      |
    | information_schema |
    | securenodetest     |
    | pg_catalog         |
    | system             |
    +--------------------+
    (5 rows)
    

    As you can see, the load balancer redirected the query to one of the CockroachDB nodes.

  3. Check which node you were redirected to:

    > SELECT node_id FROM crdb_internal.node_build_info LIMIT 1;
    
    +---------+
    | node_id |
    +---------+
    |       3 |
    +---------+
    (1 row)
    
  4. Use CTRL + D, CTRL + C, or \q to exit the SQL shell.

Step 9. Monitor the cluster

View your cluster's Admin UI by going to https://<any node's external IP address>:8080.

Note that your browser will consider the CockroachDB-created certificate invalid; you’ll need to click through a warning message to get to the UI.

On this page, verify that the cluster is running as expected:

  1. Click View nodes list on the right to ensure that all of your nodes successfully joined the cluster.

    Also check the Replicas column. If you have nodes with 0 replicas, it's possible you didn't properly set the --advertise-host flag to the Droplet's internal IP address. This prevents the node from receiving replicas and working as part of the cluster.

  2. Click the Databases tab on the left to verify that securenodetest is listed.

You can also use Prometheus and other third-party, open source tools to monitor and visualize cluster metrics and send notifications based on specified rules. For more details, see Monitor CockroachDB with Prometheus.

Step 10. Use the database

Now that your deployment is working, you can:

  1. Implement your data model.
  2. Create users and grant them privileges.
  3. Connect your application. Be sure to connect your application to the Digital Ocean Load Balancer, not to a CockroachDB node.
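
Most PostgreSQL-compatible drivers can reach the cluster through the load balancer with a connection URL along these lines. This is a sketch: the database name and certificate paths are the examples used on this page, and the load balancer address is a placeholder.

```shell
# Example connection URL for a PostgreSQL-wire driver, pointing at the load
# balancer and the certificates generated earlier.
# <load balancer IP address> is a placeholder.
CONNECTION_URL="postgresql://root@<load balancer IP address>:26257/securenodetest?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
```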

See Also


