This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Amazon’s AWS EC2 platform.

If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can deploy an insecure cluster instead; see the insecure deployment instructions.

Requirements

Locally, you must have CockroachDB installed, which you’ll use to generate and manage your deployment’s certificates.

In AWS, you must have SSH access (key pairs/SSH login) to each machine with root or sudo privileges. This is necessary for distributing binaries and starting CockroachDB.

Recommendations

  • All instances running CockroachDB should be members of the same Security Group.
  • Decide how you want to access your Admin UI:
    • Only from specific IP addresses, which requires you to set firewall rules to allow communication on port 8080 (documented on this page)
    • Using an SSH tunnel, which requires you to use --http-host=localhost when starting your nodes

For guidance on cluster topology, clock synchronization, and file descriptor limits, see Recommended Production Settings.
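
If you choose the SSH tunnel approach, a tunnel such as the following forwards the Admin UI port to your local machine. This is a sketch: the key path, username, and address are placeholders for your own values.

```shell
# Forward local port 8080 to the Admin UI on the node.
# <path to AWS .pem>, <username>, and <node external IP address> are placeholders.
ssh -i <path to AWS .pem> -N -L 8080:localhost:8080 <username>@<node external IP address>
```

With the tunnel open, the Admin UI is reachable at http://localhost:8080 in a local browser.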

Step 1. Configure Your Network

CockroachDB requires TCP communication on two ports:

  • 26257 for inter-node communication (i.e., working as a cluster) and connecting with applications
  • 8080 for exposing your Admin UI

You can create these rules using Security Groups’ Inbound Rules.

Inter-node communication

Field         Recommended Value
Type          Custom TCP Rule
Protocol      TCP
Port Range    26257
Source        The ID of your security group (e.g., sg-07ab277a)

Admin UI

Field         Recommended Value
Type          Custom TCP Rule
Protocol      TCP
Port Range    8080
Source        Your network’s IP ranges

Application Data

Field         Recommended Value
Type          Custom TCP Rule
Protocol      TCP
Port Range    26257
Source        Your application’s IP ranges
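
If you manage Security Groups from the command line, the same inbound rules can be created with the AWS CLI. This is a sketch: the security group ID (sg-07ab277a) and the CIDR ranges are placeholders for your own values.

```shell
# Inter-node communication: allow 26257 from members of the same security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-07ab277a \
  --protocol tcp --port 26257 \
  --source-group sg-07ab277a

# Admin UI: allow 8080 from your network's IP range.
aws ec2 authorize-security-group-ingress \
  --group-id sg-07ab277a \
  --protocol tcp --port 8080 \
  --cidr 203.0.113.0/24

# Application data: allow 26257 from your application's IP range.
aws ec2 authorize-security-group-ingress \
  --group-id sg-07ab277a \
  --protocol tcp --port 26257 \
  --cidr 198.51.100.0/24
```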

To connect your application to CockroachDB, use a PostgreSQL wire protocol driver.
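
For example, any libpq-compatible driver can connect using a standard PostgreSQL connection URL whose SSL parameters point at the client certificates generated in Step 3. A sketch, shown here with the built-in SQL client (the host is a placeholder; the same URL shape works with most drivers):

```shell
# Connect as the root user over TLS; <node external IP address> is a placeholder.
cockroach sql --url "postgresql://root@<node external IP address>:26257/?sslmode=verify-full&sslrootcert=certs/ca.cert&sslcert=certs/root.cert&sslkey=certs/root.key"
```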

Step 2. Create Instances

Create an instance for each node you plan to have in your cluster. We recommend:

  • Running at least 3 nodes to ensure survivability.
  • Selecting the same continent for all of your instances for best performance.

Step 3. Generate Your Certificates

Locally, you’ll need to create the following certificates and keys:

  • A certificate authority (CA) key pair (ca.cert and ca.key)
  • A client key pair for the root user
  • A node key pair for each node, issued to its IP addresses and any common names the machine uses
Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.
  1. Create a certs directory:

    $ mkdir certs
    
  2. Create the CA key pair:

    $ cockroach cert create-ca \
    --ca-cert=certs/ca.cert \
    --ca-key=certs/ca.key
    
  3. Create a client key pair for the root user:

    $ cockroach cert create-client \
    root \
    --ca-cert=certs/ca.cert \
    --ca-key=certs/ca.key \
    --cert=certs/root.cert \
    --key=certs/root.key
    
  4. For each node, create a node key pair issued for all common names you might use to refer to the node, including:

    • <node internal IP address> which is the instance’s Internal IP.
    • <node external IP address> which is the instance’s External IP address.
    • <node hostname> which is the instance’s hostname. You can find this by SSHing into a server and running hostname. For many AWS EC2 servers, this is ip- followed by the internal IP address delimited by dashes; e.g., ip-172-31-18-168.
    • <other common names for node> which include any domain names you point to the instance.
    • localhost and 127.0.0.1
    $ cockroach cert create-node \
    <node internal IP address> \
    <node external IP address> \
    <node hostname>  \
    <other common names for node> \
    localhost \
    127.0.0.1 \
    --ca-cert=certs/ca.cert \
    --ca-key=certs/ca.key \
    --cert=certs/<node name>.cert \
    --key=certs/<node name>.key
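
As a convenience, the default EC2 hostname can usually be derived from the internal IP address. A small sketch, assuming AWS's standard ip-a-b-c-d naming convention (verify with hostname on the instance):

```shell
# Derive the default EC2 hostname from an internal IP address
# (assumes AWS's standard ip-<dashed internal IP> naming convention).
internal_ip="172.31.18.168"
ec2_hostname="ip-$(echo "$internal_ip" | tr . -)"
echo "$ec2_hostname"   # → ip-172-31-18-168
```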
    
  5. Upload the certificates to each node:

    # Create the certs directory:
    $ ssh -i <path to AWS .pem> <username>@<node external IP address> "mkdir certs"
    
    # Upload the CA certificate, client (root) certificate and key, and node certificate and key:
    $ scp -i <path to AWS .pem> \
    certs/ca.cert \
    certs/root.cert \
    certs/root.key \
    certs/<node name>.cert \
    certs/<node name>.key \
    <username>@<node external IP address>:~/certs
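
With many nodes, the per-node upload can be scripted. A sketch, assuming your node names and external IP addresses are listed in pairs (the name/IP pairs below are hypothetical; the .pem path and username are placeholders):

```shell
# Upload the CA cert plus each node's certificate and key in a loop.
# Substitute your own node names and external IP addresses.
for pair in "node1 198.51.100.1" "node2 198.51.100.2" "node3 198.51.100.3"; do
  set -- $pair
  name=$1; ip=$2
  ssh -i <path to AWS .pem> <username>@"$ip" "mkdir -p certs"
  scp -i <path to AWS .pem> \
    certs/ca.cert \
    certs/root.cert certs/root.key \
    certs/"$name".cert certs/"$name".key \
    <username>@"$ip":~/certs
done
```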
    

Step 4. Set up the First Node

  1. SSH to your instance:

    $ ssh -i <path to AWS .pem> <username>@<node1 external IP address>
    
  2. Install the latest CockroachDB binary:

    # Get the latest CockroachDB tarball.
    $ wget https://binaries.cockroachdb.com/cockroach-latest.linux-amd64.tgz
    
    # Extract the binary.
    $ tar -xf cockroach-latest.linux-amd64.tgz  \
    --strip=1 cockroach-latest.linux-amd64/cockroach
    
    # Move the binary.
    $ sudo mv cockroach /usr/local/bin
    
  3. Start a new CockroachDB cluster with a single node, specifying the location of certificates and the address at which other nodes can reach it:

    $ cockroach start --background \
    --ca-cert=certs/ca.cert \
    --cert=certs/<node1 name>.cert \
    --key=certs/<node1 name>.key \
    --advertise-host=<node1 internal IP address>
    

At this point, your cluster is live and operational but contains only a single node. Next, scale your cluster by setting up additional nodes that will join the cluster.

Step 5. Set up Additional Nodes

  1. SSH to your instance:

    $ ssh -i <path to AWS .pem> <username>@<additional node external IP address>
    
  2. Install CockroachDB from our latest binary:

    # Get the latest CockroachDB tarball.
    $ wget https://binaries.cockroachdb.com/cockroach-latest.linux-amd64.tgz
    
    # Extract the binary.
    $ tar -xf cockroach-latest.linux-amd64.tgz  \
    --strip=1 cockroach-latest.linux-amd64/cockroach
    
    # Move the binary.
    $ sudo mv cockroach /usr/local/bin
    
  3. Start a new node that joins the cluster using the first node’s internal IP address:

    $ cockroach start --background  \
    --ca-cert=certs/ca.cert \
    --cert=certs/<node name>.cert \
    --key=certs/<node name>.key \
    --advertise-host=<node internal IP address> \
    --join=<node1 internal IP address>:26257
    

Repeat these steps for each instance you want to use as a node.

Step 6. Test Your Cluster

To test your distributed, multi-node cluster, use the built-in SQL client to create a new database on one node. That database will then be accessible from all of the nodes in your cluster.

  1. SSH to your first node:

    $ ssh -i <path to AWS .pem> <username>@<node1 external IP address>
    
  2. Launch the built-in SQL client and create a database:

    $ cockroach sql --ca-cert=certs/ca.cert --cert=certs/root.cert --key=certs/root.key
    
    When issuing cockroach commands against a secure cluster, you must include the --ca-cert flag, as well as the client's --cert and --key flags.
    > CREATE DATABASE securenodetest;
    
  3. In another terminal window, SSH to another node:

    $ ssh -i <path to AWS .pem> <username>@<node3 external IP address>
    
  4. Launch the built-in SQL client:

    $ cockroach sql --ca-cert=certs/ca.cert --cert=certs/root.cert --key=certs/root.key
    
  5. View the cluster’s databases, which will include securenodetest:

    > SHOW DATABASES;
    
    +--------------------+
    |      Database      |
    +--------------------+
    | crdb_internal      |
    | information_schema |
    | securenodetest     |
    | pg_catalog         |
    | system             |
    +--------------------+
    (5 rows)
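
To further confirm that the nodes share data, you could, for example, write a row from one node's SQL shell and read it back from the other (the table and values here are illustrative):

```
> CREATE TABLE securenodetest.kv (k INT PRIMARY KEY, v STRING);
> INSERT INTO securenodetest.kv VALUES (1, 'replicated');
```

Then, in the SQL shell on the other node:

```
> SELECT v FROM securenodetest.kv WHERE k = 1;
```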
    

Step 7. View the Admin UI

View your cluster’s Admin UI by going to https://<any node's external IP address>:8080.

On this page, go to the following tabs on the left:

  • Nodes to ensure all of your nodes successfully joined the cluster
  • Databases to ensure securenodetest is listed
You can also use Prometheus and other third-party, open source tools to monitor and visualize cluster metrics and send notifications based on specified rules. For more details, see Monitor CockroachDB with Prometheus.

Use the Database

Now that your deployment is working, you can:

  1. Implement your data model.
  2. Create users and grant them privileges.
  3. Connect your application.

