Deploy CockroachDB on AWS EC2

Warning:
CockroachDB v1.1 is no longer supported. For more details, see the Release Support Policy.

This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Amazon's AWS EC2 platform, using AWS's managed load balancing service to distribute client traffic.

If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead; see the instructions for deploying an insecure cluster on AWS.

Requirements

  • You must have CockroachDB installed locally. This is necessary for generating and managing your deployment's certificates.

  • You must have SSH access to each machine. This is necessary for distributing and starting CockroachDB binaries.

  • Your network configuration must allow TCP communication on the following ports:

    • 26257 for intra-cluster and client-cluster communication
    • 8080 to expose your Admin UI

Recommendations

  • If you plan to use CockroachDB in production, carefully review the Production Checklist.

  • Decide how you want to access your Admin UI:

    Access Level Description
    Partially open Set a firewall rule to allow only specific IP addresses to communicate on port 8080.
    Completely open Set a firewall rule to allow all IP addresses to communicate on port 8080.
    Completely closed Set a firewall rule to disallow all communication on port 8080. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI (see the example tunnel command after this list).
  • All instances running CockroachDB should be members of the same Security Group.
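
If you choose to keep port 8080 completely closed, an SSH tunnel is a simple way to reach the Admin UI from your local machine. This is a minimal sketch, assuming you have SSH access to a node and the Admin UI is on its default port 8080:

$ ssh -L 8080:localhost:8080 <username>@<node address>

While the tunnel is open, the Admin UI is available at https://localhost:8080 (a secure cluster serves the UI over HTTPS).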

Step 1. Configure your network

CockroachDB requires TCP communication on two ports:

  • 26257 for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes
  • 8080 for exposing your Admin UI

You can create these rules using Security Groups' Inbound Rules. An equivalent AWS CLI sketch follows the tables below.

Inter-node and load balancer-node communication

Field Recommended Value
Type Custom TCP Rule
Protocol TCP
Port Range 26257
Source The name of your security group (e.g., sg-07ab277a)

Admin UI

Field Recommended Value
Type Custom TCP Rule
Protocol TCP
Port Range 8080
Source Your network's IP ranges

Application data

Field Recommended Value
Type Custom TCP Rules
Protocol TCP
Port Range 26257
Source Your application's IP ranges
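
If you prefer to script these rules, the aws CLI can create equivalent inbound rules. This is a hedged sketch: the security group ID and the IP ranges shown are placeholders, not values from your account:

# Inter-node and load balancer-node communication (source is the security group itself):
$ aws ec2 authorize-security-group-ingress --group-id sg-07ab277a --protocol tcp --port 26257 --source-group sg-07ab277a

# Admin UI, limited to your network's IP ranges:
$ aws ec2 authorize-security-group-ingress --group-id sg-07ab277a --protocol tcp --port 8080 --cidr 192.0.2.0/24

# Application data, limited to your application's IP ranges:
$ aws ec2 authorize-security-group-ingress --group-id sg-07ab277a --protocol tcp --port 26257 --cidr 198.51.100.0/24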

Step 2. Create instances

Create an instance for each node you plan to have in your cluster.

For more details, see Hardware Recommendations and Cluster Topology.

Step 3. Synchronize clocks

CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node.

Amazon provides the Amazon Time Sync Service, which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second.
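
For example, on Amazon Linux you can point chrony at the service's link-local endpoint. This is a minimal sketch; the configuration file path and service name vary by distribution (e.g., /etc/chrony/chrony.conf and chrony on Ubuntu):

# Add the Amazon Time Sync Service as the preferred time source:
$ echo "server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4" | sudo tee -a /etc/chrony.conf

# Restart chrony and confirm that the source is being used:
$ sudo systemctl restart chronyd
$ chronyc sources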

Step 4. Set up load balancing

Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing:

  • Performance: Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).

  • Reliability: Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.

AWS offers fully-managed load balancing to distribute traffic between instances.

  1. Add AWS load balancing. Be sure to:
    • Set forwarding rules to route TCP traffic from the load balancer's port 26257 to port 26257 on the nodes (see the CLI sketch after this list).
    • Configure health checks to use HTTP port 8080 and path /health.
  2. Note the provisioned IP Address for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster.
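
For example, with a Network Load Balancer, the forwarding and health check settings above map to a target group that forwards TCP 26257 and health-checks nodes over HTTP on port 8080 at /health. This is a hedged sketch using the aws CLI; the target group name and VPC ID are placeholders:

$ aws elbv2 create-target-group \
--name cockroachdb \
--protocol TCP \
--port 26257 \
--vpc-id <VPC ID> \
--health-check-protocol HTTP \
--health-check-port 8080 \
--health-check-path /health
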
Note:
If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.

Step 5. Generate certificates

You can use either cockroach cert commands or openssl commands to generate security certificates. This section features the cockroach cert commands.

Locally, you'll need to create the following certificates and keys:

  • A certificate authority (CA) key pair (ca.crt and ca.key).
  • A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers.
  • A client key pair for the root user. You'll use this to run a sample workload against the cluster as well as some cockroach client commands from your local machine.
Tip:
Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.
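
To gather these details from AWS, a query like the following lists the instance IDs and internal and external IP addresses for the instances in your security group. This is a sketch; the security group ID is a placeholder:

$ aws ec2 describe-instances \
--filters "Name=instance.group-id,Values=sg-07ab277a" \
--query "Reservations[].Instances[].[InstanceId,PrivateIpAddress,PublicIpAddress]" \
--output table
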
  1. Install CockroachDB on your local machine, if you haven't already.

  2. Create two directories:

    $ mkdir certs
    
    $ mkdir my-safe-directory
    
    • certs: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes.
    • my-safe-directory: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes.
  3. Create the CA certificate and key:

    $ cockroach cert create-ca \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances:

    $ cockroach cert create-node \
    <node1 internal IP address> \
    <node1 external IP address> \
    <node1 hostname>  \
    <other common names for node1> \
    localhost \
    127.0.0.1 \
    <load balancer IP address> \
    <load balancer hostname>  \
    <other common names for load balancer instances> \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  5. Upload certificates to the first node:

    # Create the certs directory:
    $ ssh <username>@<node1 address> "mkdir certs"
    
    # Upload the CA certificate and node certificate and key:
    $ scp certs/ca.crt \
    certs/node.crt \
    certs/node.key \
    <username>@<node1 address>:~/certs
    
  6. Delete the local copy of the node certificate and key:

    $ rm certs/node.crt certs/node.key
    
    Note:
    This is necessary because the certificates and keys for additional nodes will also be named node.crt and node.key. As an alternative to deleting these files, you can run the next cockroach cert create-node commands with the --overwrite flag.
  7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances:

    $ cockroach cert create-node \
    <node2 internal IP address> \
    <node2 external IP address> \
    <node2 hostname>  \
    <other common names for node2> \
    localhost \
    127.0.0.1 \
    <load balancer IP address> \
    <load balancer hostname>  \
    <other common names for load balancer instances> \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  8. Upload certificates to the second node:

    # Create the certs directory:
    $ ssh <username>@<node2 address> "mkdir certs"
    
    # Upload the CA certificate and node certificate and key:
    $ scp certs/ca.crt \
    certs/node.crt \
    certs/node.key \
    <username>@<node2 address>:~/certs
    
  9. Repeat steps 6 - 8 for each additional node.

  10. Create a client certificate and key for the root user:

    $ cockroach cert create-client \
    root \
    --certs-dir=certs \
    --ca-key=my-safe-directory/ca.key
    
  11. Upload certificates to the machine where you will run a sample workload:

    # Create the certs directory:
    $ ssh <username>@<workload address> "mkdir certs"
    
    # Upload the CA certificate and client certificate and key:
    $ scp certs/ca.crt \
    certs/client.root.crt \
    certs/client.root.key \
    <username>@<workload address>:~/certs
    

    In later steps, you'll also use the root user's certificate to run cockroach client commands from your local machine. If you might also want to run cockroach client commands directly on a node (e.g., for local debugging), you'll need to copy the root user's certificate and key to that node as well.
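
    Before moving on, you can optionally sanity-check the certificates with openssl, for example by confirming that the client certificate chains to your CA and has the expected subject and validity window:

    $ openssl verify -CAfile certs/ca.crt certs/client.root.crt

    $ openssl x509 -in certs/client.root.crt -noout -subject -dates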

Step 6. Start nodes

You can start the nodes manually or automate the process using systemd.

To start the nodes manually, complete the following steps for each initial node of your cluster:

Note:
After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
  1. SSH to the machine where you want the node to run.

  2. Download the CockroachDB archive for Linux, and extract the binary:

    $ curl https://binaries.cockroachdb.com/cockroach-v1.1.9.linux-amd64.tgz \
    | tar -xz
    
  3. Copy the binary into the PATH:

    $ cp -i cockroach-v1.1.9.linux-amd64/cockroach /usr/local/bin/
    

    If you get a permissions error, prefix the command with sudo.

  4. Run the cockroach start command:

    $ cockroach start \
    --certs-dir=certs \
    --host=<node1 address> \
    --locality=<key-value pairs> \
    --cache=.25 \
    --max-sql-memory=.25 \
    --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
    --background
    

    This command primes the node to start, using the following flags:

    Flag Description
    --certs-dir Specifies the directory where you placed the ca.crt file and the node.crt and node.key files for the node.
    --host Specifies the hostname or IP address to listen on for intra-cluster and client communication, as well as to identify the node in the Admin UI. If it is a hostname, it must be resolvable from all nodes, and if it is an IP address, it must be routable from all nodes.

    If you want the node to listen on multiple interfaces, leave --host out.

    If you want the node to communicate with other nodes on an internal address (e.g., within a private network) while listening on all interfaces, leave --host out and set the --advertise-host flag to the internal address.
    --locality Key-value pairs that describe the location of the node, e.g., country, region, datacenter, rack, etc. It is recommended to set --locality when deploying across multiple datacenters or when there is otherwise high latency between nodes. It is also required to use certain enterprise features. For more details, see Locality.
    --cache
    --max-sql-memory
    Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing (see Recommended Production Settings for more details).
    --join Identifies the address and port of 3-5 of the initial nodes of the cluster.
    --background Starts the node in the background so you gain control of the terminal to issue more commands.

    For other flags not explicitly set, the command uses default values. For example, the node stores data in --store=cockroach-data, binds internal and client communication to --port=26257, and binds Admin UI HTTP requests to --http-port=8080. To set these options manually, see Start a Node.

  5. Repeat these steps for each additional node that you want in your cluster.

Alternatively, to start the nodes with systemd, complete the following steps for each initial node of your cluster:

Note:
After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
  1. SSH to the machine where you want the node to run. Ensure you are logged in as the root user.

  2. Download the CockroachDB archive for Linux, and extract the binary:

    $ curl https://binaries.cockroachdb.com/cockroach-v1.1.9.linux-amd64.tgz \
    | tar -xz
    
  3. Copy the binary into the PATH:

    $ cp -i cockroach-v1.1.9.linux-amd64/cockroach /usr/local/bin/
    

    If you get a permissions error, prefix the command with sudo.

  4. Create the Cockroach directory:

    $ mkdir /var/lib/cockroach
    
  5. Create a Unix user named cockroach:

    $ useradd cockroach
    
  6. Move the certs directory to the cockroach directory.

    $ mv certs /var/lib/cockroach/
    
  7. Change the ownership of the cockroach directory to the user cockroach:

    $ chown -R cockroach:cockroach /var/lib/cockroach
    
  8. Download the sample configuration template:

    $ wget -q https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v1.1/prod-deployment/securecockroachdb.service
    

    Alternatively, you can create the file yourself and copy the following configuration into it:

    [Unit]
    Description=Cockroach Database cluster node
    Requires=network.target
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/cockroach
    ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 --cache=.25 --max-sql-memory=.25
    TimeoutStopSec=60
    Restart=always
    RestartSec=10
    StandardOutput=syslog
    StandardError=syslog
    SyslogIdentifier=cockroach
    User=cockroach
    [Install]
    WantedBy=default.target
    
    

    Save the file in the /etc/systemd/system/ directory.
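
    After saving a new unit file, reload systemd so that it picks up the unit. This assumes you kept the file name securecockroachdb.service:

    $ systemctl daemon-reload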

  9. Customize the sample configuration template for your deployment:

    Specify values for the following flags in the sample configuration template:

    Flag Description
    --join Identifies the address and port of 3-5 of the initial nodes of the cluster.
    --host Specifies the hostname or IP address to listen on for intra-cluster and client communication, as well as to identify the node in the Admin UI. If it is a hostname, it must be resolvable from all nodes, and if it is an IP address, it must be routable from all nodes.

    If you want the node to listen on multiple interfaces, leave --host empty.

    If you want the node to communicate with other nodes on an internal address (e.g., within a private network) while listening on all interfaces, leave --host empty and set the --advertise-host flag to the internal address.
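
    For example, the customized ExecStart line might look like the following, with placeholder internal addresses standing in for your own:

    ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --host=10.0.1.12 --join=10.0.1.12:26257,10.0.1.13:26257,10.0.1.14:26257 --cache=.25 --max-sql-memory=.25
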
  10. Start the CockroachDB cluster:

    $ systemctl start securecockroachdb
    
  11. Repeat these steps for each additional node that you want in your cluster.

Note:

systemd handles node restarts in case of node failure. To stop a node without systemd restarting it, run systemctl stop securecockroachdb.
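
To check on a node started through systemd, systemctl and journalctl show the service state and its logs. This assumes the unit name securecockroachdb:

$ systemctl status securecockroachdb

$ journalctl -u securecockroachdb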

Step 7. Initialize the cluster

On your local machine, run the cockroach init command to complete the node startup process and have them join together as a cluster:

$ cockroach init --certs-dir=certs --host=<address of any node>

This command requires the following flags:

Flag Description
--certs-dir Specifies the directory where you placed the ca.crt file and the client.root.crt and client.root.key files for the root user.
--host Specifies the address of any node in the cluster.

After running this command, each node prints helpful details to the standard output, such as the CockroachDB version, the URL for the Admin UI, and the SQL URL for clients.
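
To confirm that all nodes have joined the cluster, you can run the cockroach node status command from your local machine, using the same flags as cockroach init:

$ cockroach node status --certs-dir=certs --host=<address of any node>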

Step 8. Test your cluster

CockroachDB replicates and distributes data for you behind the scenes and uses a Gossip protocol to enable each node to locate data across the cluster.

To test this, use the built-in SQL client locally as follows:

  1. On your local machine, launch the built-in SQL client:

    $ cockroach sql --certs-dir=certs --host=<address of any node>
    

    This command requires the following flags:

    Flag Description
    --certs-dir Specifies the directory where you placed the ca.crt file and the client.root.crt and client.root.key files for the root user.
    --host Specifies the address of any node in the cluster.
  2. Create a securenodetest database:

    > CREATE DATABASE securenodetest;
    
  3. Use \q or CTRL-C to exit the SQL shell.

  4. Launch the built-in SQL client against a different node:

    $ cockroach sql --certs-dir=certs --host=<address of different node>
    
  5. View the cluster's databases, which will include securenodetest:

    > SHOW DATABASES;
    
    +--------------------+
    |      Database      |
    +--------------------+
    | crdb_internal      |
    | information_schema |
    | securenodetest     |
    | pg_catalog         |
    | system             |
    +--------------------+
    (5 rows)
    
  6. Use \q or CTRL-C to exit the SQL shell.
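
Because the node certificates were also issued to the load balancer's address in Step 5, you can run the same test through the load balancer to confirm that client traffic is being routed to the nodes:

$ cockroach sql --certs-dir=certs --host=<load balancer address>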

Step 9. Set up monitoring and alerting

Despite CockroachDB's various built-in safeguards against failure, it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.

For details about available monitoring options and the most important events and metrics to alert on, see Monitoring and Alerting.
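
Each node also exposes its metrics in Prometheus format over the HTTP port, which most monitoring systems can scrape directly. A quick way to confirm that the endpoint is reachable, assuming the default HTTP port 8080 and the CA certificate from Step 5:

$ curl --cacert certs/ca.crt https://<node address>:8080/_status/vars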

Step 10. Scale the cluster

You can start the nodes manually or automate the process using systemd.

To start the new nodes manually, complete the following steps for each node you want to add to the cluster:

  1. SSH to the machine where you want the node to run.

  2. Download the CockroachDB archive for Linux, and extract the binary:

    $ curl https://binaries.cockroachdb.com/cockroach-v1.1.9.linux-amd64.tgz \
    | tar -xz
    
  3. Copy the binary into the PATH:

    $ cp -i cockroach-v1.1.9.linux-amd64/cockroach /usr/local/bin/
    

    If you get a permissions error, prefix the command with sudo.

  4. Run the cockroach start command just like you did for the initial nodes:

    $ cockroach start \
    --certs-dir=certs \
    --host=<node4 address> \
    --locality=<key-value pairs> \
    --cache=.25 \
    --max-sql-memory=.25 \
    --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
    --background
    
  5. Update your load balancer to recognize the new node.
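
    If you are using an AWS load balancer, registering the new node is typically a matter of adding its instance to the target group. This is a sketch with placeholder values:

    $ aws elbv2 register-targets \
    --target-group-arn <target group ARN> \
    --targets Id=<node4 instance ID>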

Alternatively, to start the new nodes with systemd, complete the following steps for each node you want to add to the cluster:

  1. SSH to the machine where you want the node to run. Ensure you are logged in as the root user.

  2. Download the CockroachDB archive for Linux, and extract the binary:

    $ curl https://binaries.cockroachdb.com/cockroach-v1.1.9.linux-amd64.tgz \
    | tar -xz
    
  3. Copy the binary into the PATH:

    $ cp -i cockroach-v1.1.9.linux-amd64/cockroach /usr/local/bin/
    

    If you get a permissions error, prefix the command with sudo.

  4. Create the Cockroach directory:

    $ mkdir /var/lib/cockroach
    
  5. Create a Unix user named cockroach:

    $ useradd cockroach
    
  6. Move the certs directory to the cockroach directory.

    $ mv certs /var/lib/cockroach/
    
  7. Change the ownership of the cockroach directory to the user cockroach:

    $ chown -R cockroach:cockroach /var/lib/cockroach
    
  8. Download the sample configuration template:

    $ wget -q https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v1.1/prod-deployment/securecockroachdb.service
    

    Alternatively, you can create the file yourself and copy the following configuration into it:

    [Unit]
    Description=Cockroach Database cluster node
    Requires=network.target
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/cockroach
    ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 --cache=.25 --max-sql-memory=.25
    TimeoutStopSec=60
    Restart=always
    RestartSec=10
    StandardOutput=syslog
    StandardError=syslog
    SyslogIdentifier=cockroach
    User=cockroach
    [Install]
    WantedBy=default.target
    
    

    Save the file in the /etc/systemd/system/ directory.

  9. Customize the sample configuration template for your deployment:

    Specify values for the following flags in the sample configuration template:

    Flag Description
    --host Specifies the hostname or IP address to listen on for intra-cluster and client communication, as well as to identify the node in the Admin UI. If it is a hostname, it must be resolvable from all nodes, and if it is an IP address, it must be routable from all nodes.

    If you want the node to listen on multiple interfaces, leave --host empty.

    If you want the node to communicate with other nodes on an internal address (e.g., within a private network) while listening on all interfaces, leave --host empty and set the --advertise-host flag to the internal address.
    --join Identifies the address and port of 3-5 of the initial nodes of the cluster.
  10. Repeat these steps for each additional node that you want in your cluster.
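
On each new node, start the service as you did for the initial nodes, and then update your load balancer to recognize the node (for example, by registering the instance with your target group as shown above). This assumes you kept the unit name securecockroachdb:

$ systemctl start securecockroachdb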

Step 11. Use the database

Now that your deployment is working, you can:

  1. Implement your data model.
  2. Create users and grant them privileges.
  3. Connect your application. Be sure to connect your application to the load balancer, not to a CockroachDB node.

You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. For more information, see Configure Replication Zones.
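
For example, to raise the default replication factor from 3 to 5, CockroachDB v1.1 uses the cockroach zone set command. This is a sketch run from your local machine, using the root client certificate from Step 5:

$ echo 'num_replicas: 5' | cockroach zone set .default --certs-dir=certs --host=<address of any node> -f -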
