This page shows you how to reproduce CockroachDB's TPC-C performance benchmarking results on commodity AWS hardware. Across all scales, CockroachDB can process tpmC (new order transactions per minute) at near maximum efficiency. Start by choosing the scale you're interested in:
| Workload | Cluster size | Warehouses | Data size |
|----------|--------------|------------|-----------|
| Local | 3 nodes on your laptop | 10 | 2 GB |
| Small | 3 nodes on … | … | … |
| Medium | 15 nodes on … | … | … |
| Large | 81 nodes on … | … | … |
Before you begin
TPC-C provides the most realistic and objective measure for OLTP performance at various scale factors. Before you get started, consider reviewing what TPC-C is and how it is measured.
Make sure you have already installed CockroachDB.
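If you want to confirm that the binary is installed and on your PATH, one quick check is:

$ cockroach version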
Step 1. Start CockroachDB
The --insecure flag used in this tutorial is intended for non-production testing only. To run CockroachDB in production, use a secure cluster instead.
Use the cockroach start command to start 3 nodes:
$ cockroach start \
--insecure \
--store=tpcc-local1 \
--listen-addr=localhost:26257 \
--http-addr=localhost:8080 \
--join=localhost:26257,localhost:26258,localhost:26259 \
--background

$ cockroach start \
--insecure \
--store=tpcc-local2 \
--listen-addr=localhost:26258 \
--http-addr=localhost:8081 \
--join=localhost:26257,localhost:26258,localhost:26259 \
--background

$ cockroach start \
--insecure \
--store=tpcc-local3 \
--listen-addr=localhost:26259 \
--http-addr=localhost:8082 \
--join=localhost:26257,localhost:26258,localhost:26259 \
--background
Use the cockroach init command to perform a one-time initialization of the cluster:
$ cockroach init \
--insecure \
--host=localhost:26257
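If you'd like to verify that all 3 nodes have joined the cluster, one option is to check node status against any node:

$ cockroach node status \
--insecure \
--host=localhost:26257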
Step 2. Import the TPC-C dataset
Use cockroach workload to load the initial schema and data:
$ cockroach workload fixtures import tpcc \
--warehouses=10 \
'postgresql://root@localhost:26257?sslmode=disable'
This will load 2 GB of data for 10 "warehouses".
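To spot-check the import, one option is to count the rows in the warehouse table that the fixture creates; with --warehouses=10, it should report 10:

$ cockroach sql --insecure --host=localhost:26257 \
--execute='SELECT count(*) FROM tpcc.warehouse;'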
Step 3. Run the benchmark
Run the workload for ten "warehouses" of data for ten minutes:
$ cockroach workload run tpcc \
--warehouses=10 \
--ramp=3m \
--duration=10m \
'postgresql://root@localhost:26257?sslmode=disable'
You'll see per-operation statistics every second:
Initializing 20 connections...
Initializing 100 workers and preparing statements...
_elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
    1.0s        0            0.0            0.0      0.0      0.0      0.0      0.0 delivery
    1.0s        0            0.0            0.0      0.0      0.0      0.0      0.0 newOrder
...
  105.0s        0            0.0            0.2      0.0      0.0      0.0      0.0 delivery
  105.0s        0            4.0            1.8     44.0     46.1     46.1     46.1 newOrder
  105.0s        0            0.0            0.2      0.0      0.0      0.0      0.0 orderStatus
  105.0s        0            1.0            2.0     14.7     14.7     14.7     14.7 payment
  105.0s        0            0.0            0.2      0.0      0.0      0.0      0.0 stockLevel
...
For details about the tpcc options, use cockroach workload run tpcc --help. For details about other built-in load generators, use cockroach workload run --help.
Step 4. Interpret the results
Once the workload has finished running, you'll see a final output line:
_elapsed_______tpmC____efc__avg(ms)__p50(ms)__p90(ms)__p95(ms)__p99(ms)_pMax(ms)
  300.0s        121.6  94.6%     41.0     39.8     54.5     71.3     96.5    130.0
You will also see some audit checks and latency statistics for each individual query. For this run, some of those checks might indicate that they were SKIPPED due to insufficient data. For a more comprehensive test, run the workload for a longer duration (e.g., two hours). The tpmC (new order transactions per minute) number is the headline result, and efc ("efficiency") tells you how close CockroachDB gets to the theoretical maximum tpmC.
The TPC-C specification has p90 latency requirements on the order of seconds, but as you can see here, CockroachDB far surpasses that requirement, with p90 latencies in the tens of milliseconds.
Step 5. Clean up
When you're done with your test cluster, use the cockroach quit command to gracefully shut down each node.
$ cockroach quit --insecure --host=localhost:26257
$ cockroach quit --insecure --host=localhost:26258

Note:
For the last node, the shutdown process will take longer (about a minute) and will eventually force the node to stop. This is because, with only 1 of 3 nodes still online, a majority of replicas is no longer available, and so the cluster is no longer operational.
$ cockroach quit --insecure --host=localhost:26259
To restart the cluster at a later time, run the same cockroach start commands as earlier from the directory containing the nodes' data stores.
If you do not plan to restart the cluster, you may want to remove the nodes' data stores:
$ rm -rf tpcc-local1 tpcc-local2 tpcc-local3