The most highly evolved database on the planet. Born in the cloud. Architected to scale and survive.
CockroachDB is in production across thousands of modern cloud applications and services
Trust CockroachDB as the primary data store for even your most mission-critical apps and services
Store and process mission-critical data with limitless scale, guaranteed uptime and correctness
Manage transactional data and perform basic analytics in the database
Capabilities
CockroachDB delivers Distributed SQL, combining the familiarity of relational data with limitless, elastic cloud scale, bulletproof resilience… and more.
CockroachDB makes scale so simple, you don't have to think about it. It automatically distributes data and workload demand. Break free from manual sharding and complex workarounds.
Downtime isn’t an option, and data loss destroys companies. CockroachDB is architected to handle unpredictability and survive machine, datacenter, and region failures.
Correct data is a must for mission-critical applications. CockroachDB provides guaranteed ACID-compliant transactions, so you can trust your data is always right.
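In practice, serializable transactions mean clients should be ready to retry a transaction the database aborts with SQLSTATE 40001 (a serialization failure). A minimal retry-loop sketch, assuming a psycopg2-style connection where the driver attaches the SQLSTATE to the exception as `pgcode`; the `op` callable and retry count are placeholders:

```python
def run_transaction(conn, op, max_retries=3):
    """Run op(conn) inside a transaction, retrying on SQLSTATE 40001."""
    for _ in range(max_retries):
        try:
            result = op(conn)
            conn.commit()  # transaction committed: done
            return result
        except Exception as e:
            conn.rollback()  # abort this attempt before deciding what to do
            if getattr(e, "pgcode", None) != "40001":
                raise  # not a retryable serialization failure
    raise RuntimeError("transaction did not commit after %d attempts" % max_retries)
```

Retrying the whole operation (rather than resuming mid-transaction) is safe precisely because the aborted attempt left no visible effects.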
Where your data lives is critical in distributed systems. CockroachDB lets you pin rows of data to a specific location so you can reduce transaction latencies and comply with data privacy regulations.
Developers
Build fast with a familiar interface and your favorite development environments. CockroachDB speaks standard SQL and supports many data access and Object Relational Mapping (ORM) tools.
import logging

import psycopg2

conn = psycopg2.connect(
    "postgresql://maxroach@localhost:26257/bank?sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")

def create_accounts(conn):
    with conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"
        )
        cur.execute("UPSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)")
        logging.debug("create_accounts(): status message: %s", cur.statusmessage)
    conn.commit()

def delete_accounts(conn):
    with conn.cursor() as cur:
        cur.execute("DELETE FROM bank.accounts")
        logging.debug("delete_accounts(): status message: %s", cur.statusmessage)
    conn.commit()

create_accounts(conn)
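Reading the seeded rows back uses the same cursor API. A small follow-on sketch; the `print_balances` helper is illustrative and not part of the example above:

```python
def print_balances(conn):
    # Fetch all accounts in id order, print each balance, and return the rows.
    with conn.cursor() as cur:
        cur.execute("SELECT id, balance FROM accounts ORDER BY id")
        rows = cur.fetchall()
    for account_id, balance in rows:
        print("account %s: balance %s" % (account_id, balance))
    return rows
```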
public static void main(String[] args) throws SQLException {
    // Configure the database connection.
    PGSimpleDataSource ds = new PGSimpleDataSource();
    ds.setServerName("localhost");
    ds.setPortNumber(26257);
    ds.setDatabaseName("bank");
    ds.setUser("maxroach");
    ds.setPassword(null);
    ds.setSsl(true);
    ds.setSslMode("require");
    ds.setSslRootCert("certs/ca.crt");
    ds.setSslCert("certs/client.maxroach.crt");
    ds.setSslKey("certs/client.maxroach.key.pk8");
    ds.setReWriteBatchedInserts(true); // add `rewriteBatchedInserts=true` to pg connection string
    ds.setApplicationName("BasicExample");

    // Set up the 'accounts' table.
    createAccounts(ds);

    // Insert a few accounts "by hand", using INSERTs on the backend.
    int updatedAccounts = updateAccounts(ds);
    System.out.printf("BasicExampleDAO.updateAccounts:\n => %s total updated accounts\n", updatedAccounts);
}

public static void createAccounts(PGSimpleDataSource ds) throws SQLException {
    String sql = "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))";
    try (Connection connection = ds.getConnection();
         Statement stmt = connection.createStatement()) {
        stmt.executeUpdate(sql);
    }
}

public static int updateAccounts(PGSimpleDataSource ds) throws SQLException {
    String sql1 = "INSERT INTO accounts (id, balance) VALUES (1, 1000)";
    String sql2 = "INSERT INTO accounts (id, balance) VALUES (2, 250)";
    try (Connection connection = ds.getConnection();
         Statement stmt = connection.createStatement()) {
        int updated = stmt.executeUpdate(sql1);
        updated += stmt.executeUpdate(sql2);
        return updated;
    }
}
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	config, err := pgx.ParseConfig("postgresql://maxroach@localhost:26257/bank?sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
	if err != nil {
		log.Fatal("error configuring the database: ", err)
	}
	config.TLSConfig.ServerName = "localhost"

	// Connect to the "bank" database.
	conn, err := pgx.ConnectConfig(context.Background(), config)
	if err != nil {
		log.Fatal("error connecting to the database: ", err)
	}
	defer conn.Close(context.Background())

	// Create the "accounts" table.
	if _, err := conn.Exec(context.Background(),
		"CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
		log.Fatal(err)
	}

	// Insert two rows into the "accounts" table.
	if _, err := conn.Exec(context.Background(),
		"INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
		log.Fatal(err)
	}
}
require 'pg'

# Connect to the "bank" database.
conn = PG.connect(
user: 'maxroach',
dbname: 'bank',
host: 'localhost',
port: 26257,
sslmode: 'require',
sslrootcert: 'certs/ca.crt',
sslkey: 'certs/client.maxroach.key',
sslcert: 'certs/client.maxroach.crt'
)
# Create the "accounts" table.
conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
# Insert two rows into the "accounts" table.
conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
Architects
CockroachDB allows you to architect a data layer that will scale with your application no matter how big and how far it may expand.
DevOps & SRE
CockroachDB is designed to simplify cloud scale deployment and allow you to avoid planned and unplanned downtime.
Architects
CockroachDB was architected and built from the ground up for Kubernetes and microservices.
It is the only database designed to deliver on the core distributed principles of atomicity, scale, and survival, so you can manage your database in Kubernetes, not alongside it.
Get CockroachDB on Kubernetes
Capabilities
CockroachDB is an evolution of the database, constructed from the ground up to scale and survive while delivering on all the promises of a traditional relational database. It is the future of data.
Use familiar relational concepts via most existing PostgreSQL tools and drivers
Get guaranteed atomicity, consistency, isolation, and durability at the row level
Store semi-structured data for business flexibility without sacrificing consistency
Store and index Spatial data types with familiar, PostGIS-compatible SQL syntax
Get automatic query optimizations and flexibility to tune SQL manually
Update table schema without any downtime or negative consequences for your application
Get the most performant query plan out of thousands via automatically generated statistics
Serve low-latency, consistent, and current reads from the closest data
Serve low-latency, consistent, historical reads from the closest data
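Historical reads use CockroachDB's AS OF SYSTEM TIME clause; with the follower_read_timestamp() helper, the nearest replica can serve the slightly stale result. A sketch against the accounts table from the code examples above, using a psycopg2-style cursor:

```python
# Historical ("follower") read: the timestamp trades a few seconds of
# staleness for a read that any nearby replica can serve.
FOLLOWER_READ = (
    "SELECT id, balance FROM accounts "
    "AS OF SYSTEM TIME follower_read_timestamp()"
)

def read_balances_stale(conn):
    with conn.cursor() as cur:
        cur.execute(FOLLOWER_READ)
        return cur.fetchall()
```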
Automatic and continuous rebalancing of data between the nodes of a cluster
Automatic repair of missing data after failures, using unaffected replicas as sources
Use “replication zones” to control the number and location of specific sets of data - from cluster-wide to row-level
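Replication zones are configured in SQL. A sketch pinning the accounts table's replicas, where the replica count and region constraint are illustrative values, not recommendations:

```python
# Replication-zone change for one table; values are illustrative.
CONFIGURE_ZONE = (
    "ALTER TABLE accounts CONFIGURE ZONE USING "
    "num_replicas = 5, constraints = '[+region=us-east1]'"
)

def configure_zone(conn):
    # Issue the zone change through an ordinary cursor.
    with conn.cursor() as cur:
        cur.execute(CONFIGURE_ZONE)
    conn.commit()
```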
Progress can be made as long as a majority of nodes is available, so the cluster keeps serving, with effectively zero recovery time, if a node goes down
Span a cluster across regions and use data topologies to get the right latency and resiliency
Leverage CockroachDB’s environment-agnostic, no-dependency binary to run across cloud platforms, or hybrid across clouds and on-prem data centers
Efficiently back up your cluster to popular cloud services such as AWS S3, Google Cloud Storage, or NFS for the unlikely case that data needs to be restored
Use row-level controls to keep data close to users for low-latency reads and writes and regulatory compliance
Upgrade to new versions of CockroachDB without interrupting a cluster’s overall health and operations
Schedule backups directly from your cluster, and rest easy knowing your backups are resilient
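A backup schedule is created with a single SQL statement. A sketch, where the schedule label, destination URL, and cadence are placeholders and the exact options vary by CockroachDB version:

```python
# Nightly full-backup schedule; the destination URL is a placeholder.
CREATE_BACKUP_SCHEDULE = (
    "CREATE SCHEDULE nightly FOR BACKUP INTO "
    "'s3://bucket/backups?AUTH=implicit' "
    "RECURRING '@daily' FULL BACKUP ALWAYS"
)
```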
Visualize the geographic configuration of a cluster on a world map with real-time cluster metrics
Get essential metrics about a cluster’s overall health and performance via the Admin UI, CLI, and various programmable endpoints
Group users into roles to simplify the management of SQL privileges for authenticated users.
Encrypt all intra-cluster and client-cluster network communication via TLS 1.2.
Encrypt all CockroachDB data on disk using AES in counter mode, with 128-, 192-, or 256-bit keys.
Use the Generic Security Services API (GSSAPI) to integrate with existing LDAP directory services within your organization.
Quickly get large sets of data out of CockroachDB in a format that can be ingested by downstream systems.
Output the SQL statements required to recreate tables, views, and sequences.
Pull exported time series metrics into popular third-party monitoring and graphing tools
Efficiently feed row-level changes into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing
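Changefeeds are also created in SQL. A sketch streaming changes on the accounts table to Kafka, where the broker address is a placeholder; note that Kafka sinks require an enterprise license:

```python
# Stream row-level changes on `accounts` to Kafka; broker is a placeholder.
CREATE_CHANGEFEED = (
    "CREATE CHANGEFEED FOR TABLE accounts "
    "INTO 'kafka://broker.example.com:9092' "
    "WITH updated, resolved"
)
```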
Efficiently import entire tables and add rows in bulk to existing tables.
Free, community-based guidance on database usage and troubleshooting
Cockroach Labs offers a public Slack channel for support, and a private channel is also available for customer communication
Paid 24/7 access to dedicated staff
Access the source code to understand how the system works and how to extend it to meet your requirements.
No credit card. No commitment.
1. Connect to our secure connection string
2. Instantly start reading and writing data
3. Never think about ops or capacity again