
Ideal workloads
System of Record: Optimized for transactional workloads that require strong consistency and global distribution, serving industries such as AI, cybersecurity, e-commerce and retail, financial services, fintech/payments, gaming, quantitative trading and research, and online travel
Mixed OLTP plus light analytics, reporting, and BI, serving industries such as financial services, e-commerce and retail, B2B applications, the public sector, and healthcare

Architecture
Distributed SQL, shared-nothing, peer-to-peer: All nodes are symmetric, and any node can handle reads and writes. Architected to span globally distributed datacenters, yet still valuable in a single datacenter. Clusters use distributed consensus: no matter where data lives, every node can access data anywhere in the cluster
Single-server, single-instance, monolithic single primary: The primary node handles all writes; read replicas can serve reads but depend on the primary for updates

Resilience
Yes - High Availability: Survives node, disk, rack, and region failures automatically via Raft consensus, with zero data loss (RPO = 0); naturally resilient to outages, with granular row-level control over data placement
No: Failover is manual or requires external tools; keeping primary and secondary in sync can be complex; asynchronous replication risks data loss during failover

Scale
Horizontal (Scale-out) - Automatic: Increase storage and throughput capacity linearly, simply by adding more nodes
Vertical (Scale-up): Scales by adding hardware resources (CPU/RAM) to the single primary node. Once vertical scale is maxed out, you must shift to horizontal scale, which means manual sharding

Auto-Sharding (Dynamic re-sharding online)
Yes - Native & Automatic: Automatically shards data into ranges and dynamically splits, merges, and rebalances online across nodes based on load and size
No - Manual: The partition layout is fixed at table creation; sharding data across multiple nodes requires manual partitioning or third-party extensions, and there is no dynamic resharding without downtime
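For contrast, a sketch of PostgreSQL's declarative partitioning, where the partition layout is declared up front (table and column names are hypothetical):

```sql
-- Partition layout must be declared at table-creation time;
-- adding or rebalancing partitions later is a manual operation.
CREATE TABLE events (
    id      BIGINT NOT NULL,
    region  TEXT   NOT NULL,
    payload JSONB
) PARTITION BY LIST (region);

CREATE TABLE events_us PARTITION OF events FOR VALUES IN ('us');
CREATE TABLE events_eu PARTITION OF events FOR VALUES IN ('eu');
```

Spreading these partitions across multiple servers additionally requires an extension or an external sharding layer.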

Availability including Multi-Cloud
Available on all public clouds (e.g., AWS, Google Cloud, Azure); can run a single logical cluster spanning multiple clouds. Can run on prem/local, and cloud plus prem hybrid deployments
Managed offerings are tied to a single public cloud provider (e.g., AWS only or Google Cloud only); a single cluster cannot span clouds

Multi-region
Active-Active: Read/write from any node in any region; built-in low-latency local access patterns, and Survival Goals (e.g., ALTER DATABASE ... SURVIVE REGION FAILURE) declare fault-tolerance intent
Active-Passive: Managing region-specific data requires a complex setup of cascading replicas, logic in the application layer, or extensions. Infrastructure-based: survival is determined by the architecture (e.g., the number of standby nodes provisioned)
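The Survival Goals mentioned above can be sketched in CockroachDB SQL (database and region names are hypothetical; assumes nodes are running in those regions):

```sql
-- Make the database multi-region, then declare fault-tolerance intent.
ALTER DATABASE app SET PRIMARY REGION "us-east1";
ALTER DATABASE app ADD REGION "eu-west1";
ALTER DATABASE app ADD REGION "ap-southeast1";

-- Survive the loss of an entire region with no data loss.
ALTER DATABASE app SURVIVE REGION FAILURE;
```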

Data residency
Row-Level Control: Can pin specific rows to specific geographic regions (e.g., "User A's data stays in the EU") using the REGIONAL BY ROW table locality
Table and Instance-Level Control: Requires either separate database instances per region or complex manual partitioning
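A sketch of row-level residency in CockroachDB (table name and region are hypothetical; assumes a multi-region database):

```sql
-- Each row is homed in the region stored in its hidden crdb_region column.
ALTER TABLE users SET LOCALITY REGIONAL BY ROW;

-- Pin a specific user's data to the EU.
UPDATE users SET crdb_region = 'eu-west1' WHERE id = 42;
```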

Hybrid and multi-cloud deployment
Native: Can run single logical cluster spanning multiple clouds (e.g., AWS, Google Cloud, Azure) or hybrid (On-prem + Cloud) seamlessly
Complex: Technically possible but requires VPNs, complex networking, and external management tools to sync current state

Automatic Geo Partitioning
Yes - Native: Automatically moves data to the region where it is most frequently accessed: "data follows user." Supports geo-partitioning with zone configurations for data locality, compliance, and low latency
No - Manual: Requires setting up specific table partitions and manually routing application traffic to correct partition; no support for geo-partitioning or multi-region clusters

Transactional consistency
Distributed ACID with serializable isolation by default guarantees strict consistency across all nodes and regions using distributed consensus
ACID: Strict consistency on single primary node and eventual consistency on async read replicas

Distributed ACID Transactions
Yes: Fully supported with serializable isolation using distributed consensus (Raft Protocol); strong ACID guarantees
No: Not native. Requires Two-Phase Commit (2PC) orchestration or extensions, and even then supported only across partitions, not across nodes

Transaction Isolation Levels
Serializable (strongest standard isolation level) plus Read Committed
Read Committed only
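Selecting the isolation level uses standard SQL syntax in both systems; a minimal sketch:

```sql
-- CockroachDB defaults to SERIALIZABLE; PostgreSQL defaults to READ COMMITTED.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... reads and writes here execute under serializable isolation ...
COMMIT;

-- Or set the session default:
SET default_transaction_isolation = 'serializable';
```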

Multi-Active
Yes: Fully multi-active and multi-region; any node in the cluster can serve reads, writes, and connection requests
No: Writes must go to single Primary node

Required downtime
Near Zero: Online schema changes, rolling upgrades, and cluster expansion occur without taking database offline
Moderate: Major version upgrades and some schema changes require maintenance windows or logical replication setups

Follower Reads
Supports follower/replica reads with Bounded (controlled) Staleness, allowing low-latency local reads from nearby replicas while keeping strong global ordering
Read Replicas: Reads from replicas are eventually consistent; Staleness is undefined/variable depending on replication lag
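In CockroachDB, a bounded-staleness follower read is a per-query opt-in (table and column names are hypothetical):

```sql
-- Read slightly stale data from the nearest replica instead of the
-- leaseholder, trading bounded staleness for lower latency.
SELECT *
FROM orders
AS OF SYSTEM TIME follower_read_timestamp()
WHERE customer_id = 42;
```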

Migrations
MOLT (Migrate Off Legacy Technology) Toolkit & change data capture (CDC): MOLT handles schema conversion and validation, while CDC streams data out of the source database
Logical Replication: Native logical replication and tool support (pg_dump, various ETL tools)

Vector Search
Advanced (pgvector-compatible): Vector similarity search is built into the core platform and is compatible with the industry-standard pgvector interface
Advanced (via pgvector): The pgvector extension is the industry standard for vector similarity search
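A sketch of pgvector-style similarity search (table name and dimensions are hypothetical; on PostgreSQL this first requires CREATE EXTENSION vector):

```sql
CREATE TABLE items (
    id        BIGINT PRIMARY KEY,
    embedding VECTOR(3)
);

INSERT INTO items VALUES (1, '[1,2,3]'), (2, '[4,5,6]');

-- Nearest neighbors by Euclidean distance (the <-> operator).
SELECT id FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
```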

Change Data Capture (CDC)
Native (Core): CHANGEFEED command enables scalable, resilient streaming of data changes to Kafka/Cloud Storage
Logical Decoding: Native support via WAL (Write-Ahead Log) decoding, but requires external connectors
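A sketch of the CHANGEFEED command streaming a table's changes to Kafka (table name and broker address are hypothetical):

```sql
-- Streams every change to the orders table to Kafka, with per-row
-- update timestamps and resolved-timestamp progress messages.
CREATE CHANGEFEED FOR TABLE orders
INTO 'kafka://broker:9092'
WITH updated, resolved;
```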

Foreign Keys Support
Enforced across the distributed cluster at commit time
Standard enforcement

SQL Compatibility
Wire Compatible (High): Uses PG wire protocol; strong ANSI SQL with complex queries, joins, window functions, triggers, stored procedures, and UDFs
Strong ANSI SQL support including joins, UDFs, stored procedures, and triggers
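Because both speak the PG wire protocol and strong ANSI SQL, standard analytic queries run unchanged on either; for example, a window function (table and column names are hypothetical):

```sql
-- Per-region running total, computed with a window function.
SELECT
  region,
  order_total,
  SUM(order_total) OVER (PARTITION BY region ORDER BY placed_at)
    AS running_total
FROM orders;
```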

Triggers & Deferrable Constraints
Supports triggers and deferrable constraints
Supports triggers and deferrable constraints
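A sketch of a deferrable constraint, checked at commit time rather than per statement (table and constraint names are hypothetical):

```sql
ALTER TABLE orders
ADD CONSTRAINT fk_customer
FOREIGN KEY (customer_id) REFERENCES customers (id)
DEFERRABLE INITIALLY DEFERRED;

BEGIN;
-- Insert the order before its customer exists;
-- the foreign key is only checked at COMMIT.
INSERT INTO orders (id, customer_id) VALUES (1, 42);
INSERT INTO customers (id) VALUES (42);
COMMIT;
```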

Stored Procedures
Mature: PL/pgSQL and other languages such as Python and Perl support deep logic capabilities
Mature: PL/pgSQL and other languages such as Python and Perl support deep logic capabilities
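A minimal PL/pgSQL procedure, valid on both platforms (table, columns, and amounts are hypothetical):

```sql
CREATE PROCEDURE transfer(from_id INT, to_id INT, amount DECIMAL)
LANGUAGE plpgsql AS $$
BEGIN
    -- Move funds between accounts inside one atomic transaction.
    UPDATE accounts SET balance = balance - amount WHERE id = from_id;
    UPDATE accounts SET balance = balance + amount WHERE id = to_id;
END;
$$;

CALL transfer(1, 2, 100.00);
```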

Pricing
Commercial Enterprise: Simple, straightforward pricing, plus the ability to pin data to a location to avoid egress costs; free for single-node/dev use, with a free Community tier
Free to download and run; initial costs are for infrastructure or third-party managed services; ongoing support costs from external providers can quickly add up

Freedom
Free to run anywhere and across multiple clouds; Business Source License (BSL) but Source Available; full commercial-grade support directly from CockroachDB
Open source license gives freedom to use, modify, and resell without restriction, but users must rely on non-guaranteed, voluntary support from the open source community