How we built a 40x faster hash joiner using vectorized execution.
CockroachDB's consistency model fits somewhere between serializability and linearizability. We're proposing a new marketing phrase for CRDB's guarantees: no stale reads.
CockroachDB uses RocksDB for its storage engine because of RocksDB's rich feature set, which is necessary for a complex product like a distributed SQL database.
This post introduces transactional pipelining, which dramatically speeds up distributed transactions by reducing the latency cost of distributed consensus.
This blog covers the practical experience of running a distributed system across multiple Kubernetes clusters, including what makes it challenging and which solutions are available (some of which we run in production).
Customers rely on us to help navigate the complexities of the increasingly competitive cloud wars. This inspired the 2018 Cloud Computing Report, in which we benchmark the performance, latency, CPU, network, I/O, and cost of AWS and GCP.
CockroachDB 2.1 is 50x more scalable than Amazon Aurora at less than 2% of the price per tpmC. Read on to see performance benchmarks of CockroachDB 2.1, including the latest TPC-C results.
Product Manager Lakshmi Kannan uses Flex Fridays for professional development and learning the ins and outs of CockroachDB.
CockroachDB now supports importing MySQL database dump files: you can now import your data from the most popular open-source database to our modern, globally-distributed SQL database with a single command.
CockroachDB 2.1 includes a brand-new, built-from-scratch, cost-based SQL optimizer. This post explains what a cost-based SQL optimizer is, and tells the story of how we decided we really, really needed one.