November 12, 2019
With the release of CockroachDB v19.2, we’ve made a variety of performance, resiliency, and usability improvements. Check out a comprehensive summary of the most significant user-facing changes and then upgrade to CockroachDB v19.2. You can also read more about these changes in the v19.2 blog post or sign up for our live webinar on November 19th.
In addition to v19.2, we're launching Cockroach University, a free, online learning tool for developers and architects who want to gain a fundamental understanding of distributed databases and deep knowledge of CockroachDB’s functionality and architecture. The first course, “Getting Started with CockroachDB,” uses videos, exercises, and quizzes to teach the key characteristics of a distributed SQL database, walking you through how to spin up a CockroachDB instance, run basic queries, and test out CockroachDB’s unique capabilities. For additional information or to register, visit university.cockroachlabs.com.
$ docker pull cockroachdb/cockroach:v19.2.0
This section summarizes the most significant, user-facing changes in v19.2.0. For a complete list of features and changes, including bug fixes and performance improvements, see the release notes for previous testing releases.
- Core features
- Enterprise features
- Backward-incompatible changes
- Known limitations
In addition to the features below, SQL support for timestamp objects with a precision value, present in earlier testing releases of v19.2, has been reverted.
CockroachCloud will soon offer a completely self-service account creation and management workflow as well as single-region cluster creation in AWS and GCP using a credit card. Sign up for the beta program here.
These features are freely available in the core version and do not require an enterprise license.
The core version of CockroachDB v19.2 uses the Business Source License instead of the Apache 2.0 License. For more information on why we changed our licensing approach and some practical questions and implications, see our blog post and Licensing FAQs. The full details of the license can be found on GitHub.
|Parallel Commits||CockroachDB's new optimized atomic commit protocol cuts the commit latency of a transaction in half, from two rounds of consensus down to one. Combined with transaction pipelining, parallel commits brings the latency incurred by common OLTP transactions to near the theoretical minimum: the sum of all read latencies plus one round of consensus latency. This especially lowers latency for transactions involving secondary indexes both in multi-region and single-region deployments.|
|Vectorized Query Execution||CockroachDB now supports column-oriented ("vectorized") query execution for operations that are guaranteed to execute in memory on tables with supported data types.|
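As a sketch, the vectorized engine can be controlled per session via the `vectorize` session variable (the exact set of accepted values may vary by release; `auto` is the default in v19.2):

```sql
-- Check the current setting:
SHOW vectorize;

-- Force the vectorized engine on for this session, including
-- operations not covered by the 'auto' heuristics:
SET vectorize = 'experimental_on';

-- Fall back to the row-oriented engine:
SET vectorize = 'off';
```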
|Bulk Import Improvements||CockroachDB now supports efficiently loading large amounts of CSV data into existing tables using the new `IMPORT INTO` statement. Improvements have also been made to the previously existing `IMPORT` statement.|
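As a minimal sketch, assuming an existing table `users` and a CSV file hosted at an illustrative URL:

```sql
-- Append rows from a CSV file into an existing table.
-- The table name, column list, and URL below are illustrative.
IMPORT INTO users (id, city, name)
    CSV DATA ('https://example.com/users.csv');
```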
|Data Replication Reports||Several new and updated system tables can help you query the status of your cluster's data replication, data placement, and replication zone constraint conformance. For example, you can use these reports to see what data is under-replicated or unavailable, to show which of your localities (if any) are vulnerable to data unavailability in more common failure scenarios, and to see if any of your cluster's data placement constraints are being violated.|
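For example, the new system tables can be queried directly (table names as introduced in v19.2):

```sql
-- Ranges that are under-replicated, over-replicated, or unavailable,
-- broken down by replication zone:
SELECT * FROM system.replication_stats;

-- Localities whose failure would make some data unavailable:
SELECT * FROM system.replication_critical_localities;

-- Placement constraints that are currently being violated:
SELECT * FROM system.replication_constraint_stats;
```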
|Multi-Use CTEs||Common Table Expressions can now be re-used multiple times in the same query via a `WITH` clause.|
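For instance, the following query (with an illustrative `orders` table) references the same CTE twice, which earlier versions did not allow:

```sql
-- 'recent_orders' is defined once and referenced twice below.
WITH recent_orders AS (
    SELECT customer_id, amount
    FROM orders
    WHERE placed > now() - INTERVAL '1 day'
)
SELECT *
FROM recent_orders
WHERE amount > (SELECT avg(amount) FROM recent_orders);
```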
|Comprehensive Cost-Based Optimizer||All SQL queries now leverage the cost-based optimizer to choose the lowest cost plans, including DDL statements and window functions that previously leveraged the legacy heuristic planner.|
|Ordering Aggregations||Non-commutative aggregate functions are sensitive to the order in which rows are processed. This order can now be controlled with an `ORDER BY` clause inside the aggregate function call.|
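As a sketch with an illustrative `employees` table, the Postgres-style `ORDER BY` inside the aggregate pins down the otherwise unspecified concatenation order:

```sql
-- Concatenate names in alphabetical order rather than in
-- whatever order rows happen to be processed:
SELECT string_agg(name, ', ' ORDER BY name) AS all_names
FROM employees;
```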
|Index Hints for `DELETE` and `UPDATE`||It's now possible to force the use of a specific index for deleting rows and updating rows.|
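As an illustration, using CockroachDB's `table@index` hint syntax with a hypothetical `orders` table and secondary index `orders_by_date`:

```sql
-- Force the secondary index to be used when deleting:
DELETE FROM orders@orders_by_date WHERE placed < '2018-01-01';

-- The same hint works for updates:
UPDATE orders@orders_by_date SET status = 'expired'
WHERE placed < '2019-01-01';
```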
|Streaming with JDBC||CockroachDB now provides limited support for Postgres wire-protocol cursors for implicit and explicit transactions executed to completion. The Java JDBC driver can use this protocol to stream queries with large result sets, providing much faster performance than paginating results with `LIMIT` and `OFFSET`.|
|Transaction Latency Graphs||The SQL Dashboard in the Admin UI now provides timeseries graphs of p90 and p99 transaction latencies to complement the per-statement metrics on the Statements page.|
||When using the
||It's now possible to add the
|Local Testing Improvements||CockroachDB v19.2 includes several usability improvements to running CockroachDB locally for SQL testing and app development. First, the `cockroach demo` command can now start a temporary, in-memory cluster preloaded with a sample dataset (e.g., `cockroach demo movr`) and open an interactive SQL shell against it.|
|Cluster Startup Improvements||There are now distinct methods for starting single-node and multi-node clusters. For multi-node clusters, start each node with the `cockroach start` command and the `--join` flag, and then run the one-time `cockroach init` command to initialize the cluster.
For single-node clusters, use the new `cockroach start-single-node` command, which starts a one-node cluster with no separate initialization step required.|
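As a sketch, with illustrative store names and addresses:

```shell
# Multi-node clusters: start each node with --join listing the nodes,
# then run the one-time init command against any of them.
$ cockroach start --insecure --store=node1 --listen-addr=localhost:26257 \
    --join=localhost:26257,localhost:26258,localhost:26259
$ cockroach init --insecure --host=localhost:26257

# Single-node clusters (alternative): one command, no --join, no init.
$ cockroach start-single-node --insecure --listen-addr=localhost:26257
```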
|Built-in Workload Improvements||Cockroach Labs' fictional vehicle-sharing app, MovR, is now available as a sample workload via the `cockroach workload` command.|
|Data Type Improvements||
Also, you can now use the
|Interactive SQL Shell Commands||Within the interactive SQL shell, the
|Removing Manual Splits||The new `ALTER TABLE ... UNSPLIT AT` statement lets you remove a manually created range split point from a table or index.|
||Because CockroachDB only supports
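For example, with a hypothetical table `t` that was manually split earlier:

```sql
-- A manual split point created earlier at value 100...
ALTER TABLE t SPLIT AT VALUES (100);

-- ...can now be removed:
ALTER TABLE t UNSPLIT AT VALUES (100);
```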
|Showing Row Location||The new `SHOW RANGE ... FOR ROW` statement shows information about the range containing a specific row of a table or index, which is useful for verifying how data is distributed across the cluster.|
|Viewing Node Locality||It's now easy to retrieve the localities of nodes for setting zone configuration constraints via the new `SHOW LOCALITY` statement.|
|Viewing Complete Jobs||By default, the `SHOW JOBS` statement now returns only jobs that are currently running or have completed within the last 12 hours.|
|Viewing Comments for Virtual Tables||Using the `WITH COMMENT` option of `SHOW TABLES`, you can now view descriptions of virtual tables.|
These features require an enterprise license. Register for a 30-day trial license here, or consider testing enterprise features locally using the `cockroach demo` CLI command, which starts an in-memory CockroachDB cluster with a temporary enterprise license pre-loaded.
|Backup & Restore Improvements||CockroachDB now supports locality-aware backup and restore such that each node writes to and restores from files in its locality. This can reduce cloud storage data transfer costs by keeping data within cloud regions and help you comply with data domiciling requirements.
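As a sketch of the locality-aware syntax (bucket names and localities below are illustrative, and cloud credentials are omitted):

```sql
-- Each node backs up to the URI whose COCKROACH_LOCALITY matches its
-- own locality; nodes with no match write to the 'default' URI.
BACKUP TO
    ('s3://backups/default?COCKROACH_LOCALITY=default',
     's3://backups-us-west/west?COCKROACH_LOCALITY=region%3Dus-west');
```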
|Geo-Partitioning Improvements||CockroachDB v19.2 includes several usability improvements to geo-partitioning. First, it's now possible to name partitions identically across the indexes of a table (e.g., a `us_west` partition on both the primary index and a secondary index) and then set a zone configuration for all of them at once with `ALTER PARTITION <partition> OF INDEX <table>@*`.
Next, it's easy to retrieve the localities of nodes for setting zone configuration constraints via the new `SHOW LOCALITY` statement.
There are now also several ways to view the details of partitions and confirm they are in effect, from the output of the new `SHOW PARTITIONS` statement to `SHOW CREATE TABLE`.
Finally, it's now much easier to efficiently query partitioned data; when filtering by the column directly following the partitioned prefix in the primary key, the cost-based optimizer creates a query plan that scans each partition in parallel, rather than performing a costly sequential scan of the entire table. Filtering by the partition value itself can further improve performance by limiting the scan to the specific partition(s) that contain the data that you are querying.
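As an illustration, assuming a `users` table with primary key `(city, id)` partitioned by `city`:

```sql
-- Inspect the table's partitions and their zone configurations:
SHOW PARTITIONS FROM TABLE users;

-- Filtering on the column directly following the partitioned prefix
-- scans the partitions in parallel rather than sequentially:
SELECT * FROM users WHERE id = 'c28f66b6';

-- Also filtering on the partition column limits the scan to the
-- single partition holding that data:
SELECT * FROM users WHERE city = 'seattle' AND id = 'c28f66b6';
```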
Before upgrading to CockroachDB v19.2.0, be sure to review the following backward-incompatible changes and adjust your application as necessary.
- The `IMPORT` SQL statement no longer accepts quotes inside unquoted CSV fields.
- `CONFIGURE ZONE` SQL statements now fail if the user does not have sufficient privileges. If the target is the `system` database, or a table in the `system` database, the user must have an admin role. For all other databases and tables, the user must have the `CREATE` privilege on the target database or table.

  This change might be backward-incompatible for users running scripted `CONFIGURE ZONE` statements with restricted permissions. To add the necessary permissions, use `GRANT <roles>` as a user with an admin role. For example, to grant a user the admin role, run `GRANT admin TO <user>`. To grant the `CREATE` privilege on a database or table, run `GRANT CREATE ON [DATABASE | TABLE] <name> TO <user>`.
- `FLOAT` and `INT` columns of less than the maximum width are now returned as their own type via the binary protocol. For example, an `INT4` column is returned in 32 bits over the pgwire binary protocol instead of 64 bits.
For information about new and unresolved limitations in CockroachDB v19.2, with suggested workarounds where applicable, see Known Limitations.
|Performance Benchmarking||Added an overview of CockroachDB's performance profiles (scaling, throughput, latency), based on Cockroach Labs' extensive testing using industry-standard benchmarks like TPC-C and Sysbench, as well as detailed instructions for reproducing our TPC-C benchmarking results at different scales.|
|Multi-Region Deployment||Updated the tutorial on getting low latency reads and writes in a multi-region cluster to feature two of the most important multi-region data topologies for dramatically reducing the impact of network latency, Geo-Partitioned Replicas and Duplicate Indexes.|
|Orchestration with Kubernetes||Expanded the tutorial on Kubernetes single-region deployment to cover running on Amazon's hosted EKS and naming CSR naming requirements for secure deployments. Also updated and expanded the instructions on using Helm.|
|Client-Side Transaction Retries||Updated and simplified the client-side transaction logic in the Java, Python, and Go getting started tutorials and code samples. Also added pseudocode to help with the implementation of this logic in other languages as well as instructions for authors of database drivers and ORMs who would like to implement client-side retries in their database driver or ORM for maximum efficiency and ease of use by application developers.|
|SQL Tuning with `EXPLAIN`||Added a tutorial on how to use `EXPLAIN` to identify and resolve common SQL query performance issues.|
|Testing with MovR Dataset||Added an overview of MovR, CockroachDB's fictional vehicle-sharing dataset and application, and updated several SQL pages and examples to use the built-in MovR dataset, for example, Learn CockroachDB SQL.|
|Migration from Oracle||Added guidance on migrating from Oracle, including the process of converting schema and exporting data for loading into CockroachDB.|
|App Deployment on CockroachCloud||Added a tutorial on running a sample To-Do app in Kubernetes with CockroachCloud as the datastore. The app is written in Python with Flask as the web framework and SQLAlchemy for working with SQL data, and the code is open-source and forkable.|