What's New in v20.1.5

This version of CockroachDB is no longer supported. For more details, see the Release Support Policy.

August 31, 2020

This page lists additions and changes in v20.1.5 since v20.1.4.

  • For a comprehensive summary of features in v20.1, see the v20.1 GA release notes.
  • To upgrade to the latest production release of CockroachDB, see this article.

A denial-of-service (DoS) vulnerability is present in CockroachDB v20.1.0 - v20.1.10 due to a bug in protobuf. This is resolved in CockroachDB v20.1.11 and later releases. When upgrading is not an option, users should audit their network configuration to verify that the CockroachDB HTTP port is not available to untrusted clients. We recommend blocking the HTTP port behind a firewall.

For more information, including other affected versions, see Technical Advisory 58932.


CockroachDB introduced a critical bug in the v20.1.4 release that affects UPSERT and INSERT … ON CONFLICT DO UPDATE SET x = excluded.x statements involving more than 10,000 rows. All deployments running CockroachDB v20.1.4 and v20.1.5 are affected. A fix is included in v20.1.6.

For more information, see Technical Advisory 54418.


Cockroach Labs has discovered a bug relating to incremental backups in CockroachDB v20.1.0 - v20.1.13. If a backup coincides with an in-progress index creation (backfill), RESTORE, or IMPORT, a subsequent incremental backup may not include all of the indexed, restored, or imported data.

Users are advised to upgrade to v20.1.15 or later, which resolves this issue.

For more information, including other affected versions, see Technical Advisory 63162.


SQL language changes

  • Reduced memory used by table scans containing JSON data. #53318

Bug fixes

  • Fixed an internal error that could occur when an aggregate function argument contained a correlated subquery with another aggregate function referencing the outer scope. This now returns an appropriate user-friendly error, "aggregate function calls cannot be nested". #52142
  • Previously, subtracting an interval of months from a TIMESTAMP/DATE/TIMESTAMPTZ whose day of the month is greater than 28 could subtract an additional year. This bug is now fixed. #52156
  • Previously, CockroachDB could return incorrect results on queries that encountered ReadWithinUncertaintyInterval errors. This bug is now fixed. #52045
  • Fixed instances of slow plans for prepared queries involving CTEs or foreign key checks. #52205
  • Large write requests no longer have a chance of erroneously throwing a "transaction with sequence has a different value" error. #52267
  • Type OIDs in the result metadata were incorrect for the bit, bpchar, char(n), and varchar(n) types, and the corresponding array types. They are now correct. #52351
  • CockroachDB now prevents deadlocks on connection close with an open user transaction and temporary tables. #52326
  • Fixed a bug that could prevent schema changes for up to 5 minutes when using the COPY protocol. #52455
  • Executing a large number of statements in a transaction without committing could previously crash a CockroachDB server. This bug is now fixed. #52402
  • Fixed a bug that caused the temporary object cleaner to get stuck trying to remove objects it mistakenly identified as temporary. No persistent data was deleted; the cleaner simply returned an error because it misclassified certain persistent data as temporary. #52662
  • Previously, CockroachDB would erroneously restart the execution of empty, unclosed portals after they had been fully exhausted. This bug is now fixed. #52443
  • Fixed a bug causing the Google Cloud API client used by BACKUP, RESTORE and IMPORT to leak memory when interacting with Google Cloud Storage. #53229
  • CockroachDB no longer displays a value for gc.ttlseconds if not set. #52813
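The month-subtraction fix above (#52156) guards against a classic calendar-arithmetic pitfall: naively decrementing the month and letting an out-of-range day overflow during normalization can land the result in the wrong month or, with a buggy year borrow, the wrong year. A minimal Python sketch of safe month subtraction with day clamping (the function `subtract_months` is illustrative, not CockroachDB's implementation):

```python
import calendar
from datetime import date

def subtract_months(d: date, months: int) -> date:
    """Subtract `months` from `d`, clamping the day to the target month's length.

    A naive approach that produces an intermediate like "Feb 30" and then
    normalizes it can drift into the wrong month; a buggy borrow from the
    year field can even shift the result by a full year.
    """
    # Work in zero-based month arithmetic so the year borrow is exact.
    total = d.year * 12 + (d.month - 1) - months
    year, month = divmod(total, 12)
    month += 1
    # Clamp the day: e.g., March 31 minus one month becomes February 28/29.
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(subtract_months(date(2020, 3, 31), 1))   # → 2020-02-29 (leap year)
print(subtract_months(date(2020, 1, 30), 2))   # → 2019-11-30 (year borrow)
```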

Performance improvements

  • Queries no longer block during planning if cached table statistics have become stale and the new statistics have not yet been loaded. Instead, the stale statistics are used for planning until the new statistics have been loaded. This improves performance because it prevents latency spikes that may occur if there is a delay in loading the new statistics. #52191
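The behavior described above is a form of stale-while-revalidate caching: the planner keeps using the cached statistics and triggers a refresh in the background rather than blocking on the reload. A simplified Python sketch of the pattern (the `StatsCache` class and its loader are illustrative, not CockroachDB internals):

```python
import threading

class StatsCache:
    """Serve cached table statistics immediately; refresh stale entries in the background."""

    def __init__(self, loader):
        self._loader = loader          # callable that fetches fresh stats (may be slow)
        self._lock = threading.Lock()
        self._stats = None
        self._stale = True
        self._refreshing = False

    def get(self):
        """Return stats without blocking on a refresh, unless nothing is cached yet."""
        with self._lock:
            if self._stats is not None:
                if self._stale and not self._refreshing:
                    # Kick off at most one background reload; keep serving stale stats.
                    self._refreshing = True
                    threading.Thread(target=self._refresh, daemon=True).start()
                return self._stats      # possibly stale, but no latency spike
        # Cold cache: only the very first lookup loads synchronously.
        fresh = self._loader()
        with self._lock:
            self._stats, self._stale = fresh, False
            return self._stats

    def mark_stale(self):
        with self._lock:
            self._stale = True

    def _refresh(self):
        fresh = self._loader()
        with self._lock:
            self._stats, self._stale, self._refreshing = fresh, False, False
```

Callers always get an answer in cache-lookup time; the cost of reloading statistics is paid off the query path, which is exactly what eliminates the planning latency spikes.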


This release includes 31 merged PRs by 15 authors.
