What's New in v21.1.9

September 20, 2021

Downloads

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Docker image

$ docker pull cockroachdb/cockroach:v21.1.9

Operational changes

  • A new cluster setting, sql.mutations.max_row_size.log, was added, which controls large row logging. Whenever a row larger than this size is written (or a single column family, if multiple column families are in use), a LargeRow event is logged to the SQL_PERF channel (or a LargeRowInternal event is logged to SQL_INTERNAL_PERF if the row was added by an internal query). This could occur for INSERT, UPSERT, UPDATE, CREATE TABLE AS, CREATE INDEX, ALTER TABLE, ALTER INDEX, IMPORT, or RESTORE statements. SELECT, DELETE, TRUNCATE, and DROP are not affected by this setting. This setting is disabled by default. #69946
  • A new cluster setting, sql.mutations.max_row_size.err, was added, which limits the size of rows written to the database (or individual column families, if multiple column families are in use). Statements trying to write a row larger than this will fail with a code 54000 (program_limit_exceeded) error. Internal queries writing a row larger than this will not fail, but will log a LargeRowInternal event to the SQL_INTERNAL_PERF channel. This limit is enforced for INSERT, UPSERT, and UPDATE statements. CREATE TABLE AS, CREATE INDEX, ALTER TABLE, ALTER INDEX, IMPORT, and RESTORE will not fail with an error, but will log LargeRowInternal events to the SQL_INTERNAL_PERF channel. SELECT, DELETE, TRUNCATE, and DROP are not affected by this limit. Note that existing rows violating the limit cannot be updated, unless the update shrinks the size of the row below the limit, but can be selected, deleted, altered, backed up, and restored. For this reason, we recommend using the accompanying setting sql.mutations.max_row_size.log in conjunction with SELECT pg_column_size() queries to detect and fix any existing large rows before lowering sql.mutations.max_row_size.err (see the example after this list). This setting is disabled by default. #69946
  • The new cluster settings sql.mutations.max_row_size.{log|err} were renamed to sql.guardrails.max_row_size_{log|err} for consistency with other settings and metrics. #69946
  • Added four new metrics: sql.guardrails.max_row_size_log.count, sql.guardrails.max_row_size_log.count.internal, sql.guardrails.max_row_size_err.count, and sql.guardrails.max_row_size_err.count.internal. These metrics are incremented whenever a large row violates the corresponding sql.guardrails.max_row_size_{log|err} limit. #69946
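
As a minimal sketch of how these guardrails might be used together (both settings are disabled by default; the limits, table, and column names below are illustrative only):

-- Log any row larger than 1 MiB and reject any row larger than 8 MiB (illustrative values).
> SET CLUSTER SETTING sql.guardrails.max_row_size_log = '1MiB';
> SET CLUSTER SETTING sql.guardrails.max_row_size_err = '8MiB';

-- Before lowering max_row_size_err, look for existing rows that would exceed the
-- new limit; "users", "id", and "payload" are placeholder names.
> SELECT id, pg_column_size(id) + pg_column_size(payload) AS approx_row_bytes
  FROM users
  ORDER BY approx_row_bytes DESC
  LIMIT 10;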

DB Console changes

  • A CES survey link component was added to support collecting client feedback. #68517

Bug fixes

  • Fixed a bug where running IMPORT PGDUMP with a UDT would result in a null pointer exception. This change makes it fail gracefully. #69249
  • Fixed a bug where the schedules.backup.succeeded and schedules.backup.failed metrics would sometimes not be updated. #69256
  • The correct format code will now be returned when using COPY FROM ... BINARY. #69278
  • Fixed a bug where COPY FROM ... BINARY would return an error if the input data was split across different messages. #69278
  • Fixed a bug where COPY FROM ... CSV would require each CopyData message to be split at the boundary of a record. The COPY protocol allows messages to be split at arbitrary points. #69278
  • Fixed a bug where COPY FROM ... CSV did not correctly handle octal byte escape sequences such as \011 when using a BYTES column. #69278
  • Fixed an oversight in the data generator for TPC-H which was causing a smaller number of distinct values to be generated for p_type and p_container in the part table than the spec calls for. #68710
  • Fixed a bug that was introduced in v21.1.5, which prevented nodes from being decommissioned in a cluster if the cluster had multiple nodes intermittently miss their liveness heartbeat. #68552
  • Fixed a bug introduced in v21.1 where CockroachDB could return an internal error when performing streaming aggregation in some edge cases. #69181
  • Fixed a bug that created non-partial unique constraints when a user attempted to create a partial unique constraint in ALTER TABLE statements. #68745
  • Fixed a bug where a DROP VIEW ... CASCADE could incorrectly result in "table ... is already being dropped" errors. #68618
  • Fixed a bug introduced in v21.1 where the output of SHOW CREATE TABLE on tables with hash-sharded indexes was not round-trippable. Executing the output would not create an identical table. This has been fixed by showing the CHECK constraints that are automatically created for these indexes in the output of SHOW CREATE TABLE (see the example after this list). #69695
  • Fixed an internal error or "invalid cast" error in some cases involving cascading updates. #69180
  • Fixed a bug with cardinality estimation in the optimizer that was introduced in v21.1.0. This bug could cause inaccurate row count estimates in queries involving tables with a large number of null values. As a result, it was possible that the optimizer could choose a suboptimal plan. #69125
  • Fixed a bug introduced in v20.2 that caused internal errors with set operations, like UNION, and columns with tuple types that contained constant NULL values. #69271
  • Added backwards compatibility between v21.1.x cluster versions and the v21.1.8 cluster version. #69894
  • Fixed a bug where table stats collection issued via EXPLAIN ANALYZE statements or via CREATE STATISTICS statements without specifying the AS OF SYSTEM TIME option could fail with a "flow: memory budget exceeded" error. #69588
  • Fixed a bug where an internal error or a crash could occur when some crdb_internal built-in functions took string-like type arguments (e.g. name). #69993
  • Fixed all broken links to the documentation from the DB Console. #70117
  • Previously, when using ALTER PRIMARY KEY on a REGIONAL BY ROW table, the copied unique index from the old primary key would not have the correct zone configurations applied. This bug is now fixed. Users who have encountered this bug should re-create the index. #69681
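
As a brief sketch of the hash-sharded index fix described above (the table and column names are hypothetical):

-- Hash-sharded indexes are experimental in v21.1 and must be enabled per session.
> SET experimental_enable_hash_sharded_indexes = on;
> CREATE TABLE events (
    id UUID PRIMARY KEY,
    ts TIMESTAMPTZ NOT NULL,
    INDEX (ts) USING HASH WITH BUCKET_COUNT = 8
  );
-- The statement printed by SHOW CREATE TABLE now includes the CHECK constraint on
-- the hidden shard column, so re-executing it creates an identical table.
> SHOW CREATE TABLE events;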

Performance improvements

  • Lookup joins on partial indexes with virtual computed columns are now considered by the optimizer, resulting in more efficient query plans in some cases. #69110
  • Updated the optimizer cost model so that, all else being equal, the optimizer prefers plans in which LIMIT operators are pushed as far down the tree as possible. This can reduce the number of rows that need to be processed by higher operators in the plan tree, improving performance, as illustrated below. #69977
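
As an illustration of where to observe this change (the query and table names are hypothetical, and the exact plan shape depends on the schema and statistics):

-- EXPLAIN shows where the limit operator sits in the plan tree; with the updated
-- cost model it should appear as far down the tree as the query allows.
> EXPLAIN SELECT o.id, c.name
  FROM orders AS o JOIN customers AS c ON o.customer_id = c.id
  ORDER BY o.created_at DESC
  LIMIT 10;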

Contributors

This release includes 40 merged PRs by 19 authors.
