What's New in v22.1

To upgrade to v22.1, see Upgrade to CockroachDB v22.1.

v22.1.5

Release Date: July 28, 2022

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Docker image

$ docker pull cockroachdb/cockroach:v22.1.5

SQL language changes

  • AS OF SYSTEM TIME now takes the time zone into account when converting to UTC. For example, 2022-01-01 08:00:00-04:00 is now treated the same as 2022-01-01 12:00:00, instead of being interpreted as 2022-01-01 08:00:00. #84663
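A minimal sketch of the new behavior (the table name t is illustrative):

```sql
-- With the fix, the UTC offset in the timestamp literal is honored, so
-- both statements read data as of the same instant (12:00 UTC):
SELECT * FROM t AS OF SYSTEM TIME '2022-01-01 08:00:00-04:00';
SELECT * FROM t AS OF SYSTEM TIME '2022-01-01 12:00:00';
```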

DB Console changes

  • Updated labels from "date range" to "time interval" on the time picker (custom option, preset title, and previous and next arrows). #84517
  • Removed View Statement Details link inside the Session Details page. #84502
  • Updated the message when there is no data on the selected time interval on the Statements and Transactions pages. #84623

Bug fixes

  • Fixed a conversion on the jobs endpoint so that the Jobs page no longer returns a 500 error when a job contains an error message with quotes. #84464
  • The 'Parse', 'Bind', and 'Execute' pgwire commands now return an error if they are used during an aborted transaction. COMMIT and ROLLBACK statements are still allowed during an aborted transaction. #84329
  • Sorting the plans table on the Statement Details page now works properly. #84627
  • Fixed a bug that could cause unique indexes to be unexpectedly dropped after running an ALTER PRIMARY KEY statement, if the new PK column set is a subset of the old PK column set. #84570
  • Fixed a bug where some statements in a batch would not get executed if the following conditions were met:
    • A batch of statements is sent in a single string.
    • A BEGIN statement appears in the middle of the batch.
    • The enable_implicit_transaction_for_batch_statements session variable is set to true. (This defaults to false in v22.1.) This bug was introduced in v22.1.2. #84593
  • Previously, CockroachDB could deadlock when evaluating analytical queries if multiple queries had to spill to disk at the same time. This is now fixed by making some of the queries error out instead. If you know that no deadlock is occurring and that some analytical queries that have spilled are simply taking too long, blocking other queries from spilling, you can adjust the newly introduced sql.distsql.acquire_vec_fds.max_retries cluster setting (set it to 0 to restore the previous behavior of waiting indefinitely until spilling resources become available). #84657
  • Fixed a bug where cluster restores of older backups would silently clobber system tables or fail to complete. #84904
  • Fixed a bug that was introduced in v21.2 that could cause increased memory usage when scanning a table with wide rows. #83966
  • Fixed a bug in the concat projection operator on arrays that incorrectly produced NULL output when an argument was NULL, even though the operator can handle NULL arguments and may produce a non-NULL result. #84615
  • Reduced foreground latency impact when performing changefeed backfills by adjusting the changefeed.memory.per_changefeed_limit cluster setting to 128MiB (Enterprise only). #84702
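If analytical queries that spill to disk now error out and you know no deadlock is possible, the retry behavior can be tuned with the cluster setting described above; a sketch:

```sql
-- Restore the pre-fix behavior of waiting indefinitely until spilling
-- resources open up (only safe if queries cannot deadlock):
SET CLUSTER SETTING sql.distsql.acquire_vec_fds.max_retries = 0;
```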

Contributors

This release includes 30 merged PRs by 17 authors.

v22.1.4

Release Date: July 19, 2022

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Docker image

$ docker pull cockroachdb/cockroach:v22.1.4

SQL language changes

  • crdb_internal.validate_ttl_scheduled_jobs and crdb_internal.repair_ttl_table_scheduled_job can now only be run by users with the admin role. #83972
  • txn_fingerprint_id has been added to crdb_internal.node_statement_statistics. The column is of type STRING and may be NULL. #84020
  • The sampled query telemetry log now includes session, transaction, and statement IDs, as well as the database name of the query. #84026
  • crdb_internal.compact_engine_spans can now only be run by users with the admin role. #84095

DB Console changes

  • Updated User column name to User Name and fixed High-water Timestamp column tooltip on the Jobs page. #83914
  • Added the ability to search for exact terms in order when wrapping a search in quotes. #84113

Bug fixes

  • A flush message sent during portal execution in the pgwire extended protocol no longer results in an error. #83955
  • Previously, virtual computed columns which were marked as NOT NULL could be added to new secondary indexes. Now, attempts to add such columns to a secondary index will result in an error. Note that such invalid columns can still be added to tables. Work to resolve that bug is tracked in #81675. #83551
  • Statement and transaction statistics are now properly recorded for implicit transactions with multiple statements. #84020
  • The SessionTransactionReceived session phase time is no longer recorded incorrectly (which caused large transaction times to appear in the Console) and has been renamed to SessionTransactionStarted. #84030
  • Fixed a rare issue where the failure to apply a Pebble manifest change (typically due to block device failure or unavailability) could result in an incorrect LSM state. Such a state would likely result in a panic soon after the failed application. This change alters the behavior of Pebble to panic immediately in the case of a failure to apply a change to the manifest. #83735
  • Fixed a bug which could crash nodes when visiting the DB Console Statements page. This bug was present since version v21.2.0. #83714
  • Moved connection OK log and metric to same location after auth completes for consistency. This resolves an inconsistency (see linked issue) in the DB Console where the log and metric did not match. #84103
  • CockroachDB previously did not normalize timestamp/timestamptz - timestamp/timestamptz subtraction the way PostgreSQL does in some cases (depending on the query). This is now fixed. #83999
  • Custom time period selection is now aligned between the Metrics and SQL Activity pages in the DB Console. #84184
  • Fixed a critical bug (#83687) introduced in v22.1.0 where a failure to transfer a lease in the joint config may result in range unavailability. The fix allows the original leaseholder to reacquire the lease so that lease transfer can be retried. #84145
  • Fixed a minor bug that caused internal errors and poor index recommendations when running EXPLAIN statements. #84220
  • Fixed a bug where ALTER TABLE ... SET LOCALITY REGIONAL BY ROW could leave the region ENUM type descriptor unaware of a dependency on the altered table. This would, in turn, wrongly permit a DROP REGION to succeed, rendering the table unusable. Note that this fix does not help existing clusters which have already run such an ALTER TABLE; see #84322 for more information on this case. #84339
  • Fixed a bug that could cause internal errors in rare cases when running queries with GROUP BY clauses. #84307
  • Fixed a bug in transaction conflict resolution which could allow backups to wait on long-running transactions. #83900
  • Fixed an internal error node ... with MaxCost added to the memo that could occur during planning when calculating the cardinality of an outer join when one of the inputs had 0 rows. #84377

Contributors

This release includes 42 merged PRs by 26 authors.

v22.1.3

Release Date: July 11, 2022

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Docker image

$ docker pull cockroachdb/cockroach:v22.1.3

Enterprise edition changes

  • Added the ability to provide short-lived OAuth 2.0 tokens as credentials to Google Cloud Storage and KMS. The token can be passed to the GCS or KMS URI via the new BEARER_TOKEN parameter for the "specified" authentication mode.

    Example GCS URI: gs://<bucket>/<key>?AUTH=specified&BEARER_TOKEN=<token>

    Example KMS URI: gs:///<key_resource>?AUTH=specified&BEARER_TOKEN=<token>

    There is no refresh mechanism associated with this token, so it is up to the user to ensure that its TTL is longer than the duration of the job or query that is using the token. The job or query may irrecoverably fail if one of its tokens expires before completion. #83210
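For example, a backup to GCS using a short-lived token might look like the following sketch (bucket, path, and token value are placeholders):

```sql
-- The token must remain valid for the full duration of the job;
-- there is no refresh mechanism.
BACKUP INTO 'gs://my-bucket/backups?AUTH=specified&BEARER_TOKEN=ya29.example-token'
    AS OF SYSTEM TIME '-10s';
```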

SQL language changes

  • CockroachDB now sends the Severity_Nonlocalized field in the pgwire Notice Response. #82939
  • Updated the pg_backend_pid() built-in function to match the data in the query cancellation key created during session initialization. This function is just for compatibility, and it does not return a real process ID. #83167
  • The log fields for captured index usage statistics are no longer redacted. #83293
  • When the cluster is in a mixed-version state during a rolling upgrade, CockroachDB now returns a message instructing users to either run hash-sharded index creation statements from a pre-v22.1 node or wait until the upgrade is finalized. Previously, CockroachDB simply threw a descriptor validation error. #83556
  • The sampled query telemetry log now includes a plan gist field. The plan gist field provides a compact representation of a logical plan for the sampled query. The field is written as a base64-encoded string. #83643
  • The error code reported when trying to use a system or virtual column in the STORING clause of an INDEX has been changed from XXUUU (internal error) to 0A000 (feature not supported). #83648
  • Foreign keys can now reference the crdb_region column in REGIONAL BY ROW tables even if crdb_region is not explicitly part of a UNIQUE constraint. This is possible since crdb_region is implicitly included in every index on REGIONAL BY ROW tables as the partitioning key. This applies to whichever column is used as the partitioning column, in case a different name is used with a REGIONAL BY ROW AS... statement. #83815
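A sketch of the foreign-key pattern now allowed, assuming a multi-region database with regions already configured (table names are illustrative):

```sql
-- parent is REGIONAL BY ROW, so crdb_region is implicitly part of every
-- index on it and can be referenced without an explicit UNIQUE constraint.
CREATE TABLE parent (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid()
) LOCALITY REGIONAL BY ROW;

CREATE TABLE child (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    parent_region crdb_internal_region NOT NULL,
    parent_id UUID NOT NULL,
    FOREIGN KEY (parent_region, parent_id) REFERENCES parent (crdb_region, id)
);
```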

Operational changes

  • Disk stalls no longer prevent the CockroachDB process from crashing when Fatal errors are emitted. #83127
  • Added a new cluster setting bulkio.backup.checkpoint_interval which controls the minimum interval between writes of progress checkpoints to external storage. #83266
  • The application name associated with a SQL session is no longer considered redactable information. #83553

Command-line changes

  • The cockroach demo command now enables rangefeeds by default. You can restore the previous behavior by starting the command with the --auto-enable-rangefeeds=false flag. #83344

DB Console changes

  • The DB Console shows a more helpful error message when the Jobs page times out, and an informational message appears after 2 seconds of loading to indicate that loading might take a while. Previously, it would show the message Promise timed out after 30000 ms. #82722
  • The Statement Details page was renamed to Statement Fingerprint. The Statement Fingerprint page now shows charts for: Execution and Planning Time, Rows Processed, Execution Retries, Execution Count, and Contention. #82960
  • The time interval component on the Statements and Transactions pages has been added to the Statement Fingerprint Overview and Explain Plans tabs, and the Transaction Details page. #82721
  • Added a confirmation modal to the reset SQL Stats button. #83142
  • Application names and database names are now sorted in the dropdown menus. #83334
  • A new single column called Rows Processed, displayed by default, combines the columns rows read and rows written on the Statements and Transactions pages. #83511
  • The time interval selected on the Metrics and SQL Activity pages is now aligned. If the user changes the time interval on one page, the value will be the same on the other. #83507
  • Added a label to the Statement, Statement Fingerprint, and Transaction pages, with information about the time interval for which we're showing information. The Execution Stats tab was removed from the Statement Fingerprint page. #83333
  • Removed the 10 and 30 minute options on the SQL Activity page. #83542
  • On the Statements page, users can no longer filter statements by searching for text in the EXPLAIN plan. #83652
  • Updated the tooltips on the Statements and Transactions pages in the DB Console for improved user experience. #83540

Bug fixes

  • Fixed a bug where, in earlier v22.1 releases, added validation could cause problems for descriptors which carried invalid back references due to a previous bug in v21.1. This stricter validation could result in a variety of query failures. CockroachDB now weakens the validation to permit the corruption. A subsequent fix in v22.2 is scheduled that will repair the invalid reference. #82859
  • Added missing support for preparing a DECLARE cursor statement with placeholders. #83001
  • CockroachDB now treats node unavailable errors as retry-able changefeed errors. #82874
  • CockroachDB now ensures running changefeeds do not inhibit node shutdown. #82874
  • Last Execution time now shows the correct value on the Statement Fingerprint page. #83114
  • CockroachDB now applies the proper multiplying factor to the contention value on the Statement Details page. #82960
  • CockroachDB now prevents disabling TTL with ttl = 'off' to avoid conflicting with other TTL settings. To disable TTL, use RESET (ttl). #83216
  • Fixed a panic that could occur if the inject_retry_errors_enabled cluster setting is true and an INSERT is executed outside of an explicit transaction. #83193
  • Previously, a user could be connected to a database but unable to see the metadata for that database in pg_catalog if the user did not have privileges for the database. Now, users can always see the pg_catalog metadata for a database they are connected to (see #59875). #83360
  • The Statement Fingerprint page now finds the stats when the unset application filter is selected. #83334
  • Fixed a bug where no validation was performed when adding a virtual computed column which was marked NOT NULL. This meant that it was possible to have a virtual computed column with an active NOT NULL constraint despite having rows in the table for which the column was NULL. #83353
  • Fixed the behavior of the soundex function when passed certain Unicode inputs. Previously, certain Unicode inputs could result in crashes, errors, or incorrect output. #83435
  • Fixed a bug where a lock could be held for a long period of time when adding a new column to a table (or altering a column type). This contention could make the Jobs page non-responsive and job adoption slow. #83306
  • Fixed a bug where a panic could occur during server startup when restarting a node which is running a garbage collection job. #83474
  • The period selected on the Metrics page time picker is preserved when refreshing the page, and no longer changes to a custom period. #83507
  • Changefeeds no longer error out when attempting to checkpoint during intermediate pause-requested or cancel-requested states. #83569
  • CockroachDB now retries S3 operations when they error out with a read connection reset error instead of failing the top-level job. #83581
  • The Statements table for a transaction in the Transaction Details page now shows the correct number of statements for a transaction. #83651
  • Fixed a bug that prevented partial indexes from being used in some query plans. For example, a partial index with a predicate WHERE a IS NOT NULL was not previously used if a was a NOT NULL column. #83241
  • Index joins now consider functional dependencies from their input when determining equivalent columns instead of returning an internal error. #83549
  • An error message that referred to a non-existent cluster setting now refers to the correct cluster setting: bulkio.backup.deprecated_full_backup_with_subdir.enabled. #81976
  • Previously, the CREATE statement for the crdb_internal.cluster_contended_keys view was missing the crdb_internal.table_indexes.descriptor_id = crdb_internal.cluster_contention_events.table_id JOIN condition, resulting in the view having more rows than expected. Now, the view properly joins the crdb_internal.cluster_contention_events and crdb_internal.table_indexes tables with all necessary JOIN conditions. #83523
  • Fixed a bug where ADD COLUMN or DROP COLUMN statements with the legacy schema changer could fail on tables with large rows due to exceeding the Raft command maximum size. #83816
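Per the TTL fix above, row-level TTL is disabled with RESET rather than by setting ttl = 'off'; a sketch (the table name events is illustrative):

```sql
-- Disallowed: conflicts with other TTL storage parameters.
-- ALTER TABLE events SET (ttl = 'off');

-- Supported way to disable row-level TTL on a table:
ALTER TABLE events RESET (ttl);
```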

Performance improvements

  • This release significantly improves the performance of IMPORT statements when the source produces data that is not sorted by the destination table's primary key, especially if the destination table has a very large primary key with many columns. #82746
  • Decommissioning nodes is now substantially faster, particularly for small to moderately loaded nodes. #82680
  • Queries with filters containing tuples in = ANY expressions, such as (a, b) = ANY(ARRAY[(1, 10), (2, 20)]), are now index accelerated. #83467
  • Fixed a bug where it was possible to accrue MVCC garbage for much longer than needed. #82967

Contributors

This release includes 70 merged PRs by 36 authors.

v22.1.2

Release Date: June 22, 2022

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Docker image

$ docker pull cockroachdb/cockroach:v22.1.2

Enterprise edition changes

  • CSV is now a supported format for changefeeds. This only works with initial_scan='only' and does not work with diff/resolved options. #82355
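A sketch of the newly supported format (the sink URI and table name are placeholders):

```sql
-- CSV changefeeds require initial_scan = 'only' and are incompatible
-- with the diff and resolved options.
CREATE CHANGEFEED FOR TABLE orders
    INTO 'gs://my-bucket/orders-export?AUTH=implicit'
    WITH format = 'csv', initial_scan = 'only';
```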

SQL language changes

  • The bulkio.ingest.sender_concurrency_limit cluster setting can be used to adjust the concurrency at which any one SQL node, across all operations that it is running (e.g., RESTOREs, IMPORTs, and schema changes), will send bulk ingest requests to the KV storage layer. #81789
  • The sql.ttl.range_batch_size cluster setting is deprecated. #82711
  • The pgwire DESCRIBE command is now supported for use against a cursor created with the DECLARE command. This improves compatibility with PostgreSQL and is needed for compatibility with psycopg3 server-side cursors. #82772
  • Fixed an issue where SHOW BACKUP with privileges displayed grant statements with incorrect syntax (specifically, without the object type). As an example, previously displayed: GRANT ALL ON status TO j4; Now displayed: GRANT ALL ON TYPE status TO j4; #82727
  • Added the spanconfig.kvsubscriber.update_behind_nanos metric to track the latency between realtime and the last update handled by the KVSubscriber. This metric can be used to monitor the staleness of a node's view of reconciled spanconfig state. #82895

DB Console changes

  • The time picker component has been improved such that users can use keyboard input to select a time without having to type " (UTC)". #82495
  • The time picker now opens directly to the custom time selection menu when a custom time is already selected. A "Preset Time Ranges" navigation link has been added to go to the preset options from the custom menu. #82495

Bug fixes

  • The output of SHOW CREATE VIEW now properly includes the keyword MATERIALIZED for materialized views. #82196
  • Fixed the identity_generation column in the information_schema.columns table so its value is either BY DEFAULT, ALWAYS, or NULL. #82184
  • Disk write probes during node liveness heartbeats will no longer get stuck on stalled disks, instead returning an error once the operation times out. Additionally, disk probes now run in parallel on nodes with multiple stores. #81476
  • Fixed a bug where an unresponsive node (e.g., a node with a stalled disk) could prevent other nodes from acquiring its leases, effectively stalling these ranges until the node was shut down or recovered. #81815
  • Fixed a crash that could happen when preparing a statement with unknown placeholder types. #82647
  • Previously, when adding a column to a pre-existing table and adding a partial index referencing that column in the transaction, DML operations against the table while the schema change was ongoing would fail. Now these hazardous schema changes are not allowed. #82668
  • Fixed a bug where CockroachDB would sometimes automatically retry the BEGIN statement of an explicit transaction. #82681
  • Fixed a bug where draining/drained nodes could re-acquire leases during an import or an index backfill. #80834
  • Fixed a bug where the startup of an internal component after a server restart could result in the delayed application of zone configuration. #82858
  • Previously, using AS OF SYSTEM TIME of two different statements in the same line would result in an assertion error. This is now a PG error with code 0A000. #82654
  • Fixed a bug where KV requests, in particular export requests issued during a backup, were rejected incorrectly causing the backup to fail with a batch timestamp <ts> must be after replica GC threshold error. The requests were rejected on the pretext that their timestamp was below the garbage collection threshold of the key span. This was because the protected timestamps were not considered when computing the garbage collection threshold for the key span being backed up. Protected timestamp records hold up the garbage collection threshold of a span during long-running operations such as backups to prevent revisions from being garbage collected. #82757

Contributors

This release includes 54 merged PRs by 31 authors. We would like to thank the following contributors from the CockroachDB community:

  • likzn (first-time contributor)

v22.1.1

Release Date: June 6, 2022

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Docker image

$ docker pull cockroachdb/cockroach:v22.1.1

General changes

  • When using Azure Storage for data operations, CockroachDB now calculates the storage account URL from the provided AZURE_ENVIRONMENT query parameter. If not specified, this defaults to AzurePublicCloud to maintain backward compatibility. This parameter should not be used when the cluster is in a mixed-version or upgrading state, as nodes that have not been upgraded will continue to send requests to AzurePublicCloud even in the presence of this parameter. #80801

Enterprise edition changes

  • Previously, backups in the base directory of a Google Cloud Storage bucket would not be discovered by SHOW BACKUPS. These backups will now appear correctly. #80493
  • Changefeeds to Google Cloud Platform no longer require topic creation permission if all topics being written to already exist. #81684

SQL language changes

  • The ttl_job_cron storage parameter is now displayed by default in SHOW CREATE TABLE output, along with the other specified reloptions. #80292
  • Added the crdb_internal.cluster_locks virtual table, which exposes the current state of locks on keys tracked by concurrency control. The virtual table displays metadata on locks currently held by transactions, as well as operations waiting to obtain the locks, and as such can be used to visualize active contention. The VIEWACTIVITY or VIEWACTIVITYREDACTED role option, or the admin role, is required to access the virtual table; however, if the user only has the VIEWACTIVITYREDACTED role option, the key on which a lock is held will be redacted. #80517
  • BACKUP, IMPORT, and RESTORE jobs will be paused instead of entering a failed state if they continue to encounter transient errors once they have retried a maximum number of times. The user is responsible for cancelling or resuming the job from this state. #80434
  • Added a sql.conn.failures counter metric that shows the number of failed SQL connections. #80987
  • Constraints that only include hidden columns are no longer excluded in SHOW CONSTRAINTS. This behavior can be changed using the show_primary_key_constraint_on_not_visible_columns session variable. #80637
  • Added a sql.txn.contended.count metric that exposes the total number of transactions that experienced contentions. #81070
  • Automatic statistics collection can now be enabled or disabled for individual tables, taking precedence over the sql.stats.automatic_collection.enabled, sql.stats.automatic_collection.fraction_stale_rows, or sql.stats.automatic_collection.min_stale_rows cluster settings. The table settings may be set at table creation time, or later via ALTER TABLE ... SET. Note that any row mutations which occurred a minute or two before disabling automatic statistics collection via ALTER TABLE ... SET may trigger statistics collection, but DML statements submitted after the setting change will not. #81019
  • Added a new session variable, enable_multiple_modifications_of_table, which can be used instead of the cluster variable sql.multiple_modifications_of_table.enabled to allow statements containing multiple INSERT ON CONFLICT, UPSERT, UPDATE, or DELETE subqueries modifying the same table. As with sql.multiple_modifications_of_table.enabled, be warned that with this session variable enabled, there is nothing to prevent the table corruption seen in issue #70731 from occurring if the same row is modified multiple times by different subqueries of a single statement. It is best to rewrite these statements, but the session variable is provided as an aid if this is not possible. #79930
  • Fixed a small typo when using DateStyle and IntervalStyle. #81550
  • Added an is_grantable column to SHOW GRANTS FOR {role} for consistency with other SHOW GRANTS commands. #81820
  • Improved query performance for crdb_internal.cluster_locks when issued with constraints in the WHERE clause on table_id, database_name, or table_name columns. #81261
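The per-table statistics settings described above are table storage parameters; assuming the v22.1 parameter names mirror the corresponding cluster settings, a sketch (the table name orders and the threshold values are illustrative):

```sql
-- Disable automatic statistics collection for one table only, taking
-- precedence over the sql.stats.automatic_collection.* cluster settings:
ALTER TABLE orders SET (sql_stats_automatic_collection_enabled = false);

-- Or tune the staleness thresholds for a single table:
ALTER TABLE orders SET (
    sql_stats_automatic_collection_fraction_stale_rows = 0.5,
    sql_stats_automatic_collection_min_stale_rows = 1000
);
```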

Command-line changes

  • The standalone SQL shell executable cockroach-sql can now be installed (renamed/symlinked) as cockroach, and invoked via cockroach sql. For example, the following commands are all equivalent: cockroach-sql -f foo.sql, cockroach-sql sql -f foo.sql; and after running ln -s cockroach-sql cockroach, cockroach sql -f foo.sql. #80930
  • Added a new flag --advertise-http-addr, which explicitly sets the HTTP advertise address that is used to display the URL for DB Console access and for proxying HTTP connections between nodes as described in #73285. It may be necessary to set --advertise-http-addr in order for these features to work correctly in some deployments. Previously, the HTTP advertise address was derived from the OS hostname, the --advertise-addr, and the --http-addr flags, in that order. The new logic will override the HTTP advertise host with the host from --advertise-addr first if set, and then the host from --http-addr. The port will never be inherited from --advertise-host and will only be inherited from --http-addr, which is 8080 by default. #81316
  • If node decommissioning is slow or stalls, the descriptions of some "stuck" replicas are now printed to the operator. #79516
  • cockroach debug zip now includes system tables using a denylist instead of an allowlist. #81383

DB Console changes

  • Added more job types to the Type filter on the Jobs page. #80128
  • Added a dropdown filter on the Node Diagnostics page to view by Active, Decommissioned, or All nodes. #80320
  • The custom selection in the time picker on the Metrics dashboards, SQL Activity page, and other DB Console pages now defaults to the currently selected time. #80794
  • Updated all dates to use 24h format in UTC. #81747
  • Fixed the size of the table area on the Statements and Transactions pages to prevent cutting off the columns selector and filters. #81746
  • The Jobs page of the DB Console now shows a status column for changefeed jobs and displays the highwater_timestamp value in a separate column. This more closely matches the SQL output of SHOW CHANGEFEED JOBS. The highwater timestamp now displays as the nanosecond system time value by default, with the human-readable value in the tooltip, since the decimal value is copied and pasted more often. #81757
  • The tooltip for a Session's status on the Sessions page has been updated with a more explicit definition: A session is Active if it has an open explicit or implicit transaction (individual SQL statement) with a statement that is actively running or waiting to acquire a lock. A session is Idle if it is not executing a statement. #81904

Bug fixes

  • Previously, CockroachDB could lose the INT2VECTOR and OIDVECTOR type of some arrays. This is now fixed. #78581
  • Previously, CockroachDB could encounter an internal error when evaluating queries with OFFSET and LIMIT clauses when the addition of the offset and the limit value would be larger than int64 range. This is now fixed. #79878
  • Previously, a custom time-series metric sql.distsql.queries.spilled was computed incorrectly, leading to an exaggerated number. This is now fixed. #79882
  • Fixed a bug where NaN coordinates when using ST_Intersects/ST_Within/ST_Covers would return true instead of false for point-in-polygon operations. #80202
  • Added a detailed error message for index out of bounds when decoding a binary tuple datum. This does not fix the root cause, but should give more insight into what is happening. #79933
  • Fixed a bug where ST_MinimumBoundingCircle would panic with infinite coordinates and a num_segments argument. #80347
  • Addressed an issue where automatic encryption-at-rest data key rotation would be disabled after a node restart without a store key rotation. #80563
  • Fixed the formatting/printing behavior for ALTER DEFAULT PRIVILEGES, which will correct some mistaken error messages. #80327
  • Fixed a bug whereby the cluster version could regress due to a race condition. #80711
  • Fixed a rare crash which could occur when restarting a node after dropping tables. #80572
  • Previously, in very rare circumstances, CockroachDB could incorrectly evaluate queries with an ORDER BY clause when the prefix of the ordering was already provided by the index ordering of the scanned table. This is now fixed. #80714
  • Index recommendations are no longer presented for system tables in the output of EXPLAIN statements. #80952
  • Fixed a goroutine leak when internal rangefeed clients received certain kinds of retriable errors. #80798
  • Fixed a bug that allowed duplicate constraint names for the same table if the constraints were on hidden columns. #80637
  • Errors encountered when sending rebalancing hints to the storage layer during IMPORTs and index creation are now only logged, and no longer cause the job to fail. #80469
  • Fixed a bug where if a transaction's commit time is pushed forward from its initial provisional time, an enclosing CREATE MATERIALIZED VIEW AS ... might fail to find other descriptors created in the same transaction during the view's backfill stage. The detailed descriptor of this bug is summarized in issue #79015. #80908
  • Contention statistics are now collected for SQL statistics when tracing is enabled. #81070
  • Fixed a bug in row-level TTL where the last range key of a table may overlap with a separate table or index, resulting in an "error decoding X bytes" error message when performing row-level TTL. #81262
  • Fixed a bug where format_type on the void type resulted in an error. #81323
  • Fixed a bug in which some prepared statements could result in incorrect results when executed. This could occur when the prepared statement included an equality comparison between an index column and a placeholder, and the placeholder was cast to a type that was different from the column type. For example, if column a was of type DECIMAL, the following prepared query could produce incorrect results when executed: SELECT * FROM t_dec WHERE a = $1::INT8; #81345
  • Fixed a bug where ST_MinimumBoundingCircle with NaN coordinates could panic. #81462
  • Fixed a panic that was caused by setting the tracing session variable using SET LOCAL or ALTER ROLE ... SET. #81505
  • Fixed a bug where GRANT ALL TABLES IN SCHEMA would not resolve the correct database name if it was explicitly specified. #81553
  • Previously, cancelling COPY commands would show an XXUUU error, instead of 57014. This is now fixed. #81595
  • Fixed a bug that caused errors with the message unable to vectorize execution plan: unhandled expression type in rare cases. This bug had been present since v21.2.0. #81591
  • Fixed a bug where changefeeds could fail permanently if encountering an error while planning their distribution, even though such errors are usually transient. #81685
  • Fixed a gap in disk-stall detection. Previously, disk stalls during filesystem metadata operations could go undetected, inducing deadlocks. Now stalls during these types of operations will correctly fatal the process. #81752
  • Fixed an issue where the encryptionStatus field on the Stores debug page of the DB Console would display an error instead of displaying encryption details when encryption-at-rest is enabled. #81500
  • In v21.1, a bug was introduced whereby default values were recomputed when populating data in new secondary indexes for columns which were added in the same transaction as the index. This would arise, for example, in cases like ALTER TABLE t ADD COLUMN f FLOAT8 UNIQUE DEFAULT (random()). If the default expression was not volatile, then the recomputation was harmless. If, however, the default expression was volatile, the data in the secondary index would not match the data in the primary index: a corrupt index would have been created. This bug has now been fixed. #81549
  • Previously, running ALTER DEFAULT PRIVILEGES IN SCHEMA {virtual schema} caused a panic. This now returns the error message {virtual schema} is not a physical schema. #81704
  • Previously, CockroachDB would encounter an internal error when executing queries with lead or lag window functions when the default argument had a different type than the first argument. This is now fixed. #81756
  • Fixed an issue where a left lookup join could have incorrect results. In particular, some output rows could have non-NULL values for right-side columns when the right-side columns should have been NULL. This issue only existed in v22.1.0 and prior development releases of v22.1. #82076
  • The Statements and Transactions pages no longer crash when a search term includes *. #82085
  • The special characters * and ^ are no longer highlighted when searching on the Statements and Transactions pages. #82085
  • Previously, if materialized view creation failed during the backfill stage, CockroachDB would properly clean up the view but not any of the back references. Back and forward references for materialized views are now cleaned up. #82099
  • Fixed a bug where \copy in the CLI would panic. #82197
  • Fixed a bug introduced in v21.2 where the sql-stats-compaction job had a chance of not being scheduled during an upgrade from v21.1 to v21.2, causing persisted statement and transaction statistics to be enabled without memory accounting. #82283
  • Fixed an edge case where VALUES clauses with nested tuples could fail to be type-checked properly in rare cases. #82298
  • The CREATE SEQUENCE ... AS statement now returns a valid error message when the specified type name does not exist. #82322
  • The SHOW STATISTICS output no longer displays statistics involving dropped columns. #82315
  • Fixed a bug where changefeeds created before upgrading to v22.1 would silently fail to emit any data other than resolved timestamps. #82312
  • Fixed a rare crash indicating a nil-pointer dereference in google.golang.org/grpc/internal/transport.(*Stream).Context(...). #80911

Performance improvements

  • Bulk ingestion of unsorted data during IMPORT and schema changes uses a higher level of parallelism to send produced data to the storage layer. #80386

Docker

  • Refactored the initialization process of the Docker image to accommodate initialization scripts with memory storage. #80355

Contributors

This release includes 183 merged PRs by 55 authors. We would like to thank the following contributors from the CockroachDB community:

  • Nathan Lowe (first-time contributor)

v22.1.0

Release Date: May 24, 2022

With the release of CockroachDB v22.1, we've made a variety of management, performance, security, and compatibility improvements. Check out a summary of the most significant user-facing changes and then upgrade to CockroachDB v22.1. For a release announcement with further focus on key features, see the v22.1 blog post.

We're running a packed schedule of launch events over the next few weeks, which include two opportunities to win limited-edition swag. Join our Office Hours session for all your questions, a coding livestream where we'll play with new features, and a live talk on building and preparing for scale.

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Docker image

$ docker pull cockroachdb/cockroach:v22.1.0

CockroachDB Cloud

Feature summary

This section summarizes the most significant user-facing changes in v22.1.0. For a complete list of features and changes, including bug fixes and performance improvements, see the release notes for previous testing releases. You can also search for what's new in v22.1 in our docs.

Note:

"Core" features are freely available in the core version of CockroachDB and do not require an enterprise license. "Enterprise" features require an enterprise license. CockroachDB Cloud clusters include all enterprise features. You can also use cockroach demo to test enterprise features in a local, temporary cluster.

SQL

Version Feature Description
Core Hash-sharded indexes Hash-sharded indexes distribute sequential data across multiple nodes within your cluster, eliminating hotspots in certain types of scenarios. This feature is now generally available (GA) after a previous experimental release.
Core Super regions Super regions allow you to define a set of regions on the database such that any REGIONAL BY TABLE table based in the super region or any REGIONAL BY ROW partition in the super region will have all their replicas in regions that are also within the super region. Their primary use is for data domiciling. This feature is in preview release.
Core Support for AWS DMS Support for AWS Database Migration Service (AWS DMS) allows users to migrate data from an existing database to CockroachDB.
Core Admission control Admission control helps maintain cluster performance and availability when some nodes experience high load. This was previously available as a preview release but is now generally available and enabled by default.
Core Set a quality of service (QoS) level for SQL sessions with admission control In an overload scenario where CockroachDB cannot service all requests, you can identify which requests should be prioritized by setting a quality of service (QoS). Admission control queues work throughout the system. You can set the QoS level on its queues for SQL requests submitted in a session to background, regular, or critical.
Core Rename objects within the transaction that creates them It is now possible to swap names for tables and other objects within the same transaction that creates them. For example: CREATE TABLE foo(); BEGIN; ALTER TABLE foo RENAME TO bar; CREATE TABLE foo(); COMMIT;
Core Drop ENUM values using ALTER TYPE...DROP VALUE Drop a specific value from the user-defined type's list of values. The ALTER TYPE...DROP VALUE statement is now available by default to all instances. It was previously disabled by default, requiring the cluster setting enable_drop_enum_value to enable it.
Core Support the UNION variant for recursive CTE For compatibility with PostgreSQL, WITH RECURSIVE...UNION statements are now supported in recursive common table expressions.
Core Locality optimized search supports LIMIT clauses Queries with a LIMIT clause on a single table, either explicitly written or implicit such as in an uncorrelated EXISTS subquery, now scan that table with improved latency if the table is defined with LOCALITY REGIONAL BY ROW and the number of qualified rows residing in the local region does not exceed the hard limit (the sum of the LIMIT clause and optional OFFSET clause values). This optimization is only applied if the hard limit is 100,000 or less.
Core Surface errors for testing retry logic To help developers test their application's retry logic, they can set the session variable inject_retry_errors_enabled so that any statement that is not a SET statement will return a transaction retry error if it is run inside an explicit transaction.
Core Row Level TTL (preview release) With Time to Live ("TTL") expiration on table rows, also known as Row-Level TTL, CockroachDB automatically deletes rows once they have been stored longer than their specified expiration time. This avoids the complexities and potential performance impacts of managing expiration at the application level. See the documentation for Limitations that are part of this preview release.
Core DATE and INTERVAL style settings available by default The session variables datestyle_enabled and intervalstyle_enabled, and the cluster settings sql.defaults.datestyle.enabled and sql.defaults.intervalstyle.enabled no longer have any effect. When the upgrade to v22.1 is finalized, all of these settings are effectively interpreted as true, enabling the use of the intervalstyle and datestyle session and cluster settings.
Core Optimized node draining with connection_wait If you cannot tolerate connection errors during node drain, you can now change the server.shutdown.connection_wait cluster setting to allow SQL client connections to gracefully close before CockroachDB forcibly closes them. For guidance, see Node Shutdown.
Core PostgreSQL wire protocol query cancellation In addition to the CANCEL QUERY SQL statement, developers can now use the cancellation method specified by the PostgreSQL wire protocol.
Core Gateway node connection limits To control the maximum number of connections a gateway node can have open at one time from non-superusers (users other than root or members of an admin role), use the server.max_connections_per_gateway cluster setting. If a new non-superuser connection would exceed this limit, an error message and code are returned.
Core Support for WITH GRANT OPTION privilege See Security.
Core Transaction contention events Transaction contention events enable you to determine where contention is occurring in real-time for affected active statements, and historically for past statements.
Core Index recommendations Index recommendations indicate when your query would benefit from an index and provide a suggested statement to create the index.
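
Several of the SQL features above can be combined in a single schema. The following is a minimal sketch, not a definitive recipe; the table and column names are hypothetical:

```sql
-- Hypothetical table combining two v22.1 features: a hash-sharded index (GA)
-- and row-level TTL (preview release).
CREATE TABLE events (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    ts TIMESTAMPTZ NOT NULL DEFAULT now(),
    payload JSONB,
    INDEX (ts) USING HASH  -- hash-sharded to avoid a hotspot on sequential timestamps
) WITH (ttl_expire_after = '30 days');  -- rows are deleted ~30 days after creation

-- Renaming an object within the transaction that creates it is now allowed:
BEGIN;
CREATE TABLE foo ();
ALTER TABLE foo RENAME TO bar;
CREATE TABLE foo ();
COMMIT;
```

Note that USING HASH no longer requires an explicit bucket count; a default is used when it is omitted.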

Developer Experience

Version Feature Description
Core Support for Prisma CockroachDB now supports the Prisma ORM. A new tutorial and example app are available.
Core Lightweight cockroach-sql executable A new client-only SQL shell for users who do not operate the cluster themselves.

Recovery and I/O

Version Feature Description
Enterprise Alter changefeeds The new SQL statement ALTER CHANGEFEED enables users to modify active changefeeds, preventing the need to start a new changefeed.
Enterprise Track metrics per changefeed Create labels for capturing a metric across one or more specified changefeeds. This is an experimental feature that you can enable using a cluster setting.
Core Changefeed support for tables with multiple column families Changefeeds can now target tables with more than one column family using either the split_column_families option or the FAMILY keyword. Changefeeds will emit individual messages per column family on a table.
Enterprise Stream data to Google Cloud Pub/Sub Changefeeds can now stream data to a Pub/Sub sink.
Core Export to the Apache Parquet format Using a SQL EXPORT statement, users can now choose to export data to the Parquet format.
Core Backup encryption enhancements See Security.
Core Select an S3 storage class for backups Associate your backup objects with a specific storage class in your Amazon S3 bucket.
Core Exclude a table's data from backups Exclude a table's row data from a backup. This may be useful for tables with high-churn data that you would like to garbage collect more quickly than the incremental backup schedule.
Core Store incremental backups in custom locations Specify a different storage location for incremental backups using the new BACKUP option incremental_location. This makes it easier to retain full backups longer than incremental backups, as is often required for compliance reasons.
Core Rename database on restore An optional new_db_name clause on RESTORE DATABASE statements allows the user to rename the database they intend to restore. This can be helpful in disaster recovery scenarios when restoring to a temporary state.
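
The backup and restore options above can be sketched as follows. This is illustrative only; the bucket URIs, database name, and table name are placeholders:

```sql
-- Back up into a collection, pinning an S3 storage class and routing
-- incrementals to a custom location:
BACKUP DATABASE movr INTO 's3://bucket/backups?S3_STORAGE_CLASS=GLACIER'
    WITH incremental_location = 's3://other-bucket/incrementals';

-- Exclude a high-churn table's row data from future backups:
ALTER TABLE movr.sessions SET (exclude_data_from_backup = true);

-- Restore the database under a different name:
RESTORE DATABASE movr FROM LATEST IN 's3://bucket/backups'
    WITH new_db_name = 'movr_restored';
```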

Database operations

Version Feature Description
Core DB Console access from a specified node On the Advanced Debug page, DB Console access can be routed from the currently accessed node to a specific node on the cluster.
Core Alerting rules Every CockroachDB node exports an alerting rules template at http://<host>:<http-port>/api/v2/rules/. These rule definitions are formatted for easy integration with Prometheus' Alertmanager.
Core NOSQLLOGIN role option The NOSQLLOGIN role option grants a user access to the DB Console without also granting SQL shell access.
Core Hot ranges observability The Hot Ranges page of the DB Console provides details about ranges receiving a high number of reads or writes.
Core Per-replica circuit breakers When individual ranges become temporarily unavailable, requests to those ranges are refused by a per-replica "circuit breaker" mechanism instead of hanging indefinitely.

Security

Version Feature Description
Core Support of Google Cloud KMS for encrypted backups Google Cloud KMS is now supported as a key management system for encrypted BACKUP and RESTORE operations.
Enterprise Rotate backup encryption keys Keep your backups secure by rotating the AWS or Google Cloud KMS keys you use to encrypt your backups and adding them to an existing key chain using the new ALTER BACKUP statement.
Core Support for WITH GRANT OPTION privilege Users granted a privilege with WITH GRANT OPTION can in turn grant that privilege to others. The owner of an object implicitly has the GRANT OPTION for all privileges, and the GRANT OPTION is inherited through role memberships. This matches functionality offered in PostgreSQL.
Core Support client-provided password hashes for credential definitions CockroachDB now recognizes pre-computed password hashes when presented to the regular PASSWORD option when creating or updating a role.
Core Support SCRAM-SHA-256 SASL authentication method CockroachDB is now able to authenticate users via the DB Console and SQL sessions when the client provides a cleartext password and the stored credentials are encoded using the SCRAM-SHA-256 algorithm. For SQL client sessions, authentication methods password (cleartext passwords) and cert-password (TLS client cert or cleartext password) with either CRDB-BCRYPT or SCRAM-SHA-256 stored credentials can now be used. Previously, only CRDB-BCRYPT stored credentials were supported for cleartext password authentication.
Core Support HSTS headers to enforce HTTPS Clusters can now be configured to send HSTS headers with HTTP requests in order to enable browser-level enforcement of HTTPS for the cluster host. Once the headers are present, after an initial request, browsers will force HTTPS on all subsequent connections to the host. This reduces the possibility of MitM attacks, to which HTTP-to-HTTPS redirects are vulnerable.
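
As a brief sketch of the new WITH GRANT OPTION support (the table and role names here are hypothetical):

```sql
-- Grant SELECT to analyst, along with the right to grant it onward:
GRANT SELECT ON TABLE accounts TO analyst WITH GRANT OPTION;

-- analyst can now pass SELECT along to other roles:
-- (run as analyst)
GRANT SELECT ON TABLE accounts TO auditor;
```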

Backward-incompatible changes

Before upgrading to CockroachDB v22.1, be sure to review the following backward-incompatible changes and adjust your deployment as necessary.

  • Using SESSION_USER in a projection or WHERE clause now returns the SESSION_USER instead of the CURRENT_USER. For backward compatibility, use session_user() for SESSION_USER and current_user() for CURRENT_USER. #70444
  • Placeholder values (e.g., $1) can no longer be used for role names in ALTER ROLE statements or for role names in CREATE ROLE/DROP ROLE statements. #71498
  • Support has been removed for:
    • IMPORT TABLE ... CREATE USING
    • IMPORT TABLE ... <non-bundle-format> DATA, where <non-bundle-format> refers to CSV, Delimited, PGCOPY, or AVRO. These are formats that do not define the table schema in the same file as the data. To work around this change, use CREATE TABLE with the same schema that was previously passed into the IMPORT statement, followed by IMPORT INTO the newly created table.
  • Non-standard cron expressions that specify seconds or year fields are no longer supported. #74881
  • Changefeeds will now filter out virtual computed columns from events by default. #74916
  • The environment variable that controls the max amount of CPU that can be taken by password hash computations during authentication was renamed from COCKROACH_MAX_BCRYPT_CONCURRENCY to COCKROACH_MAX_PW_HASH_COMPUTE_CONCURRENCY. Its semantics remain unchanged. #74301
  • The volatility of cast operations between strings and intervals or timestamps has changed from immutable to stable. This means that these cast operations can no longer be used in computed columns or partial index definitions. Instead, use the following built-in functions: parse_interval, parse_date, parse_time, parse_timetz, parse_timestamp, or to_char. Upon upgrade to v22.1, CockroachDB will automatically rewrite any computed columns or partial indexes that use the affected casts to use the new built-in functions. #78455
  • Users can no longer define the subdirectory of their full backup. This deprecated syntax can be enabled by changing the new bulkio.backup.deprecated_full_backup_with_subdir cluster setting to true. #80145
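
The volatility change for string casts above means schemas like the following need rewriting. This sketch uses hypothetical table and column names:

```sql
-- No longer allowed in a computed column, because the cast is now stable:
--   ts TIMESTAMP AS (raw_ts::TIMESTAMP) STORED
-- Use the immutable built-in instead:
CREATE TABLE readings (
    raw_ts STRING,
    ts TIMESTAMP AS (parse_timestamp(raw_ts)) STORED
);
```

On upgrade to v22.1, existing computed columns and partial indexes using the affected casts are rewritten automatically.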

Deprecations

  • Using the cockroach node drain command without specifying a node ID is deprecated. #73991
  • The flag --self of the cockroach node decommission command is deprecated. Instead, operators should specify the node ID of the target node as an explicit argument. The node that the command is connected to should not be a target node. #74319
  • The experimental_enable_hash_sharded_indexes session variable is deprecated as hash-sharded indexes are enabled by default. Enabling this setting results in a no-op. #78038
  • The BACKUP TO syntax to take backups is deprecated, and will be removed in a future release. Create a backup collection using the BACKUP INTO syntax. #78250
  • Users can no longer define the subdirectory of their full backup. This deprecated syntax can be enabled by changing the new bulkio.backup.deprecated_full_backup_with_subdir cluster setting to true. #80145
  • SHOW BACKUP without the IN keyword to specify a subdirectory is deprecated and will be removed in a future release. Users should create only collection-based backups and view them with SHOW BACKUP FROM <backup> IN <collection>. #79116
  • Using the RESTORE FROM syntax without an explicit subdirectory pointing to a backup in a collection is deprecated, and will be removed in a future release. Use RESTORE FROM <backup> IN <collection> to restore a particular backup in a collection. #78250
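
Put together, the deprecated forms and their collection-based replacements look like this. The URIs and subdirectory name are placeholders:

```sql
-- Deprecated:  BACKUP TO 's3://bucket/path';
-- Preferred collection-based form:
BACKUP INTO 's3://bucket/backups';

-- Inspect and restore a specific backup in the collection
-- (the subdirectory name shown is a placeholder):
SHOW BACKUP FROM '2022/05/24-120000.00' IN 's3://bucket/backups';
RESTORE FROM '2022/05/24-120000.00' IN 's3://bucket/backups';
```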

Known limitations

For information about new and unresolved limitations in CockroachDB v22.1, with suggested workarounds where applicable, see Known Limitations.

Education

Area Topic Description
Cockroach University New Java Course Event-Driven Architecture for Java Developers teaches you how to handle message queues by building the transactional outbox pattern into your application using CockroachDB's built-in Change Data Capture feature.
Cockroach University New SQL for Application Developers Courses The new SQL for Application Developers skill path helps developers new to SQL learn how to model their application object relationships in a relational database and use transactions. Its first two courses, now available as a limited preview, are Getting Started With SQL for Application Developers and Modeling Object Relationships in SQL.
Docs CockroachDB Cloud Guidance New docs on how to use the Cloud API to programmatically manage the lifecycle of clusters within your organization, how to use the ccloud command to create, manage, and connect to CockroachDB Cloud clusters, and how to do performance benchmarking with a CockroachDB Serverless cluster.
Docs Improved SQL Guidance New documentation on transaction guardrails via limiting the number of rows written or read in a transaction and improved content on the use of indexes in performance recipes and secondary indexes.
Docs New ORM tutorials and sample apps for CockroachDB Serverless Tutorials for AWS Lambda, Knex.JS, Prisma, Netlify, and Vercel.
Docs Additional developer resources Best practices for serverless functions and testing/CI environments, and a new client connection reference page with CockroachDB Serverless, Dedicated, and Self-Hosted connection strings for fully-supported drivers/ORMs.
Docs Improvements for operators How to Choose a Deployment Option helps you choose between our managed services (CockroachDB Serverless and Dedicated) and CockroachDB Self-Hosted. In addition to new Sizing guidance on the Production Checklist, we have also updated production, monitoring/alerting, and troubleshooting guidance across docs, centered on a new Common Issues to Monitor page.
Docs Security doc improvements We have restructured and improved the Security section, including supported authentication methods.
Docs Content overhauls Stream Data (Changefeeds) and Performance docs have also been restructured and improved.
Docs Improved release notes Release notes (What's New? pages) are now compiled to one page per major version.
Docs New Glossary The new Glossary page under the Get Started section of the docs compiles two existing glossaries and includes additional definitions for terms commonly found within the docs.
Docs New Nav The new navigation menu structure for the docs better classifies types of user tasks.

v22.1.0-rc.1

Release Date: May 9, 2022

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

$ docker pull cockroachdb/cockroach-unstable:v22.1.0-rc.1

Bug fixes

  • Fixed a very rare case where CockroachDB could incorrectly evaluate queries with an ORDER BY clause when the prefix of ordering was already provided by the index ordering of the scanned table. #80715

  • Fixed a rare crash when encountering a nil-pointer dereference in google.golang.org/grpc/internal/transport.(*Stream).Context(...). #80936

Contributors

This release includes 3 merged PRs by 3 authors.

v22.1.0-beta.5

Release Date: May 3, 2022

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

$ docker pull cockroachdb/cockroach-unstable:v22.1.0-beta.5

Security updates

  • crdb_internal.reset_sql_stats() and crdb_internal.reset_index_usage_stats() built-in functions now check if the user has the admin role. #80384

  • SCRAM authentication and password encryption are not enabled by default. #80248

Enterprise edition changes

  • Backups run by secondary tenants now write protected timestamp records to protect their target schema objects from garbage collection during backup execution. #80670

SQL language changes

  • The cluster settings cloudstorage.<provider>.read.node_rate_limit and cloudstorage.<provider>.read.node_burst_limit can now be used to limit throughput when reading from cloud storage during a RESTORE or IMPORT. #80281
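
As a sketch, limiting per-node read throughput from S3 during RESTORE or IMPORT might look like the following. The values shown are illustrative, not recommendations:

```sql
SET CLUSTER SETTING cloudstorage.s3.read.node_rate_limit = '100MiB';
SET CLUSTER SETTING cloudstorage.s3.read.node_burst_limit = '200MiB';
```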

Bug fixes

  • Fixed a bug where automatic encryption-at-rest data key rotation would become disabled after a node restart without a store key rotation. #80564

  • Fixed a bug whereby the cluster version could regress due to a race condition. #80712

Performance improvements

  • Bulk ingestion of unsorted data during IMPORT and schema changes now uses a higher level of parallelism to send produced data to the storage layer. #80487

Miscellaneous

Docker

  • Refactored the initialization process of the Docker image to accommodate the use case with memory storage. #80558

Contributors

This release includes 29 merged PRs by 20 authors.

v22.1.0-beta.4

Release Date: April 26, 2022

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

$ docker pull cockroachdb/cockroach-unstable:v22.1.0-beta.4

Backward-incompatible changes

  • Users can no longer define the subdirectory of their full backup. This deprecated syntax can be enabled by changing the new bulkio.backup.deprecated_full_backup_with_subdir cluster setting to true. #80145

SQL language changes

  • Introduced a new cluster setting, sql.multi_region.allow_abstractions_for_secondary_tenants.enabled, to allow the operator to control if a secondary tenant can make use of multi-region abstractions. #80013
  • Introduced new cloudstorage.<provider>.write.node_rate_limit and cloudstorage.<provider>.write.node_burst_limit cluster settings to limit the rate at which bulk operations write to the designated cloud storage provider. #80243

DB Console changes

  • Statements are no longer separated by aggregation interval on the Statement Page. Now, all statements with the same fingerprint show as a single row. #80137

Operational changes

  • If a user does not pass a subdirectory in their backup command, CockroachDB will only ever attempt to create a full backup. Previously, a backup command with AS OF SYSTEM TIME and no subdirectory would increment on an existing backup if the AS OF SYSTEM TIME backup’s resolved subdirectory equaled the existing backup’s directory. Now, an error is thrown. #80145

Bug fixes

  • HTTP 304 responses no longer result in error logs. #79855
  • Fixed a bug that may have caused a panic if a Kafka server being written to by a changefeed failed at the wrong moment. #79908
  • Fixed a bug that would prevent CockroachDB from resolving the public schema if a changefeed is created with a cursor timestamp prior to when the public schema migration happened. #80165
  • Fixed a bug where running an AS OF SYSTEM TIME incremental backup with an end time earlier than the previous backup's end time could lead to an incremental backup chain in the wrong order. Now, an error is thrown if the time specified in AS OF SYSTEM TIME is earlier than the previous backup's end time. #80145

Contributors

This release includes 38 merged PRs by 27 authors.

v22.1.0-beta.3

Release Date: April 18, 2022

Downloads

Full CockroachDB executable

SQL-only command-line client executable

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

$ docker pull cockroachdb/cockroach-unstable:v22.1.0-beta.3

Enterprise edition changes

  • Unified the syntax for defining the behavior of initial scans on changefeeds by extending the initial_scan option to accept three possible values: yes, no, or only. #79471
  • Changefeeds can now target tables with more than one column family using either the split_column_families option or the FAMILY keyword. Changefeeds will emit individual messages per column family on a table. #79448
  • The full_table_name option is now supported for all changefeed sinks. #79448
  • LATEST files are no longer overwritten and are now versioned and written in the /metadata/latest directory for non-mixed-version clusters. #79553
  • Previously, the ALTER CHANGEFEED statement would not work with changefeeds that use fully qualified names in their CREATE CHANGEFEED statement. This is now fixed by ensuring that each existing target is added with its fully qualified name so that it can be resolved in validation checks. Every changefeed will now display the fully qualified name of every target in the SHOW CHANGEFEED JOB query. #79745
  • Added a changefeed.backfill.scan_request_size setting to control scan request size during backfill. #79710
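
The changefeed changes above can be combined in one statement. This is a sketch; the sink URI and table name are placeholders:

```sql
CREATE CHANGEFEED FOR TABLE orders
    INTO 'kafka://broker:9092'
    WITH initial_scan = 'only',   -- emit current rows, then stop (yes/no/only)
         split_column_families,   -- one message per column family
         full_table_name;         -- emit fully qualified table names
```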

SQL language changes

  • CockroachDB now ensures the user passes the same number of locality-aware URIs for the full backup destination as the incremental_location parameter (for example, BACKUP INTO LATEST IN ($1, $2, $3) WITH incremental_location = ($4, $5, $6)). #79600
  • EXPLAIN (DDL), when invoked on statements supported by the declarative schema changer, prints a plan of what the schema changer will do. This can be useful for anticipating the complexity of a schema change (for example, anything involving backfill or validation operations might be slow to run) and for troubleshooting. EXPLAIN (DDL, VERBOSE) produces a more detailed plan. #79780
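
For example, previewing the declarative schema changer's plan for a hypothetical column addition might look like:

```sql
EXPLAIN (DDL) ALTER TABLE orders ADD COLUMN total DECIMAL NOT NULL DEFAULT 0;
-- For a more detailed plan:
EXPLAIN (DDL, VERBOSE) ALTER TABLE orders ADD COLUMN total DECIMAL NOT NULL DEFAULT 0;
```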

Operational changes

  • Added a new time-series metric, storage.marked-for-compaction-files, for the count of files marked for compaction. This is useful for monitoring storage-level background migrations. #79370
  • Changefeed creation and failure event logs are now emitted to the TELEMETRY logging channel. #79749

Command-line changes

  • Introduced a new ttllogger workload which creates a TTL table emulating a "log" with rows expiring after the duration specified in the --ttl flag. #79482

Bug fixes

  • Previously, privileges for restored tables were being generated incorrectly without taking into consideration their parent schema's default privilege descriptor. This is now fixed. #79534
  • Fixed a bug that caused an internal error when the inner expression of a column access expression evaluated to NULL. For example, evaluation of the expression (CASE WHEN b THEN ((ROW(1) AS a)) ELSE NULL END).a would error when b is false. This bug was present since v19.1 or earlier. #79529
  • Fixed a bug that caused an error when accessing a named column of a labeled tuple. The bug only occurred when an expression could produce one of several different tuples. For example, (CASE WHEN true THEN (ROW(1) AS a) ELSE (ROW(2) AS a) END).a would fail to evaluate. This bug was present since v22.1.0. Although present in previous versions, it was impossible to encounter due to limitations that prevented using tuples in this way. #79529
  • Previously, queries reading from an index or primary key on FLOAT or REAL columns DESC would read -0 for every +0 value stored in the index. This has been fixed to correctly read +0 for +0 and -0 for -0. #79533
  • Fixed some cases where a job or schema change that had encountered an error would continue to execute for some time before eventually failing. #79713
  • Previously, the optional is_called parameter of the setval function would default to false when not specified. It now defaults to true to match PostgreSQL behavior. #79779
  • On the Raft Messages page, the date picker and drag-to-zoom functionality are now fixed. #79791
  • Fixed a bug where Pebble compaction heuristics could allow a large compaction backlog to accumulate, eventually leading to high read amplification. #79597

Contributors

This release includes 49 merged PRs by 34 authors.

v22.1.0-beta.2

Release Date: April 12, 2022

Downloads

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

$ docker pull cockroachdb/cockroach-unstable:v22.1.0-beta.2

Enterprise edition changes

  • The job scheduler is more efficient and should no longer lock up the jobs and scheduled jobs tables. #79328
  • Removed the default values from the SHOW CHANGEFEED JOBS output. #79361
  • Checkpoint files are no longer overwritten; they are now versioned and written side by side in the /progress directory. Temporary checkpoint files are no longer written. #79314
  • Changefeeds can now be distributed across pods in tenant environments. #79303

SQL language changes

  • Help text for creating indexes or primary key constraints no longer mentions BUCKET_COUNT because it can now be omitted and a default is used. #79087
  • Added support for showing default privileges in a schema. The SHOW DEFAULT PRIVILEGES statement now supports optionally passing a schema name: SHOW DEFAULT PRIVILEGES [opt_for_role] [opt_schema_name]. Example:

    SHOW DEFAULT PRIVILEGES IN SCHEMA s2
    
    ----
    role      for_all_roles  object_type  grantee    privilege_type
    testuser  false          tables       testuser2  DROP
    testuser  false          tables       testuser2  SELECT
    testuser  false          tables       testuser2  UPDATE
    
    SHOW DEFAULT PRIVILEGES FOR ROLE testuser IN SCHEMA s2
    
    ----
    role      for_all_roles  object_type  grantee    privilege_type
    testuser  false          tables       testuser2  DROP
    testuser  false          tables       testuser2  SELECT
    testuser  false          tables       testuser2  UPDATE
    

    #79177

  • Added support for SHOW SUPER REGIONS FROM DATABASE. Example:

    SHOW SUPER REGIONS FROM DATABASE mr2
    
    ----
    mr2  ca-central-sr  {ca-central-1}
    mr2  test           {ap-southeast-2,us-east-1}
    

    #79190

  • When you run SHOW BACKUP on collections, you must now use the FROM keyword: SHOW BACKUP FROM <subdir> IN <dest>. #79116

  • SHOW BACKUP without the IN keyword to specify a subdirectory is deprecated and will be removed in a future release. Users should create only collection-based backups and view them with SHOW BACKUP FROM <backup> IN <collection>. #79116
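    A sketch of the recommended pattern (the collection URI and subdirectory name are illustrative):

    SHOW BACKUPS IN 's3://bucket/backups';
    SHOW BACKUP FROM '2022/07/28-120000.00' IN 's3://bucket/backups';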

  • Added extra logging for COPY to the SQL_EXEC channel when the sql.trace.log_statement_execute cluster setting is enabled. #79298

  • An error message is now logged to the SQL_EXEC channel when parsing fails. #79298

  • Introduced an expect_and_ignore_not_visible_columns_in_copy session variable. If this is set, COPY FROM with no column specifiers assumes hidden columns are present in the copy data, but ignores them when applying COPY FROM. #79189
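    A minimal sketch of the new session variable (the table name is illustrative):

    SET expect_and_ignore_not_visible_columns_in_copy = true;
    -- COPY t FROM STDIN now expects values for hidden columns (such as rowid)
    -- in the input data, but discards them when writing rows.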

  • Changed the default value of sql.zone_configs.allow_for_secondary_tenant.enabled to false. Moreover, this setting is no longer settable by secondary tenants; instead, it is now a tenant read-only cluster setting. #79160

  • SHOW BACKUP now reports accurate row and byte size counts on backups created by a tenant. #79339

  • Memory and disk usage are now reported for the lookup joins in EXPLAIN ANALYZE. #79351

  • Privileges on a database are no longer inherited by tables or schemas created in that database. For example, GRANT ALL ON DATABASE TEST TO foo; CREATE TABLE test.t() no longer results in foo having ALL on the table. Users should rely on default privileges instead. You can achieve the same behavior by running USE test; ALTER DEFAULT PRIVILEGES GRANT ALL ON TABLES TO foo; #79509
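    The replacement pattern can be sketched as follows, reusing the database and role names from the example above:

    USE test;
    ALTER DEFAULT PRIVILEGES GRANT ALL ON TABLES TO foo;
    CREATE TABLE test.t ();  -- foo now receives ALL on t via the default privileges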

  • The InvalidPassword error code is now returned if the password is invalid or the user does not exist when authenticating. #79515

Operational changes

  • The kv.allocator.load_based_rebalancing_interval cluster setting now lets operators set the interval at which each store in the cluster will check for load-based lease or replica rebalancing opportunities. #79073
  • Rangefeed memory budgets have a cluster setting, kv.rangefeed.memory_budgets.enabled, that disables memory budgeting for all new feeds. This setting could be used on CockroachDB Dedicated clusters to disable budgeting as a mitigation for bugs, for example if feeds abort while nodes have sufficient free memory. #79321
  • Rangefeed memory budgets can now be disabled on the fly by changing the cluster setting, without restarting the feed. #79321

DB Console changes

  • Made minor styling changes on the Hot Ranges page to follow the same style as other pages. #79501
  • On the Statement Details page, changed the order of tabs to Overview, Explain Plan, Diagnostics, and Execution Stats and changed the Explain Plan tab to Explain Plans (plural). #79234

Bug fixes

  • Fixed a null pointer exception during the cleanup of a failed or cancelled RESTORE job. #78992
  • Fixed a bug where num_runs was incremented twice for certain jobs upon being started. #79052
  • A bug has been fixed that caused errors when trying to evaluate queries with NULL values annotated as a tuple type, such as NULL:::RECORD. This bug was present since version 19.1. #78531
  • ALTER TABLE [ADD|DROP] COLUMN are now subject to admission control, which will prevent these operations from overloading the storage engine. #79209
  • Index usage stats are now properly captured for index joins. #79241
  • SHOW SCHEMAS FROM <schema> now includes user-defined schemas. #79308
  • A distributed query that results in an error on the remote node no longer has an incomplete trace. #79193
  • IMPORT INTO no longer creates duplicate entries with UNIQUE constraints in REGIONAL BY ROW tables and tables utilizing UNIQUE WITHOUT INDEX constraints. A new post-IMPORT validation step for those tables now fails and rolls back the IMPORT in such cases. #79323
  • Fixed a bug in IO which could result in admission control failing to rate limit when traffic was stalled such that no work was admitted, despite the store being in an unhealthy state. #79343
  • The execution time as reported on DISTSQL diagrams within the statement bundle collected via EXPLAIN ANALYZE (DEBUG) is no longer negative when the statement encountered an error. #79373
  • CockroachDB reports fewer "memory budget exceeded" errors when performing lookup joins. #79351
  • LIMIT queries with an ORDER BY clause that scan the index of a virtual system table, such as pg_type, no longer return incorrect results. #79460
  • nextval and setval are non-transactional except when they are called in the same transaction in which the sequence was created. This change prevents a bug where creating a sequence and calling nextval and setval on it within a transaction caused the query containing nextval to hang. #79506
  • A bug has been fixed that caused the optimizer to generate query plans with logically incorrect lookup joins. The bug can only occur in queries with an inner join, e.g., t1 JOIN t2, if all of the following are true:

    • The join contains an equality condition between columns of both tables, e.g., t1.a = t2.a.
    • A query filter or CHECK constraint constrains a column to a set of specific values, e.g., t2.b IN (1, 2, 3). In the case of a CHECK constraint, the column must be NOT NULL.
    • A query filter or CHECK constraint constrains a column to a range, e.g., t2.c > 0. In the case of a CHECK constraint, the column must be NOT NULL.
    • An index contains a column from each of the criteria above, e.g., INDEX t2(a, b, c).

    This bug has been present since version 21.2.0. #79504
  • A bug has been fixed which caused the optimizer to generate invalid query plans which could result in incorrect query results. The bug, which has been present since v21.1.0, can appear if all of the following conditions are true:

    • The query contains a semi-join, such as queries in the form SELECT * FROM a WHERE EXISTS (SELECT * FROM b WHERE a.a @> b.b).
    • The inner table has a multi-column inverted index containing the inverted column in the filter.
    • The index prefix columns are constrained to a set of values via the filter or a CHECK constraint, e.g., with an IN operator. In the case of a CHECK constraint, the column is NOT NULL. #79504

Performance improvements

  • Uniqueness checks performed for inserts into REGIONAL BY ROW tables no longer search all regions for duplicates. In some cases, these checks will now only search a subset of regions when inserting a single row of constant values. #79251
  • Bulk ingestion writes now use a lower priority for admission control. #79352
  • Browser caching of files loaded in DB Console is now supported. #79382

Contributors

This release includes 84 merged PRs by 43 authors.

v22.1.0-beta.1

Release Date: April 4, 2022

Downloads

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

$ docker pull cockroachdb/cockroach-unstable:v22.1.0-beta.1

Backward-incompatible changes

  • The volatility of cast operations between strings and intervals or timestamps has changed from immutable to stable. This means that these cast operations can no longer be used in computed columns or partial index definitions. Instead, use the following built-in functions: parse_interval, parse_date, parse_time, parse_timetz, parse_timestamp, or to_char. Upon upgrade to v22.1, CockroachDB will automatically rewrite any computed columns or partial indexes that use the affected casts to use the new built-in functions. #78455

Enterprise edition changes

  • The tenant GC job now waits for protected timestamp records that target the tenant and have a protect time earlier than the tenant's drop time. #78389
  • Users can now provide an end time for changefeeds through the end_time option. When this option is provided, the changefeed runs until it reaches the specified end timestamp, and then the changefeed job ends with a successful status. In addition, a new initial_scan_only option is available. When this option is set, the changefeed job runs until the initial scan has completed, and then ends with a successful status. #78381
  • Schema changes are no longer blocked while core-style changefeeds are executing. #78360

SQL language changes

  • Added support for ALTER DATABASE ... ALTER SUPER REGION. This command allows the user to change the regions of an existing super region. For example, after successful execution of the following, super region test1 will consist of three regions, ca-central-1, us-west-1, and us-east-1.


    ALTER DATABASE db3 ALTER SUPER REGION "test1" VALUES "ca-central-1", "us-west-1", "us-east-1";
    

    ALTER SUPER REGION follows the same rules as ADD or DROP super region. #78462

  • The session variables datestyle_enabled and intervalstyle_enabled, and the cluster settings sql.defaults.datestyle.enabled and sql.defaults.intervalstyle.enabled no longer have any effect. After upgrading to v22.1, these settings are effectively always interpreted as true. #78455

  • The BUCKET_COUNT for hash-sharded indexes is now shown in the crdb_internal.table_indexes table. #78625

  • Implemented the COPY FROM ... ESCAPE ... syntax. #78417

  • Disabled index recommendations in EXPLAIN output for REGIONAL BY ROW tables, as the previous recommendations were not valid. #78676

  • Added a crdb_internal.validate_ttl_scheduled_jobs built-in function. This verifies that each table points to a valid scheduled job which will action the deletion of expired rows. #78373

  • Added a crdb_internal.repair_ttl_table_scheduled_job built-in function, which repairs the given TTL table's scheduled job by supplanting it with a valid schedule. #78373

Operational changes

Command-line changes

DB Console changes

  • The Replication Dashboard now includes a graph of snapshot bytes received per node. #78580
  • The _status/nodes endpoint is now available to all users with the VIEWACTIVITY role option, not just admins. Also, in the DB Console, the Nodes Overview and Node Reports pages will now display unredacted information containing node hostnames and IP addresses for all users with the VIEWACTIVITY role option. #78362
  • Improved colors for status badges on the Jobs page. Three statuses on the Jobs page, cancel-requested, pause-requested, and revert-failed, previously had blue status badge colors that didn't properly reflect their meaning. This change modifies the badge colors to indicate meaning: cancel-requested and pause-requested now have gray badges, and revert-failed has a red badge. #78611
  • Fixed a bug where a node in the UNAVAILABLE state would not have latency defined, causing the network page to crash. #78628

Bug fixes

  • CockroachDB may now fetch fewer rows when performing lookup and index joins on queries with a LIMIT clause. #78473
  • Fixed a bug where certain catalog interactions that occurred concurrently with node failures were not internally retried. #78698
  • Fixed a bug that caused the optimizer to generate invalid query plans which could result in incorrect query results. The bug, which has been present since version v21.1.0, can appear if all of the following conditions are true:
    1. The query contains a semi-join, such as queries in the form: SELECT * FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.a = t2.a);.
    2. The inner table has an index containing the equality column, like t2.a in the example query.
    3. The index contains one or more columns that prefix the equality column.
    4. The prefix columns are NOT NULL and are constrained to a set of constant values via a CHECK constraint or an IN condition in the filter. #78972
  • Fixed a bug where the LATEST file that points to the latest full backup in a collection was written to a directory path with the wrong structure. #78281

Performance improvements

  • Ranges are split and rebalanced during bulk ingestion only when they become full, reducing unnecessary splits and merges. #78328
  • Unused JS files are no longer downloaded when the DB Console loads. #78665

Contributors

This release includes 82 merged PRs by 40 authors.

v22.1.0-alpha.5

Release Date: March 28, 2022

Downloads

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

$ docker pull cockroachdb/cockroach-unstable:v22.1.0-alpha.5

Enterprise edition changes

  • Changefeeds now record the message size histogram. #77711
  • Users can now perform initial scans on newly added changefeed targets by executing the following statement: ALTER CHANGEFEED <job_id> ADD <targets> WITH initial_scan. The default behavior is to perform no initial scan on newly added targets; users can request this explicitly by replacing initial_scan with no_initial_scan. #77263
  • The value of the server.child_metrics.enabled cluster setting is now set to true. #77561
  • CockroachDB now limits the number of concurrent catchup scan requests issued by rangefeed clients. #77866

SQL language changes

  • TTL metrics are now labelled by relation name if the server.child_metrics.enabled cluster setting is enabled and the ttl_label_metrics storage parameter is set to true. This is to prevent potentially unbounded cardinality on TTL-related metrics. #77567
  • Added support for the MOVE command, which moves a SQL cursor without fetching any rows from it. MOVE is identical to FETCH, including in its arguments and syntax, except it doesn't return any rows. #74877
  • Added the enable_implicit_transaction_for_batch_statements session variable. It defaults to false. When true, multiple statements in a single query (a "batch statement") will all run in the same implicit transaction, which matches the PostgreSQL wire protocol. This setting is provided for users who want to preserve the behavior of CockroachDB versions v21.2 and lower. #77865
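    A sketch of opting into the PostgreSQL-compatible behavior for a session (the batch contents are illustrative):

    SET enable_implicit_transaction_for_batch_statements = true;
    -- A batch such as "INSERT INTO t VALUES (1); INSERT INTO t VALUES (2);"
    -- sent as a single query string now runs in one implicit transaction.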
  • The enable_implicit_transaction_for_batch_statements session variable now defaults to false. #77973
  • The experimental_enable_hash_sharded_indexes session variable is deprecated as hash-sharded indexes are enabled by default. Enabling this setting results in a no-op. #78038
  • Added a new crdb_internal.merge_stats_metadata built-in function to group statement statistics metadata. #78064
  • Changefeeds can now specify column families to target, using the syntax [TABLE] foo FAMILY bar. For example, CREATE CHANGEFEED FOR TABLE foo FAMILY bar, TABLE foo FAMILY baz, TABLE users will create a feed that watches the bar and baz column families of foo, as well as the whole table users. A family must exist with that name when the feed is created. If all columns in a watched family are dropped in an ALTER TABLE statement, the feed will fail with an error, similar to dropping a table. The behavior is otherwise similar to feeds created using split_column_families. #77964
  • Casts that are affected by the DateStyle or IntervalStyle session variables used in computed columns or partial index definitions will be rewritten to use immutable functions after upgrading to v22.1. #78229
  • When the user runs SHOW BACKUP on an encrypted incremental backup, they must set the encryption_info_dir directory to the full backup directory in order for SHOW BACKUP to work. #78096
  • The BACKUP TO syntax to take backups is deprecated, and will be removed in a future release. Create a backup collection using the BACKUP INTO syntax. #78250
  • Using the RESTORE FROM syntax without an explicit subdirectory pointing to a backup in a collection is deprecated, and will be removed in a future release. Use RESTORE FROM <backup> IN <collection> to restore a particular backup in a collection. #78250
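    A sketch of the non-deprecated forms (the collection URI and subdirectory name are illustrative):

    BACKUP INTO 's3://bucket/backups';
    RESTORE FROM '2022/03/28-120000.00' IN 's3://bucket/backups';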

Command-line changes

  • Fixed a bug where starting cockroach demo with the --global flag would not simulate latencies correctly when combined with the --insecure flag. #78169

DB Console changes

Bug fixes

  • Fixed a bug where successive schema change backfills could skip spans that were checkpointed by an initial backfill that was restarted. #77797
  • Fixed a bug where statements that arrived in a batch during the simple query protocol would all execute in their own implicit transactions. Now, we match the PostgreSQL wire protocol behavior, so all these statements share the same implicit transaction. If a BEGIN is included in a statement batch, then the existing implicit transaction is upgraded to an explicit transaction. #77865
  • Fixed a bug in the optimizer that prevented expressions of the form (NULL::STRING[] <@ ARRAY['x']) from being folded to NULL. This bug was introduced in v21.2.0. #78042
  • Fixed broken links to the Statement Details page from the Advanced Debug and Sessions pages. #78099
  • Fixed a memory leak in the Pebble block cache. #78260

Contributors

This release includes 61 merged PRs by 31 authors.

v22.1.0-alpha.4

Release Date: March 21, 2022

Downloads

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

$ docker pull cockroachdb/cockroach-unstable:v22.1.0-alpha.4

Security updates

  • Clusters can be configured to send HSTS headers with HTTP requests in order to enable browser-level enforcement of HTTPS for the cluster host. This is controlled by setting the server.hsts.enabled cluster setting to true (default: false). Once the headers are present, after an initial request, browsers will force HTTPS on all subsequent connections to the host. This reduces the possibility of man-in-the-middle (MitM) attacks, which HTTP-to-HTTPS redirects are vulnerable to. #77244
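    A sketch of enabling the header:

    SET CLUSTER SETTING server.hsts.enabled = true;
    -- Subsequent HTTP responses from the cluster include a
    -- Strict-Transport-Security header, so browsers enforce HTTPS.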

Enterprise edition changes

  • Added a created time column to the crdb_internal.active_range_feeds virtual table to improve observability and debuggability of the rangefeed system. #77597
  • Incremental backups created by BACKUP ... INTO or BACKUP ... TO are now stored by default under the path /incrementals within the backup destination, rather than under each backup's path. This enables easier management of cloud-storage provider policies specifically applied to incremental backups. #75970

SQL language changes

  • Added a sql.auth.resolve_membership_single_scan.enabled cluster setting, which changes the query for an internal role membership cache. Previously the code would recursively look up each role in the membership hierarchy, leading to multiple queries. With the setting on, it uses a single query. The setting is true by default. #77359
  • The data type of shard columns created for hash-sharded indexes has changed from INT4 to INT8. This should have no effect on behavior or performance. #76930
  • Introduced the sql.contention.resolver.queue_size metric. This gauge metric gives the current length of the queue of contention events, each awaiting translation of its transaction ID into a transaction fingerprint ID. This metric can be used to assess the backlog of unresolved contention events. #77514
  • Introduced the sql.contention.resolver.retries metric. This counter metric reflects the number of retries performed by the contention event store attempting to translate the transaction ID of the contention event into a transaction fingerprint ID. Any spike in this metric could indicate a possible anomaly in the transaction ID resolution protocol. #77514
  • Introduced the sql.contention.resolver.failed_resolution metric. This counter metric gives the total number of failed attempts to translate the transaction ID in the contention events into a transaction fingerprint ID. A spike in this metric indicates likely severe failure in the transaction ID resolution protocol. #77514
  • Added support for date_trunc(string, interval) for compatibility with PostgreSQL. This built-in function is required to support Django 4.1. #77508
  • Introduced a sql.contention.event_store.duration_threshold cluster setting. This cluster setting specifies the minimum contention duration to cause the contention events to be collected into the crdb_internal.transaction_contention_events virtual table (default: 0). #77623
  • Added support for super region functionality. Super regions allow the user to define a set of regions on the database such that any REGIONAL BY TABLE table based in the super region, or any REGIONAL BY ROW partition in the super region, will have all of its replicas in regions within the super region. The primary use is data domiciling. Super regions are an experimental feature and are gated behind the enable_super_regions session variable. The cluster setting sql.defaults.super_regions.enabled is used to enable super regions (default: false). #76620
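    A sketch of opting in and defining a super region (the database and region names are illustrative):

    SET CLUSTER SETTING sql.defaults.super_regions.enabled = true;
    SET enable_super_regions = true;
    ALTER DATABASE mr ADD SUPER REGION "usa" VALUES "us-east-1", "us-west-1";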

Operational changes

  • Added the server.shutdown.connection_wait cluster setting to the draining process configuration. This adds a draining phase where the server waits for SQL connections to be closed, and once all SQL connections are closed before timeout, the server proceeds to the next draining phase. This provides a workaround when customers encounter intermittent blips and failed requests when performing operations that are related to restarting nodes. #72991
  • The cluster settings admission.kv.tenant_weights.enabled and admission.kv.stores.tenant_weights.enabled can now be used to enable tenant weights in multi-tenant storage servers (default: false). Tenant weights are based on the number of ranges for each tenant, and allow for weighted fair sharing. #77575

Command-line changes

  • The cockroach debug tsdump command now allows viewing time-series data even in case of node failures by being re-run with the import filename set to -. #77247

DB Console changes

  • Added an alert banner on the Cluster Overview page that indicates when more than one node version is detected on the cluster. The alert lists the node versions detected and how many nodes are on each version. This provides more visibility into the progress of a cluster upgrade. #76932
  • The Compactions/Flushes graph on the Storage dashboard now shows bytes written by these operations, and has been split into separate per-node graphs. #77558
  • The Explain Plan tab of the Statement Details page now shows statistics for all the plans executed by the selected statement in the selected time period. #77632
  • Active operations can now be inspected in a new Active operations page linked from the Advanced Debug page. This facilitates viewing active traces and taking snapshots. #77712

Bug fixes

  • Fixed a bug where clicking the "Reset SQL stats" button on the Statements and Transactions pages caused, in DB Console, an infinite loading spinner and, in CockroachDB Cloud Console, the Statements/Transactions table to be reloaded without limiting to the time range that the user had selected. The button now correctly reloads the table according to the selected time in both DB Console and CockroachDB Cloud Console. #77571
  • Previously, the information_schema tables administrable_role_authorizations and applicable_roles were incorrectly always returning the current user for the grantee column. Now, the column will contain the correct role that was granted the parent role given in the role_name column. #77359
  • Fixed a bug that caused errors when attempting to create table statistics (with CREATE STATISTICS or EXPLAIN ANALYZE) for a table containing an index which indexed only virtual computed columns. This bug had been present since version v21.1.0. #77507
  • All automatic jobs are now hidden from the Jobs page of the DB Console. #77331
  • Added a limit of 7 concurrent asynchronous consistency checks per store, with an upper timeout of 1 hour. This prevents abandoned consistency checks from building up in some circumstances, which could lead to increasing disk usage as they held onto Pebble snapshots. #77433
  • Fixed a bug causing incorrect counts of under_replicated_ranges and over_replicated_ranges in the crdb_internal.replication_stats table for multi-region databases. #76430
  • Previously, intermittent validation failures could be observed on schema objects, where a job ID was detected as missing when validating objects in a transaction. This has been fixed. #76532
  • Previously, adding a hash-sharded index to a table watched by a changefeed could produce errors due to not distinguishing between backfills of visible columns and backfills of merely public ones, which may be hidden or inaccessible. This is now fixed. #77316
  • Fixed a bug that caused internal errors when COALESCE and IF expressions had inner expressions with different types that could not be cast to a common type. #77608
  • A zone config change event now includes the correct details of what was changed instead of incorrectly displaying undefined. #77773

Performance improvements

  • Improved the optimizer's cardinality estimates for predicates involving many constrained columns. This may result in better index selection for these queries. #76786
  • Improved jobs system resilience to scheduled jobs that may lock up jobs/scheduled job table for long periods of time. Each schedule now has a limited amount of time to complete its execution. The timeout is controlled via the jobs.scheduler.schedule_execution.timeout setting. #77372

Contributors

This release includes 112 merged PRs by 50 authors. We would like to thank the following contributors from the CockroachDB community:

  • Steve Kuznetsov (first-time contributor)

v22.1.0-alpha.3

Release Date: March 14, 2022

Downloads

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

$ docker pull cockroachdb/cockroach-unstable:v22.1.0-alpha.3

Enterprise edition changes

  • Altering the sink type of a changefeed is now disallowed. An attempt to change the sink type now returns an error message recommending that you create a new changefeed instead. #77152
  • Currently executing schedules are cancelled immediately when the jobs scheduler is disabled. #77306
  • The changefeed.backfill_pending_ranges Prometheus metric was added to track ongoing backfill progress of a changefeed. #76995
  • Changefeeds can now be created on tables with more than one column family. Previously, this would error. Now, we create a feed that will emit individual messages per column family. Primary key columns will appear in the key for all column families, but in the value only in the families they are in. For example, if a table foo has families primary containing the primary key and a string column, and secondary containing a different string column, you'll see two messages for an insert that will look like 0 -> {id: 0, s1: "val1"}, 0 -> {s2: "val2"}. If an update then only affects one family, you'll see only one message (e.g., 0 -> {s2: "newval"}). This behavior reflects CockroachDB internal treatment of column families: writes are processed and stored separately, with only the ordering and atomicity guarantees that would apply to updates to two different tables within a single transaction. Avro schema names will include the family name concatenated to the table name. If you don't specify family names in the CREATE or ALTER TABLE statement, the default family names will either be primary or of the form fam_<zero-indexed family id>_<underscore-delimited list of columns>. #77084

SQL language changes

  • Introduced the crdb_internal.transaction_contention_events virtual table, which exposes historical transaction contention events. The events exposed in the new virtual table also include transaction fingerprint IDs for both blocking and waiting transactions, which allows the new virtual table to be joined into the statement statistics and transaction statistics tables. The new virtual table requires either the VIEWACTIVITYREDACTED or VIEWACTIVITY role option to access. However, if the user has the VIEWACTIVITYREDACTED role option, the contending key is redacted. The contention events are stored in memory; the number of contention events stored is controlled via the sql.contention.event_store.capacity cluster setting. #76917
  • Added an initial implementation of a scheduled logger that captures index usage statistics to the telemetry logging channel. #76886
  • Added the ability for the TTL job to generate statistics on the number of rows and the number of expired rows in the table. This is off by default and is controlled by the ttl_row_stats_poll_interval storage parameter. #76837
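
    A minimal sketch of enabling the row-stats polling (the table name is hypothetical; the storage parameter name comes from the note above):

    ALTER TABLE events SET (ttl_row_stats_poll_interval = '10m');
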
  • An ambiguous unary operator error is now returned for ambiguous input such as ~'1', which can be interpreted either as an integer (resulting in -2) or as a bit string (resulting in 0). #76943
  • crdb_internal.default_privileges no longer incorrectly shows default privileges for databases where the default privilege was not actually defined. #77255
  • You can now create core changefeeds on tables with more than one column family. CockroachDB creates a feed that will emit individual messages per column family. Primary key columns will appear in the key for all column families, but in the value only in the families they are in. For example, if a table foo has families primary containing the primary key and a string column, and secondary containing a different string column, you'll see two messages for an insert that will look like 0 -> {id: 0, s1: "val1"}, 0 -> {s2: "val2"}. If an update then only affects one family, you'll see only one message (e.g., 0 -> {s2: "newval"}). This behavior reflects CockroachDB internal treatment of column families: writes are processed and stored separately, with only the ordering and atomicity guarantees that would apply to updates to two different tables within a single transaction. #77084
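
    Using the example names from the note above, a minimal sketch of a core changefeed over a two-family table:

    CREATE TABLE foo (
        id INT PRIMARY KEY,
        s1 STRING,
        s2 STRING,
        FAMILY "primary" (id, s1),
        FAMILY secondary (s2)
    );
    EXPERIMENTAL CHANGEFEED FOR foo;

    An insert into foo then emits one message per column family, as described above.
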
  • A new built-in scalar function crdb_internal.active_version() can now be used alongside crdb_internal.is_at_least_version() to determine which cluster version is currently active and choose client-side feature levels accordingly. #77233
  • IMPORT INTO with AVRO now supports Avro files with the following Avro types: long.time-micros, int.time-millis, long.timestamp-micros, long.timestamp-millis, and int.date. This feature works only if the user has created a CockroachDB table whose column types match the corresponding Avro types:

    AVRO          | CRDB
    <time-*>      | TIME
    <timestamp_*> | TIMESTAMP
    <date>        | DATE

    #76989
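
    For example (the table name and file path are hypothetical), importing an Avro file whose schema uses timestamp and date logical types into matching CockroachDB columns:

    CREATE TABLE t (ts TIMESTAMP, d DATE);
    IMPORT INTO t AVRO DATA ('nodelocal://0/data.avro');
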

DB Console changes

  • DB Console now displays locality information in problem ranges and range status. #76892
  • DB Console now displays is_leaseholder and lease_valid information in problem ranges and range status pages. #76892
  • Added the Hot Ranges page and linked to it on the sidebar. #77330
  • Removed stray parenthesis at the end of the duration time for a successful job. #77438

Bug fixes

  • Previously, a bug caused the Open Transaction chart in the Metrics Page to constantly increase for empty transactions. This issue has now been fixed. #77237
  • Previously, draining nodes in a cluster without shutting them down could stall foreground traffic in the cluster. This patch fixes this bug. #77246

Performance improvements

  • Queries of the form SELECT * FROM t1 WHERE filter_expression ORDER BY secondIndexColumn LIMIT n, where there is a NOT NULL CHECK constraint of the form CHECK (firstIndexColumn IN (const_1, const_2, const_3...)), can now be rewritten as a UNION ALL skip scan to avoid the previously required sort operation. #76893
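
    A hedged sketch of a query shape that can benefit from this rewrite (the schema is hypothetical; region is the firstIndexColumn and ts the secondIndexColumn from the pattern above):

    CREATE TABLE t1 (
        region STRING NOT NULL CHECK (region IN ('us-east', 'us-west', 'eu')),
        ts TIMESTAMP,
        payload STRING,
        INDEX (region, ts)
    );
    SELECT * FROM t1 WHERE payload != '' ORDER BY ts LIMIT 10;
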

Contributors

This release includes 108 merged PRs by 51 authors.

v22.1.0-alpha.2

Release Date: March 7, 2022

Downloads

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

icon/buttons/copy
$ docker pull cockroachdb/cockroach-unstable:v22.1.0-alpha.2

Backward-incompatible changes

  • Non-standard cron expressions that specify seconds or year fields are no longer supported. #74881
  • Changefeeds will now filter out virtual computed columns from events by default. #74916
  • The environment variable that controls the max amount of CPU that can be taken by password hash computations during authentication was renamed from COCKROACH_MAX_BCRYPT_CONCURRENCY to COCKROACH_MAX_PW_HASH_COMPUTE_CONCURRENCY. Its semantics remain unchanged. #74301

Security updates

  • CockroachDB is now able to authenticate users via the DB Console and through SQL sessions when the client provides a cleartext password and the stored credentials are encoded using the SCRAM-SHA-256 algorithm. Support for a SCRAM authentication flow is a separate feature and is not the target of this release note. In particular, for SQL client sessions it makes it possible to use the authentication methods password (cleartext passwords), and cert-password (TLS client cert or cleartext password) with either CRDB-BCRYPT or SCRAM-SHA-256 stored credentials. Previously, only CRDB-BCRYPT stored credentials were supported for cleartext password authentication. #74301
  • The hash method used to encode cleartext passwords before storing them is now configurable, via the new cluster setting server.user_login.password_encryption. Its supported values are crdb-bcrypt and scram-sha-256. The cluster setting only takes effect after all cluster nodes have been upgraded, at which point its default value is scram-sha-256. Prior to completion of the upgrade, the cluster behaves as if the cluster setting is set to crdb-bcrypt for backward compatibility. Note that the preferred way to populate password credentials for SQL user accounts is to pre-compute the hash client-side, and pass the precomputed hash via CREATE USER WITH PASSWORD, CREATE ROLE WITH PASSWORD, ALTER USER WITH PASSWORD, or ALTER ROLE WITH PASSWORD. This ensures that the server never sees the cleartext password. #74301
  • The cost of the hashing function for scram-sha-256 is now configurable via the new cluster setting server.user_login.password_hashes.default_cost.scram_sha_256. Its default value is 119680, which corresponds to an approximate password check latency of 50-100ms on modern hardware. This value should be increased over time to reflect improvements to CPU performance: the latency should not become so small that it becomes feasible to brute force passwords via repeated login attempts. Future versions of CockroachDB will likely update this default value. #74301
  • When using the default HBA authentication method cert-password for SQL client connections, and the SQL client does not present a TLS client certificate to the server, CockroachDB now automatically upgrades the password handshake protocol to use SCRAM-SHA-256 if the user's stored password uses the SCRAM encoding. The previous behavior of requesting a cleartext password is still used if the stored password is encoded using the CRDB-BCRYPT format. An operator can force clients to always request SCRAM-SHA-256 when a TLS client cert is not provided in order to guarantee the security benefits of SCRAM using the authentication methods cert-scram-sha-256 (either TLS client cert or SCRAM-SHA-256) and scram-sha-256 (only SCRAM-SHA-256). As in previous releases, mandatory cleartext password authentication can be requested (e.g., for debugging purposes) by using the HBA method password. This automatic protocol upgrade can be manually disabled using the new cluster setting server.user_login.cert_password_method.auto_scram_promotion.enable and setting it to false. Disable automatic protocol upgrades if, for example, certain client drivers are found to not support SCRAM-SHA-256 authentication properly. #74301
  • In order to promote a transition to SCRAM-SHA-256 for password authentication, CockroachDB now automatically attempts to convert stored password hashes to SCRAM-SHA-256 after a cleartext password authentication succeeds if the target hash method configured via server.user_login.password_encryption is scram-sha-256. This auto-conversion can happen either during SQL logins or HTTP logins that use passwords, whichever occurs first. When an auto-conversion occurs, a structured event of type password_hash_converted is logged to the SESSIONS channel. The PBKDF2 iteration count on the hash is chosen in order to preserve the latency of client logins, to remain similar to the latency incurred from the starting bcrypt cost. (For example, the default configuration of bcrypt cost 10 is converted to a SCRAM iteration count of 119680.) This choice, however, lowers the cost of brute forcing passwords for an attacker with access to the encoded password hashes, if they have access to ASICs or GPUs, by a factor of ~10. For example, if it would previously cost them $1,000,000 to brute force a crdb-bcrypt hash, it would now cost them "just" $100,000 to brute force the SCRAM-SHA-256 hash that results from this conversion. If an operator wishes to compensate for this, three options are available:
    1. Set up their infrastructure such that only passwords with high entropy can be used. For example, this can be achieved by disabling the ability of end-users to select their own passwords and auto-generating passwords for the user, or enforcing some entropy checks during password selection. This way, the entropy of the password itself compensates for the lower hash complexity.
    2. Manually select a higher SCRAM iteration count. This can be done either by pre-computing SCRAM hashes client-side and providing the pre-computed hash using ALTER USER WITH PASSWORD, or adjusting the cluster setting server.user_login.password_hashes.default_cost.scram_sha_256 and asking CockroachDB to recompute the hash.
    3. Disable the auto-conversion of crdb-bcrypt hashes to scram-sha-256 altogether, using the new cluster setting server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled. This approach is discouraged as it removes the other security protections offered by SCRAM authentication. The conversion also only happens if the target configured method via server.user_login.password_encryption is scram-sha-256, because the goal of the conversion is to move clusters towards using SCRAM. #74301
  • Added support for query cancellation via the pgwire protocol. Since this protocol is unauthenticated, there are a few precautions included.
    1. The protocol requires that a 64-bit key is used to uniquely identify a session. Some of these bits are used to identify the CockroachDB node that owns the session. The rest of the bits are all random. If the node ID is small enough, then only 12 bits are used for the ID, and the remaining 52 bits are random. Otherwise, 32 bits are used for both the ID and the random secret.
    2. A fixed per-node rate limit is used. There can only be at most 256 failed cancellation attempts per second. Any other cancel requests that exceed this rate are ignored. This makes it harder for an attacker to guess random cancellation keys. Specifically, if we assume a 32-bit secret and 256 concurrent sessions on a node, it would take 2^16 seconds (about 18 hours) for an attacker to be certain they have cancelled a query.
    3. No response is returned for a cancel request. This makes it impossible for an attacker to know if their guesses are working. Unsuccessful attempts are logged internally with warnings. Large numbers of these messages could indicate malicious activity. #67501
  • The cluster setting server.user_login.session_revival_token.enabled has been added. It is false by default. If set to true, then a new token-based authentication mechanism is enabled. A token can be generated using the crdb_internal.create_session_revival_token built in function. The token has a lifetime of 10 minutes and is cryptographically signed to prevent spoofing and brute forcing attempts. When initializing a session later, the token can be presented in a pgwire StartupMessage with a parameter name of crdb:session_revival_token_base64, with the value encoded in base64. If this parameter is present, all other authentication checks are disabled, and if the token is valid and has a valid signature, the user who originally generated the token authenticates into a new SQL session. If the token is not valid, then authentication fails. The token does not have use-once semantics, so the same token can be used any number of times to create multiple new SQL sessions within the 10 minute lifetime of the token. As such, the token should be treated as highly sensitive cryptographic information. This feature is meant to be used by multi-tenant deployments to move a SQL session from one node to another. It requires the presence of a valid Ed25519 keypair in tenant-signing.<tenant_id>.crt and tenant-signing.<tenant_id>.key. #75660
  • When the sql.telemetry.query_sampling.enabled cluster setting is enabled, SQL names and client IPs are no longer redacted in telemetry logs. #76676

General changes

  • The following metrics were added for observability of cancellation requests made using the PostgreSQL wire protocol:

    • sql.pgwire_cancel.total
    • sql.pgwire_cancel.ignored
    • sql.pgwire_cancel.successful

    The metrics are all counters. The ignored counter is incremented if a cancel request was ignored due to exceeding the per-node rate limit of cancel requests. #76457

  • Documentation was added describing how jobs and scheduled jobs function and how they are used in CockroachDB. #73995

Enterprise edition changes

  • Client certificates may now be provided for the webhook changefeed sink. #74645
  • CockroachDB now redacts more potentially sensitive URI elements from changefeed job descriptions. This is a breaking change for workflows that copy URIs. As an alternative, the unredacted URI may be accessed from the jobs table directly. #75174
  • Changefeeds now output the topic names created by the Kafka sink. These topic names are also displayed in the SHOW CHANGEFEED JOBS output. #75223
  • Backup and restore jobs now allow encryption/decryption with GCS KMS. #75750
  • Kafka sinks support larger messages, up to 2GB in size. #76265
  • Added support for a new SQL statement called ALTER CHANGEFEED, which allows users to add/drop targets for an existing changefeed. The syntax of the statement is: ALTER CHANGEFEED <job_id> {{ADD|DROP} <targets>}...

    There can be an arbitrary number of ADD or DROP commands in any order. For example:

    ALTER CHANGEFEED 123 ADD foo,bar DROP baz;
    

    With this statement, users can avoid going through the process of altering a changefeed on their own, and rely on CockroachDB to carry out this task. #75737

  • Changefeeds running on tables with a low gc.ttlseconds value now function more reliably due to protected timestamps being maintained for the changefeed targets at the resolved timestamp of the changefeed. The frequency at which the protected timestamp is updated to the resolved timestamp can be configured through the changefeed.protect_timestamp_interval cluster setting. If the changefeed lags too far behind such that storage of old data becomes an issue, cancelling the changefeed will release the protected timestamps and allow garbage collection to resume. If protect_data_from_gc_on_pause is unset, pausing the changefeed will release the existing protected timestamp record. #76605

  • Added support to the ALTER CHANGEFEED statement so that users can edit and unset the options of an existing changefeed. The syntax of this addition is the following:

    ALTER CHANGEFEED <job_id> SET <options> UNSET <opt_list>
    

    #76583

  • Users may now alter the sink URI of an existing changefeed. This can be achieved by executing ALTER CHANGEFEED <job_id> SET sink = '<sink_uri>' where the sink type of the new sink must match the sink type of the old sink that was chosen at the creation of the changefeed. #77043

SQL language changes

  • CHECK constraints on the shard column used by hash-sharded indexes are no longer printed in the corresponding SHOW CREATE TABLE. The constraint had been shown because CockroachDB lacked logic to ensure that shard columns which are part of hash-sharded indexes always have the check constraint which the optimizer relies on to achieve properly optimized plans on hash-sharded indexes. The constraint is now implied by the USING HASH clause on the relevant index. #74179
  • The experimental command SCRUB PHYSICAL is no longer implemented. #74761
  • The CREATE MATERIALIZED VIEW statement now supports WITH DATA. #74821
  • CockroachDB now has a crdb_internal.replication_stream_spec function for stream replication. #73886
  • CockroachDB has a new role option VIEWACTIVITYREDACTED, introduced in v21.2.5, that is similar to VIEWACTIVITY but restricts the use of statement diagnostics bundles. It is possible for a user to have both role options (VIEWACTIVITY and VIEWACTIVITYREDACTED), but VIEWACTIVITYREDACTED takes precedence. #74715
  • In v21.2.5 CockroachDB added support for the ON CONFLICT ON CONSTRAINT form of INSERT ON CONFLICT. This form is added for compatibility with PostgreSQL. It permits explicitly selecting an arbiter index for INSERT ON CONFLICT, rather than inferring one using a column list, which is the default behavior. #73460
  • Imports now check readability earlier for multiple files to fail more quickly if, for example, permissions are invalid. #74863
  • In v21.2.5 CockroachDB added new role options, NOSQLLOGIN and its inverse SQLLOGIN, which control a user's ability to log in via SQL while retaining their ability to log in to the DB Console (as opposed to NOLOGIN, which restricts both SQL and DB Console access). Without any role options, all login behavior is permitted. OIDC logins to the DB Console continue to be permitted with NOSQLLOGIN set. #74706
  • Added the default_table_access_method session variable, which only takes in heap, to match the behavior of PostgreSQL. #74774
  • The distributed plan diagram now lists scanned column names for TableReaders. #75114
  • Users can now specify the owner when creating a database, similar to PostgreSQL: CREATE DATABASE name [ [ WITH ] [ OWNER [=] user_name ] ] #74867
  • The CREATE ROLE and ALTER ROLE statements now accept password hashes computed using the scram-sha-256 method. For example: CREATE USER foo WITH PASSWORD 'SCRAM-SHA-256$4096:B5VaT...'. As for other types of pre-hashed passwords, this auto-detection can be disabled by changing the cluster setting server.user_login.store_client_pre_hashed_passwords.enabled to false. To ascertain whether a scram-sha-256 password hash will be recognized, orchestration code can use the built-in function crdb_internal.check_password_hash_format(). Follow these steps to encode the SCRAM-SHA-256 password:
    1. Get the cleartext password string.
    2. Generate a salt, iteration count, stored key and server key according to RFC 5802.
    3. Encode the hash into a format recognized by CockroachDB: the string SCRAM-SHA-256$, followed by the iteration count, followed by :, followed by the base64-encoded salt, followed by $, followed by the base-64 stored key, followed by :, followed by the base-64 server key. #74301
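
    For example (the role name is hypothetical, and the hash value is abbreviated as in the note above), orchestration code can check the encoding before applying it:

    SELECT crdb_internal.check_password_hash_format('SCRAM-SHA-256$4096:B5VaT...');
    ALTER USER foo WITH PASSWORD 'SCRAM-SHA-256$4096:B5VaT...';
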
  • The session variable password_encryption is now exposed to SQL clients. Note that SQL clients cannot modify its value directly, it is only configurable via a cluster setting. #74301
  • When possible, CockroachDB will now automatically require the PostgreSQL-compatible SCRAM-SHA-256 protocol when performing password validation during SQL client logins. This mechanism is not used when SQL clients use TLS client certs, which is the recommended approach. This assumes support for SCRAM-SHA-256 in client drivers. As of 2020, SCRAM-SHA-256 is prevalent in the PostgreSQL driver ecosystem. However, users should be mindful of the following possible behavior changes:
    • An application that tries to detect whether password verification has failed by checking server error messages, might observe different error messages with SCRAM-SHA-256. Those checks, if present, need to be updated.
    • If a client driver simply does not support SCRAM-SHA-256 at all, the operator retains the option to set the cluster setting server.user_login.cert_password_method.auto_scram_promotion.enable to false to force the previous password verification method instead. #74301
  • After a cluster upgrade, the first time a SQL client logs in using password authentication, the password will be converted to a new format (scram-sha-256) if it was encoded with crdb-bcrypt previously. This conversion will increase the latency of that initial login by a factor of ~2x, but it will be reduced again after the conversion completes. If login latency is a concern, operators should perform the password conversion ahead of time, by computing new SCRAM hashes for the clients via ALTER USER WITH PASSWORD or ALTER ROLE WITH PASSWORD. This conversion can also be disabled via the new cluster setting server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled. #74301
  • Statements are no longer formatted prior to being sent to the UI, but the new built-in function remains. #75443
  • The default SQL statistics flush interval is now 10 minutes. A new cluster setting sql.stats.aggregation.interval controls the aggregation interval of SQL stats, with a default value of 1 hour. #74831
  • SELECT, INSERT, DELETE, and UPDATE can no longer be granted or revoked on databases. Previously SELECT, INSERT, DELETE, and UPDATE would be converted to ALTER DEFAULT PRIVILEGES on GRANTs and were revocable. #72665
  • Added pgcodes to errors when an invalid storage parameter is passed. #75262
  • Implemented the ALTER TABLE ... SET (...) syntax. We do not support any storage parameters yet, so this statement does not change the schema. #75262
  • SHOW GRANTS ON TABLE now includes the is_grantable column. #75226
  • Implemented the ALTER TABLE ... RESET (...) syntax. This statement currently does not change the schema. #75429
  • S3 URIs used for BACKUP, EXPORT, or CHANGEFEED can now include the query parameter S3_STORAGE_CLASS to configure the storage class used when that job creates objects in the designated S3 bucket. #75588
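
    A hedged example (the bucket, path, and chosen storage class are hypothetical; AUTH=implicit assumes ambient AWS credentials):

    BACKUP DATABASE defaultdb
    INTO 's3://bucket/path?AUTH=implicit&S3_STORAGE_CLASS=INTELLIGENT_TIERING';
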
  • The cost based optimizer now modifies the query cost based on the avg_size table statistic, which may change query plans. This is controlled by the session variable cost_scans_with_default_col_size, and can be disabled by setting it to true: SET cost_scans_with_default_col_size=true. #74551
  • The crdb_internal.jobs table now has a new column execution_events which is a structured JSON form of execution_errors. #75556
  • The privileges reported in information_schema.schema_privileges for non-user-defined schemas no longer are inferred from the privileges on the parent database. Instead, virtual schemas (like pg_catalog and information_schema) always report the USAGE privilege for the public role. The pg_temp schema always reports USAGE and CREATE privileges for the public role. #75628
  • Transaction ID to transaction fingerprint ID mapping is now stored in the new transaction ID cache, a FIFO unordered in-memory buffer. The size of the buffer is 64 MB by default and configurable via the sql.contention.txn_id_cache.max_size cluster setting. Consequently, two additional metrics are introduced:
    • sql.contention.txn_id_cache.size: the current memory usage of transaction ID cache.
    • sql.contention.txn_id_cache.discarded_count: the number of resolved transaction IDs that are dropped due to memory constraints. #74115
  • Added new built-in functions called crdb_internal.revalidate_unique_constraint, crdb_internal.revalidate_unique_constraints_in_table, and crdb_internal.revalidate_unique_constraints_in_all_tables, which can be used to revalidate existing unique constraints. The different variations support validation of a single constraint, validation of all unique constraints in a table, and validation of all unique constraints in all tables in the current database, respectively. If any constraint fails validation, the functions will return an error with a hint about which data caused the constraint violation. These violations can then be resolved manually by updating or deleting the rows in violation. This will be useful to users who think they may have been affected by issue #73024. #75548
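
    For example (the table name is hypothetical), revalidating one table or the whole current database:

    SELECT crdb_internal.revalidate_unique_constraints_in_table('orders');
    SELECT crdb_internal.revalidate_unique_constraints_in_all_tables();
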
  • The SHOW GRANTS ON SCHEMA statement now includes the is_grantable column. #75722
  • CockroachDB now disallows type casts from ENUM to BYTES. #75816
  • EXPORT PARQUET has a new compression option whose value can be gzip or snappy. An example query:

    EXPORT INTO PARQUET 'nodelocal://0/compress_snappy' WITH compression = snappy FROM SELECT * FROM foo
    

    By default, the Parquet file will be uncompressed. With compression, the file name will be <filename>.parquet.gz or <filename>.parquet.snappy. #74661

  • Setting a UTC timezone offset of greater than 167 or less than -167 now returns an error. For example:

    SET TIME ZONE '168'
    

    Gives error:

    invalid value for parameter "timezone": "'168'": cannot find time zone "168": UTC timezone offset is out of range.
    
    SET TIME ZONE '-168'
    

    Gives error:

    invalid value for parameter "timezone": "'-168'": cannot find time zone "-168": UTC timezone offset is out of range.
    

    #75822

  • The RESET ALL statement was added, which resets the values of all session variables to their default values. #75804

  • The SHOW GRANTS ON DATABASE statement now includes the is_grantable column. #75854

  • Reordered unimplemented tables in pg_catalog and information_schema to match PostgreSQL. #75461

  • CockroachDB will now remove incompatible database privileges to be consistent with PostgreSQL. Existing SELECT, INSERT, UPDATE, and DELETE privileges on databases will be converted to the equivalent default privileges. #75562

  • CockroachDB now allows users who do not have ADMIN privileges to use SHOW RANGES if the ZONECONFIG privilege is granted to the user. #75551

  • The WITH (param=value) syntax is now allowed for primary key definitions, to align with PostgreSQL and to support WITH (bucket_count=...) syntax for hash-sharded indexes. #75971

  • CockroachDB now aliases the idle_session_timeout session variable with the idle_in_session_timeout variable to align with PostgreSQL. #76002

  • The SHOW GRANTS ON TYPE statement now includes the is_grantable column. #75957

  • The bucket_count storage parameter was added. To create hash-sharded indexes, you can use the new syntax: USING HASH WITH (bucket_count=xxx). The bucket_count storage parameter can only be used with USING HASH. The old WITH BUCKET_COUNT=xxx syntax is still supported for backward compatibility. However, you can only use the old or new syntax, but not both. An error is returned for mixed clauses: USING HASH WITH BUCKET_COUNT=5 WITH (bucket_count=5). #76068
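
    A minimal sketch of the new syntax (the table and index names are hypothetical):

    CREATE TABLE events (
        id UUID DEFAULT gen_random_uuid(),
        ts TIMESTAMP NOT NULL,
        PRIMARY KEY (id),
        INDEX idx_ts (ts) USING HASH WITH (bucket_count = 8)
    );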

  • The bulkio.backup.merge_file_buffer_size cluster setting default value has been changed from 16MiB to 128MiB. This value determines the maximum byte size of SSTs that we buffer before forcing a flush during a backup. #75988

  • CockroachDB now supports the bucket_count storage parameter syntax, which should be used instead of the old WITH BUCKET_COUNT=xxx syntax. With this change, CockroachDB outputs the new syntax in SHOW CREATE statements. #76112

  • CockroachDB now saves statement plan hashes or gists to the Statements persisted stats inside the Statistics column. #75762

  • PostgreSQL error codes were added to the majority of spatial functions. #76129

  • Performing a BACKUP on ranges containing extremely large numbers of revisions to a single row no longer fails with errors related to exceeding the size limit. #76254

  • The default bucket count for hash-sharded indexes is 16. #76115

  • CockroachDB now filters out internal statements and transactions from UI timeseries metrics. #75815

  • EXPORT PARQUET now supports all data types that Avro changefeeds support. Below are the data type conversions from CockroachDB to Parquet. To maintain backward compatibility with older Parquet readers, Parquet converted types were also annotated. To learn more about Parquet data representation, see the Parquet docs.

    CockroachDB Type Family        | Parquet Type                       | Parquet Logical Type               | Parquet Converted Type
    Bool                           | boolean                            | nil                                | nil
    String                         | byte array                         | string                             | string
    Collated String                | byte array                         | string                             | string
    INet                           | byte array                         | string                             | string
    JSON                           | byte array                         | json                               | json
    Int (oid.T_int8)               | int64                              | int64                              | int64
    Int (oid.T_int4 or oid.T_int2) | int32                              | int32                              | int32
    Float                          | float64                            | nil                                | nil
    Decimal                        | byte array                         | decimal (scale and precision are preserved in the Parquet file) | decimal
    Uuid                           | fixed-length byte array (16 bytes) | uuid                               | no converted type
    Bytes                          | byte array                         | nil                                | nil
    Bit                            | byte array                         | nil                                | nil
    Enum                           | byte array                         | Enum                               | Enum
    Box2d                          | byte array                         | string                             | string
    Geography                      | byte array                         | nil                                | nil
    Geometry                       | byte array                         | nil                                | nil
    Date                           | byte array                         | string                             | string
    Time                           | int64                              | time (microseconds since midnight) | time
    TimeTz                         | byte array                         | string                             | string
    Interval                       | byte array                         | string (represented as ISO8601)    | string
    Timestamp                      | byte array                         | string                             | string
    TimestampTz                    | byte array                         | string                             | string
    Array                          | repeated field; each array value is encoded per the pattern above | List | List

    #75890

  • SHOW CREATE TABLE no longer shows the FAMILY clause if there is only the PRIMARY family clause. #76285

  • CockroachDB now records the approximate time when an index was created. This information is exposed via a new nullable TIMESTAMP column, created_at, on crdb_internal.table_indexes. #75753

  • Added support for query cancellation via the pgwire protocol. CockroachDB will now respond to a pgwire cancellation by forwarding the request to the node that is running a particular query. That node will then cancel the query that is currently running in the session identified by the cancel request. The cancel request is made through the pgwire protocol when initializing a new connection. The client must first send 32 bits containing the integer 80877102, followed immediately by the 64-bit BackendKeyData message that the server sent to the client when the session was started. Most PostgreSQL drivers handle this protocol already, so there's nothing for the end-user to do apart from calling the cancel function that their driver offers. See the PostgreSQL docs for more information. #67501

  • The incremental_storage option for BACKUP, SHOW BACKUP, and RESTORE has been renamed to incremental_location. #76416

  • Restored data now appears to have been written at the time it was restored, rather than the time at which it was backed up, when reading the lower-level write timestamps from the rows themselves. This affects various internal operations and the result of crdb_internal_mvcc_timestamp. #76271

  • The built-in functions crdb_internal.force_panic, crdb_internal.force_log_fatal, crdb_internal.set_vmodule, and crdb_internal.get_vmodule are now available to all admin users, not just root. #76518

  • BACKUP of a table marked with exclude_data_from_backup via ALTER TABLE ... SET (exclude_data_from_backup = true) will no longer back up that table's row data. The backup will continue to back up the table's descriptor and related metadata, so a restore will produce an empty version of the backed-up table. #75451

  • Failed DROP INDEX schema changes are no longer rolled back. Rolling back a failed DROP INDEX requires the index to be rebuilt, a potentially long-running, expensive operation. Further, in previous versions, such rollbacks were already incomplete as they failed to roll back cascaded drops for dependent views and foreign key constraints. #75727

  • Fixed a bug where setting sql.contention.txn_id_cache.max_size to 0 effectively turned off the transaction ID cache. #76523

  • CockroachDB now allows users to add NEW_KMS encryption keys to existing backups using:

    ALTER BACKUP <backup_location> ADD NEW_KMS = <kms_uris> WITH OLD_KMS = <kms_uris>;
    ALTER BACKUP <subdir> IN <backup_collection> ADD NEW_KMS = <kms_uris> WITH OLD_KMS = <kms_uris>;

    The OLD_KMS value must refer to at least one KMS URI that was previously used to encrypt the backup. Following successful completion of the ALTER BACKUP, subsequent BACKUP, RESTORE, and SHOW BACKUP commands can use any of the old or new KMS URIs to decrypt the backup. #75900

  • Primary key columns which are not part of a unique secondary index (but are "implicitly" included because all indexes include all primary key columns) are now marked as storing in the information_schema.statistics table and in SHOW INDEX. This is technically more correct; the column is in the value in KV and not in the indexed key. #72670

  • A special flavor of RESTORE, RESTORE SYSTEM USERS FROM ..., was added to support restoring system users from a backup. When executed, the statement recreates the users that exist in a backup of system.users but do not currently exist (ignoring those that do), and re-grants roles for those users if the backup contains system.role_members. #71542
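    For example, assuming a placeholder backup location:

    ```sql
    RESTORE SYSTEM USERS FROM 's3://bucket/backups?AUTH=implicit';
    ```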

  • Added support for DECLARE, FETCH, and CLOSE commands for creating, using, and deleting SQL cursors. #74006

  • SQL cursors now appear in pg_catalog.pg_cursors. #74006
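    A minimal cursor round trip might look like the following (the table is hypothetical; cursors live inside a transaction):

    ```sql
    BEGIN;
    DECLARE c CURSOR FOR SELECT id FROM accounts ORDER BY id;
    FETCH 2 FROM c;                          -- fetch the first two rows
    SELECT name FROM pg_catalog.pg_cursors;  -- shows the open cursor 'c'
    CLOSE c;
    COMMIT;
    ```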

  • CockroachDB now supports hash-sharded indexes in implicitly partitioned tables. Previously, CockroachDB blocked users from creating hash-sharded indexes in all kinds of partitioned tables, including implicitly partitioned tables that use PARTITION ALL BY or REGIONAL BY ROW. Primary keys still cannot be hash-sharded if the table is explicitly partitioned with PARTITION BY, and an index cannot be hash-sharded if the index is explicitly partitioned with PARTITION BY. Partitioning columns cannot be used as key columns of a hash-sharded index, including a regional-by-row table's crdb_region column. #76358

  • When a hash-sharded index is partitioned, ranges are now pre-split within every possible partition on shard boundaries. Each partition is pre-split into at most 16 ranges; if the index's bucket count is lower, it is split into that many ranges instead. Note that only list partitions are pre-split; CockroachDB does not pre-split range partitions. #76358
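    For example, the following is now allowed, assuming a multi-region database (the schema is illustrative):

    ```sql
    CREATE TABLE orders (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        ts TIMESTAMPTZ NOT NULL,
        INDEX (ts) USING HASH WITH (bucket_count = 8)
    ) LOCALITY REGIONAL BY ROW;
    ```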

  • Added new user privileges VIEWCLUSTERSETTING and NOVIEWCLUSTERSETTING, which control whether a user can view cluster settings (without being able to modify them). #76012
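    As a sketch (the role name is hypothetical), granting view-only access to cluster settings:

    ```sql
    ALTER ROLE analyst WITH VIEWCLUSTERSETTING;
    -- analyst can now run SHOW CLUSTER SETTINGS, but cannot change settings.
    ```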

  • Several error cases in geospatial and other built-in functions now return more appropriate error codes. #76458

  • Expression indexes can no longer have duplicate expressions. #76863

  • The crdb_internal.serialize_session and crdb_internal.deserialize_session functions now handle prepared statements. When deserializing, any prepared statements that existed when the session was serialized are re-prepared. If the current session already has a prepared statement with the same name, deserialization throws an error. #76399

  • The experimental_enable_hash_sharded_indexes session variable was removed, along with the corresponding cluster setting. The ability to create hash-sharded indexes is enabled automatically. SQL statements that refer to the setting will still work but will have no effect. #76937

  • Added the session variable default_transaction_quality_of_service which controls the priority of work submitted to the different admission control queues on behalf of SQL requests submitted in a session. Admission control must be enabled for this setting to have an effect. To increase admission control priority of subsequent SQL requests:

    SET default_transaction_quality_of_service=critical;
    

    To decrease admission control priority of subsequent SQL requests:

    SET default_transaction_quality_of_service=background;
    

    To reset admission control priority to the default session setting (in between background and critical):

    SET default_transaction_quality_of_service=regular;
    

    #76512

  • CockroachDB now limits the bucket count of hash-sharded indexes to the inclusive range [2, 2048]. Previously, the only requirement was that the bucket count be a positive 32-bit integer greater than 1. #77004
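    For example (table and column are illustrative); a bucket count outside [2, 2048] is now rejected:

    ```sql
    CREATE INDEX ON events (user_id) USING HASH WITH (bucket_count = 16);
    ```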

  • Added support for distributed import queries in multi-tenant environments, which allows import queries to have improved parallelism by utilizing all available SQL pods in the tenant. #76566

  • The ST_Box2DFromGeoHash function now accepts NULL arguments. If the precision is NULL, it is equivalent to no precision being passed in. Upper-case characters are now parsed as lower-case characters for geohash, matching PostGIS behavior. #76990

  • CockroachDB now supports the SHOW COMPLETIONS AT OFFSET <offset> FOR <stmt> syntax that returns a set of SQL keywords that can complete the keyword at <offset> in the given <stmt>. If the offset is in the middle of a word, then it returns the full word. For example SHOW COMPLETIONS AT OFFSET 1 FOR "SELECT" returns select. #72925

  • A new row-level TTL feature was added to CockroachDB, available in beta. This allows users to use a special syntax to automatically mark rows for deletion. Rows are deleted using a SCHEDULED JOB.

    A user can create a table with TTL using:

    CREATE TABLE t (id INT PRIMARY KEY) WITH (ttl_expire_after = '10 mins')
    

    Where ttl_expire_after is a duration expression. A user can also add TTL to an existing table using:

    ALTER TABLE t SET (ttl_expire_after = '10 mins')
    

    This creates a new column, crdb_internal_expiration, which is automatically set to now() + ttl_expire_after on insert (by default) and on update. The scheduled job deletes any rows whose expiration timestamp has passed as of the beginning of the job run. The TTL job is configurable in a few ways using the WITH/SET syntax:

    • ttl_select_batch_size: how many rows to select at once (default is cluster setting sql.ttl.default_select_batch_size)
    • ttl_delete_batch_size: how many rows to delete at once (default is cluster setting sql.ttl.default_delete_batch_size)
    • ttl_delete_rate_limit: maximum rows to delete per second for the given table (default is cluster setting sql.ttl.default_delete_rate_limit)
    • ttl_pause: pauses the TTL job (also globally pausable with sql.ttl.job.enabled).

    Using ALTER TABLE table_name RESET (<parameter>) resets a parameter to its default, and RESET (ttl) disables the TTL job for the table and removes the crdb_internal_expiration column. #76918
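    As a sketch, tuning and later removing TTL on the table above (values are illustrative):

    ```sql
    ALTER TABLE t SET (ttl_delete_batch_size = 100, ttl_delete_rate_limit = 500);
    ALTER TABLE t RESET (ttl_delete_rate_limit);  -- back to the cluster default
    ALTER TABLE t RESET (ttl);                    -- remove TTL and the hidden column
    ```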

  • Added the cluster setting sql.contention.event_store.capacity. This cluster setting can be used to control the in-memory capacity of the contention event store. When this setting is set to zero, the contention event store is disabled. #76719
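    For example (the size value is illustrative):

    ```sql
    SET CLUSTER SETTING sql.contention.event_store.capacity = '8MiB';
    -- Setting it to zero disables the contention event store.
    SET CLUSTER SETTING sql.contention.event_store.capacity = '0B';
    ```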

  • When dropping a user that has default privileges, the error message now includes the database and schema in which the default privileges are defined. Additionally, a hint shows exactly how to remove the default privileges. For example:

    pq: role testuser4 cannot be dropped because some objects depend on it
    owner of default privileges on new sequences belonging to role testuser4 in database testdb2 in schema s
    privileges for default privileges on new sequences belonging to role testuser3 in database testdb2 in schema s
    privileges for default privileges on new sequences for all roles in database testdb2 in schema public
    privileges for default privileges on new sequences for all roles in database testdb2 in schema s
    HINT: USE testdb2; ALTER DEFAULT PRIVILEGES FOR ROLE testuser4 IN SCHEMA S REVOKE ALL ON SEQUENCES FROM testuser3;
    USE testdb2; ALTER DEFAULT PRIVILEGES FOR ROLE testuser3 IN SCHEMA S REVOKE ALL ON SEQUENCES FROM testuser4;
    USE testdb2; ALTER DEFAULT PRIVILEGES FOR ALL ROLES IN SCHEMA PUBLIC REVOKE ALL ON SEQUENCES FROM testuser4;
    USE testdb2; ALTER DEFAULT PRIVILEGES FOR ALL ROLES IN SCHEMA S REVOKE ALL ON SEQUENCES FROM testuser4;
    

    #77016

  • Added support for distributed backups in multi-tenant environments, which allows backups to use all available SQL pods in the tenant. #77023

Operational changes

  • Sending a SIGUSR2 signal to a CockroachDB process, including one running a client command, now causes it to open an HTTP port that serves the basic Go performance-inspection endpoints for use with pprof. #75678
  • Operators who wish to access HTTP endpoints of the cluster through a proxy can now request specific nodeIDs through a remote_node_id query parameter or cookie with the value set to the nodeID to which they would like to proxy the connection. #72659
  • Added the admission.epoch_lifo.enabled cluster setting, disabled by default, which enables the use of epoch-LIFO adaptive queueing behavior in admission control. #71882
  • Added the cluster setting bulkio.backup.resolve_destination_in_job.enabled which can be used to delay resolution of backup's destination until the job starts running. #76670
  • A server.max_connections cluster setting was added to limit the maximum number of connections to a server. It is disabled by default. #76401
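  For example (the limit is illustrative):

  ```sql
  SET CLUSTER SETTING server.max_connections = 500;
  ```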
  • BACKUP now resolves incremental backup destinations during the job's execution phase rather than while it is being created to reduce contention on the system.jobs table. The bulkio.backup.resolve_destination_in_job.enabled cluster setting that enabled this functionality in some v21.2 patch releases was removed. #76853
  • Added the cluster setting kv.raft_log.loosely_coupled_truncation.enabled which can be used to disable loosely coupled truncation. #76215
  • RESTORE now runs at a higher parallelism by default to improve performance. #76907
  • Added the admission.epoch_lifo.epoch_duration, admission.epoch_lifo.epoch_closing_delta_duration, admission.epoch_lifo.queue_delay_threshold_to_switch_to_lifo cluster settings for configuring epoch-LIFO queueing in admission control. #76951

Command-line changes

  • Fixed the CLI help text for ALTER DATABASE to show correct options for ADD REGION and DROP REGION, and include some missing options such as CONFIGURE ZONE. #74929
  • If the range lease transfer phase of a graceful drain encounters issues, verbose logging is now automatically enabled to help with troubleshooting. #68488
  • All cockroach commands now log their stack but do not exit when sent a SIGQUIT signal. This behavior is consistent with the behavior of cockroach start. #75678
  • The debug zip utility now also scrapes the cluster-wide KV replication reports in the output. #75239
  • The flag --self of the cockroach node decommission command is deprecated. Instead, operators should specify the node ID of the target node as an explicit argument. The node that the command is connected to should not be a target node. #74319
  • Added a new optional version argument to the doctor examine command. This can be used to enable or disable validation when examining older ZIP directories. #76166
  • The debug zip CLI command now supports exporting system and crdb_internal tables to a ZIP folder for tenants. #75572
  • Added instructions to an error message when initializing debug tsdump. #75880
  • cockroach sql (and demo) now continue to accept user input when Ctrl+C is pressed at the interactive prompt and the current input line is empty. Previously, it would terminate the shell. To terminate the shell, the client-side command \q is still supported. The user can also terminate the input altogether via EOF (Ctrl+D). The behavior for non-interactive use remains unchanged. #76427
  • The interactive SQL shell (cockroach sql, cockroach demo) now supports interrupting a currently running query with Ctrl+C, without losing access to the shell. #76437
  • Added a new CLI flag --max-tsdb-memory used to set the memory budget for timeseries queries when processing requests from the Metrics page in the DB Console. Most users should not need to change this setting as the default of 1% of system memory or 64 MiB, whichever is greater, is adequate for most deployments. In cases where a deployment of hundreds of nodes has low per-node memory available (for example, below 8 GiB) it may be necessary to increase this value to 2% or higher in order to render time series graphs for the cluster using the DB Console. Otherwise, use the default settings. #74662
  • Node drains now ensure that SQL statistics are not lost during the process; they are instead preserved in the statement statistics system table. #76397
  • The CLI now auto completes on tab by using SHOW COMPLETIONS AT OFFSET. #72925

API endpoint changes

  • The /_status/load endpoint, which delivers an instant measurement of CPU load, is now available for regular CockroachDB nodes and not just multitenant SQL-only servers. #75852
  • The StatusClient interface has been extended with a new request called NodesListRequest. This request returns a list of KV nodes for KV servers and SQL nodes for SQL only servers with their corresponding SQL and RPC addresses. #75572
  • Users with the VIEWACTIVITYREDACTED role will not have access to the full queries with constants in the ListSessions response. #76675

DB Console changes

  • Removed $ internal as one of the apps options under the Statements and Transactions page filters. #75470
  • Removed formatting of statements on the Statements, Transactions, and Index details pages. #75443
  • Changed the order of tabs under the SQL Activity page to be Statements, Transactions, and Sessions. #75490
  • The logical plan text is now included in searchable text in the Statements page. #75097
  • If the user has the role VIEWACTIVITYREDACTED, we now hide the Statement Diagnostics bundle info on Statements page (diagnostics column), Statement Details page (diagnostics tab) and Advanced Debug page (diagnostics history). #75274
  • Loading and error pages are now below page config on the Transactions and Statements pages. This was introduced in CockroachDB v21.2.5. #75458
  • Added Circuit Breaker graphs on the Replication Dashboard in the DB Console. This was introduced in CockroachDB v21.2.5. #75613
  • Added an option to cancel a running request for statement diagnostics. #75733
  • DB Console requests can now be routed to arbitrary nodes in the cluster. Users can select a node from a dropdown in the Advanced Debug page of the DB Console to route their UI to that node. Manually initiated requests can either add a remote_node_id query parameter to their request or set a remote_node_id HTTP cookie in order to manage the routing of their request. #72659
  • We no longer show information about aggregation timestamps on the Statements and Statement Details pages, since now all the statement fingerprints are grouped inside the same time selection. #76301
  • Added the status of automatic statistics collection to the Database and Database table pages in the DB Console.
  • Added the timestamp of the last statistics collection to the Database details and Database table pages in the DB Console. #76168
  • Open SQL Transactions and Active SQL Transactions are now downsampled using MAX instead of AVG and will more accurately reflect narrow spikes in transaction counts when looking at downsampled data. #76348
  • Circuit breakers are now displayed in the Problem Ranges report and range status pages. #75809
  • A Now button was added to the Statements and Transactions pages. The Reset time link was replaced by the Now button. #76691
  • Changed invalid lease to expired lease on the Problem Ranges section of the Advanced Debug page. #76757
  • Added column selector, filters, and new columns to the Sessions and Sessions Details pages. #75965
  • Added long loading messages to the SQL Activity pages. #76739

Bug fixes

  • Fixed possible panics in some distributed queries using ENUMs in join predicates. #74659
  • Fixed a bug that could previously cause redundant lease transfers. #74726
  • Fixed a bug where deleting data in schema changes (for example, when dropping an index or table) could fail with a command too large error. #74674
  • Fixed a bug where CockroachDB could encounter an internal error when performing UPSERT or INSERT ... ON CONFLICT queries in some cases when the new rows contained NULL values (either NULLs explicitly specified or NULLs used since some columns were omitted). #74825
  • Fixed a bug where the scale of a DECIMAL column was not enforced when values specified in scientific notation (for example, 6e3) were inserted into the column. #74869
  • Fixed a bug where certain malformed backup schedule expressions caused the node to crash. #74881
  • Fixed a bug where a RESTORE job could hang if it encountered an error when ingesting restored data. #74905
  • Fixed a bug which caused errors in rare cases when trying to divide INTERVAL values by INT4 or INT2 values. #74882
  • Fixed a bug that could occur when a TIMETZ column was indexed, and a query predicate constrained that column using a < or > operator with a TIMETZ constant. If the column contained values with time zones that did not match the time zone of the TIMETZ constant, it was possible that not all matching values could be returned by the query. Specifically, the results may not have included values within one microsecond of the predicate's absolute time. This bug was introduced when the TIMETZ datatype was first added in v20.1. It exists in all versions of v20.1, v20.2, v21.1, and v21.2 prior to this patch. #74914
  • Fixed an internal error, estimated row count must be non-zero, that could occur during planning for queries over a table with a TIMETZ column. This error was due to a faulty assumption in the statistics estimation code about ordering of TIMETZ values, which has now been fixed. The error could occur when TIMETZ values used in the query had a different time zone offset than the TIMETZ values stored in the table. #74914
  • The --user argument is no longer ignored when using cockroach sql in --insecure mode. #75194
  • Fixed a bug where CockroachDB could incorrectly report the KV bytes read statistic in EXPLAIN ANALYZE output. The bug is present only in v21.2 versions. #75175
  • Fixed a bug that caused internal errors in queries with set operations, like UNION, when corresponding columns on either side of the set operation were not the same. This error only occurred with a limited set of types. This bug is present in v20.2.6+, v21.1.0+, and v21.2.0+. #75219
  • Fixed a bug where CREATE INDEX statements using expressions failed in some cases if they encountered an internal retry. #75056
  • Fixed a bug when creating hash-sharded indexes on existing tables, where traffic would hit primarily the single range of the index before it was split into more ranges for shards as the range size grew. Schema changes can now pre-split ranges on shard boundaries before the index becomes writable. Added the sql.hash_sharded_range_pre_split.max cluster setting, which sets an upper bound on the number of pre-split ranges; if the bucket count of the defined index is less than this setting, the bucket count determines the number of pre-split ranges. #74923
  • Updated the String() function of roleOption to add a space on the role VALID UNTIL. #75271
  • Fixed a bug where SQL Activity pages crashed when a column was sorted a third time. #75473
  • Fixed a bug where if multiple columns were added to a table inside a transaction, then none of the columns would be backfilled if the last column did not require a backfill. #75076
  • Fixed a bug where, in some cases, queries involving a scan that returned many results and included lookups of individual keys did not return all results from the table. #75475
  • Fixed a bug where dropping and creating a primary index constraint with the same name in a transaction would incorrectly fail. #75155
  • crdb_internal.deserialize_session now checks if the session_user has the privilege to SET ROLE to the current_user before changing the session settings. #75575
  • Dedicated clusters can now restore tables and databases from backups made by tenants. #73647
  • Fixed a bug that caused high SQL tail latencies during background rebalancing in the cluster. #73697
  • Fixed a bug when tables or columns were dropped that contained sequences, where the sequence remained even when the owner table or column did not exist. A sequence is created when a column is defined as a SERIAL type and the serial_normalization session variable is set to sql_sequence. In this case, the sequence is owned by the column and the table where the column exists. The sequence should be dropped when the owner table or column is dropped, which is the PostgreSQL behavior. CockroachDB now assigns correct ownership information to the sequence descriptor and column descriptor so that CockroachDB aligns with PostgreSQL. #74840
  • Fixed a bug where the options query parameter was removed when using the \c command in the SQL shell to reconnect to the cluster. #75673
  • cockroach node decommission no longer causes query failures due to the decommissioning node not closing open SQL connections while still being marked as ready. The decommissioning process now includes a draining step that fixes this; in other words, a decommission now automatically drains the node, so running a separate drain after a decommission is no longer necessary. Running cockroach node drain before cockroach node decommission remains optional but recommended, to avoid the possibility of a disturbance in query performance. #74319
  • The CancelSession endpoint now correctly propagates gateway metadata when forwarding requests. #75814
  • Fixed a bug which could cause nodes to crash when truncating abnormally large Raft logs. #75793
  • Fixed a bug that caused incorrect values to be written to computed columns when their expressions were of the form j->x = y, where j is a JSON column and x and y are constants. This bug also caused corruption of partial indexes with WHERE clauses containing expressions of the same form. This bug was present since version v2.0. #75914
  • Changefeeds now retry instead of failing on RPC send failures. #75517
  • Fixed a rare race condition that could lead to client-visible errors like found ABORTED record for implicitly committed transaction. These errors were harmless in that they did not indicate data corruption, but they could be disruptive to clients. #75601
  • Fixed a bug where swapping primary keys could lead to scenarios where foreign key references could lose their uniqueness. #75820
  • Fixed a bug where CASE expressions with branches that result in types that cannot be cast to a common type caused internal errors. They now result in a user-facing error. #76193
  • Fixed a bug that caused internal errors when querying tables with virtual columns in the primary key. This bug was only present since version v22.1.0-alpha.1 and does not appear in any production releases. #75898
  • The DB console Databases page now shows stable, consistent values for database sizes. #76315
  • Fixed a bug where comments were not cleaned up when the table primary keys were swapped, which could cause SHOW TABLE to fail. #76277
  • Fixed a bug where some of the cockroach node subcommands did not handle --timeout properly. #76427
  • Fixed a bug which caused the optimizer to omit join filters in rare cases when reordering joins, which could result in incorrect query results. This bug was present since v20.2. #76334
  • Fixed a bug where the list of recently decommissioned nodes and the historical list of decommissioned nodes displayed decommissioned nodes incorrectly. #76538
  • Fixed a bug where CockroachDB could incorrectly omit a row from a table with multiple column families when that row contained a NULL value and the PRIMARY KEY included a composite type (FLOAT, DECIMAL, COLLATED STRING, or an array of these types). #76563
  • There is now a 1 hour timeout when sending Raft snapshots, to avoid stalled snapshot transfers preventing Raft log truncation and growing the Raft log very large. This is configurable via the COCKROACH_RAFT_SEND_SNAPSHOT_TIMEOUT environment variable. #76589
  • Fixed an error that could sometimes occur when sorting the output of the SHOW CREATE ALL TABLES statement. #76639
  • Fixed a bug where backups incorrectly backed up database, schema, and type descriptors that were in a DROP state at the time the backup was run. This bug resulted in the user being unable to backup and restore if their cluster had dropped and public descriptors with colliding names. #76635
  • Fixed a race condition that in rare circumstances could cause a node to panic with unexpected Stopped processor during shutdown. #76825
  • Fixed a bug where the different stages of preparing, binding, and executing a prepared statement would use different implicit transactions. Now these stages all share the same implicit transaction. #76792
  • Concurrent profiles can now be run up to a concurrency limit of two, removing the profile id not found errors previously seen when running up to two profiles concurrently. When a profile is not found, the error message now suggests remediation steps to unblock the user. #76266
  • The content type header for the HTTP log sink is now set to application/json if the format of the log output is JSON. #77014
  • Fixed a bug that could corrupt indexes containing virtual columns or expressions. The bug only occurred when the index's table had a foreign key reference to another table with an ON DELETE CASCADE action, and a row was deleted in the referenced table. This bug was present since virtual columns were added in version v21.1.0. #77052
  • Fixed a bug where CockroachDB could crash when running a SQL PREPARE using the PostgreSQL extended protocol. #77063
  • Fixed a bug where running SQL-level EXECUTE using the PostgreSQL extended protocol had inconsistent behavior and could in some cases crash the server. #77063
  • The crdb_internal.node_inflight_trace_spans virtual table will now present traces for all operations ongoing on the respective node. Previously, the table would reflect a small percentage of ongoing operations unless tracing was explicitly enabled. #76403
  • The default value of kv.rangefeed_concurrent_catchup_iterators was lowered to 16 to help avoid overload during CHANGEFEED restarts. #75851

Performance improvements

  • The memory representation of DECIMAL datums has been optimized to save space, avoid heap allocations, and eliminate indirection. This increases the speed of DECIMAL arithmetic and aggregation by up to 20% on large data sets. #74590
  • RESTORE operations in Serverless clusters now explicitly ask the host cluster to distribute data more evenly. #75105
  • IMPORT, CREATE INDEX, and other bulk ingestion jobs run on Serverless clusters now collaborate with the host cluster to spread ingested data more evenly during ingest. #75105
  • The covar_pop aggregate function is now evaluated more efficiently in a distributed setting. #73062
  • Queries using NOT expr syntax can now be evaluated faster in some cases. #75058
  • The regr_sxx, regr_sxy, regr_syy aggregate functions are now evaluated more efficiently in a distributed setting. #75619
  • Transaction read refresh operations performed during optimistic concurrency control's validation phase now use a time-bound file filter when scanning the LSM tree. This allows these operations to avoid scanning files that contain no keys written since the transaction originally performed its reads. #74628
  • Fixed a set of bugs that rendered Queries-Per-Second (QPS) based lease and replica rebalancing in v21.2 and earlier ineffective under heterogeneously loaded cluster localities. Additionally, fixed a limitation that prevented CockroachDB from effectively alleviating extreme QPS hotspots from nodes. #72296
  • The optimizer better optimizes queries that include both foreign key joins and self-joins. #75582
  • A LIMIT can now be pushed below a foreign key join or self-join in more cases, which may result in more efficient query plans. #75582
  • The performance of many DECIMAL arithmetic operators has been improved by as much as 60%. These operators include division (/), sqrt, cbrt, exp, ln, log, and pow. #75770
  • Stores will retry requests that are directed at the incorrect range, most commonly following a recent range split. This patch has the effect of reducing tail latency following range splits. #75446
  • The optimizer can now generate lookup joins in certain cases for non-covering indexes, when performing a left outer/semi/anti join. #58261
  • The optimizer now plans inner lookup joins using expression indexes in more cases, resulting in more efficient query plans. #76078
  • Certain forms of automatically retried read uncertainty errors are now retried more efficiently, avoiding a network round trip. #75905
  • The regr_avgx, regr_avgy, regr_intercept, regr_r2, and regr_slope aggregate functions are now evaluated more efficiently in a distributed setting. #76007
  • IMPORTs and index backfills should now do a better job of spreading their load out over the nodes in the cluster. #75894
  • Fixed a bug in the histogram estimation code that could cause the optimizer to think a scan of a multi-column index would produce 0 rows, when in fact it would produce many rows. This could cause the optimizer to choose a suboptimal plan. It is now less likely for the optimizer to choose a suboptimal plan when multiple multi-column indexes are available. #76486
  • Added the kv.replica_stats.addsst_request_size_factor cluster setting. This setting is used to tune Queries-Per-Second (QPS) sensitivity to large imports. By default, this setting is disabled. When enabled, the size of any AddSSTableRequest will contribute to QPS in inverse relation to this setting's magnitude. By default this setting is configured to a conservative 50,000; every 50 kilobytes will be accounted for as an additional 1 QPS. #76252
  • Queries with a LIMIT clause applied against a single table, either explicitly written, or implicit such as in an uncorrelated EXISTS subquery, now scan that table with improved latency if the table is defined with LOCALITY REGIONAL BY ROW and the number of qualified rows residing in the local region is less than or equal to the hard limit (sum of the LIMIT clause and optional OFFSET clause values). This optimization is only applied if the hard limit is 100000 or less. #75431
  • Fixed a limitation where upon adding a new node to the cluster, lease counts among existing nodes could diverge until the new node was fully up-replicated. #74077
  • The optimizer now attempts to plan lookup joins on indexes that include computed columns in more cases, which may improve query plans. #76817
  • The optimizer produces more efficient query plans for INSERT ... ON CONFLICT statements that do not have explicit conflict columns or constraints and are performed on partitioned tables. #76961
  • The corr, covar_samp, sqrdiff, and regr_count aggregate functions are now evaluated more efficiently in a distributed setting. #76754
  • The jobs scheduler now runs on a single node by default in order to reduce contention on the scheduled jobs table. #73319

Build changes

Contributors

This release includes 866 merged PRs by 89 authors. We would like to thank the following contributors from the CockroachDB community:

  • Max Neverov
  • RajivTS (first-time contributor)
  • Ulf Adams
  • e-mbrown
  • llllash (first-time contributor)
  • shralex

v22.1.0-alpha.1

Release Date: January 24, 2022

Downloads

Warning:
The CockroachDB executable for Windows is experimental and not suitable for production deployments. Windows 8 or higher is required.

Warning:

Testing releases are intended for testing and experimentation only. Testing releases are not recommended for production use, as they can lead to data corruption, cluster unavailability, performance issues, etc.

Docker image

icon/buttons/copy
$ docker pull cockroachdb/cockroach-unstable:v22.1.0-alpha.1

Backward-incompatible changes

  • Using SESSION_USER in a projection or WHERE clause now returns the SESSION_USER instead of the CURRENT_USER. For backward compatibility, use session_user() for SESSION_USER and current_user() for CURRENT_USER. #70444
  • Placeholder values (e.g., $1) can no longer be used for role names in ALTER ROLE statements or for role names in CREATE ROLE/DROP ROLE statements. #71498

Security updates

  • Authenticated HTTP requests to nodes can now contain additional cookies with the same name as the one CockroachDB uses ("session"). The HTTP spec permits duplicates, and CockroachDB now attempts to parse all cookies with a matching name before giving up. This can resolve issues with running other services on the same domain as your CockroachDB nodes. #70792
  • Added a new flag --external-io-enable-non-admin-implicit-access that can remove the admin-only restriction on interacting with arbitrary network endpoints and using implicit auth in operations such as BACKUP, IMPORT, or EXPORT. #71594
  • When configuring passwords for SQL users, if the client presents the password in cleartext via ALTER/CREATE USER/ROLE WITH PASSWORD, CockroachDB is responsible for hashing this password before storing it. By default, this hashing uses CockroachDB's bespoke crdb-bcrypt algorithm, which is based on the standard bcrypt algorithm. The cost of this hashing function is now configurable via the new cluster setting server.user_login.password_hashes.default_cost.crdb_bcrypt. Its default value is 10, which corresponds to an approximate password check latency of 50-100ms on modern hardware. This value should be increased over time to reflect improvements to CPU performance: the latency should not become so small that it becomes feasible to brute-force passwords via repeated login attempts. Future versions of CockroachDB will likely update the default accordingly. #74582

General changes

  • Non-cancelable jobs now do not fail unless they fail with a permanent error. They retry with exponential backoff if they fail due to a transient error. Furthermore, jobs that perform reverting tasks do not fail. Instead, they are retried with exponential backoff if an error is encountered while reverting. As a result, transient errors do not impact jobs that are reverting. #69300
  • CockroachDB now supports exporting operation traces to OpenTelemetry-compatible tools using the OTLP protocol through the trace.opentelemetry.collector cluster setting. #65599
  • CockroachDB now supports exporting traces to a Jaeger agent through the new trace.jaeger.agent cluster setting. Exporting to Jaeger was previously possible by configuring the Jaeger agent to accept Zipkin traces and using the trace.zipkin.collector cluster setting; this configuration is no longer required. #65599
  • Support for exporting to Datadog and Lightstep through other interfaces has been retired; these tools can use OpenTelemetry data. The cluster settings trace.lightstep.token, trace.datadog.agent, and trace.datadog.project have been deprecated; they no longer have any effect. #65599
  • Tracing transaction commits now includes details about replication. #72738

Enterprise edition changes

  • Updated retryable error warning message to begin with "WARNING". #70226
  • Temporary tables are now restored to their original database instead of to defaultdb during a full cluster restore. Furthermore, defaultdb and postgres are dropped before a full cluster restore and will only be restored if they are present in the backup being restored. #71890
  • Changefeeds now support GCP Pub/Sub as a sink. #72056

SQL language changes

  • Added new job control statements allowing an operator to manipulate all jobs of a specific type: <Command> ALL <JobType> JOBS. This is supported in CHANGEFEED, BACKUP, IMPORT, and RESTORE jobs. For example: PAUSE ALL CHANGEFEED JOBS. #69314
  • EXPLAIN ANALYZE now contains more information about the MVCC behavior of operators that scan data from disk. #64503
  • Added support for SQL arrays containing JSON for in-memory processing. This does not add support for storing SQL arrays of JSON in tables. #70041
  • Placeholder values can now be used as the right-hand operand of the JSONFetchVal (->) and JSONFetchText (->>) operators without ambiguity. This argument will be given the text type and the "object field lookup" variant of the operator will be used. #70066
  • Fixed createdb and settings columns for pg_catalog tables: pg_user, pg_roles, and pg_authid. #69609
  • The information_schema._pg_truetypid, information_schema._pg_truetypmod, and information_schema._pg_char_max_length built-in functions are now supported to improve compatibility with PostgreSQL. #69913
  • The pg_my_temp_schema built-in function now properly returns the OID of the active session's temporary schema, if one exists. #69909
  • The pg_is_other_temp_schema built-in function is now supported, which returns whether the given OID is the OID of another session's temporary schema. #69909
  • The information_schema._pg_index_position built-in function is now supported, which improves compatibility with PostgreSQL. #69911
  • Extended index scan hints to allow zigzag joins to be forced. #67737
  • pg_authid.rolesuper, pg_roles.rolesuper, and pg_user.usesuper are now true for users/roles that have admin role. #69981
  • Added a warning that sequences are slower than using UUID. #68964
  • SQL queries with ORDER BY x LIMIT k clauses may now be transformed to use TopK sort in the query plan if the limit is a constant. Although this affects the output of EXPLAIN, using TopK in the query plan does not necessarily mean that it is used during execution. #68140
  • The has_tablespace_privilege, has_server_privilege, and has_foreign_data_wrapper_privilege built-in functions now return NULL instead of true when provided with a non-existent OID reference. This matches the behavior of newer PostgreSQL versions. #69939
  • The pg_has_role built-in function is now supported, which returns whether a given user has privileges for a specified role or not. #69939
  • Added the json_populate_record, jsonb_populate_record, json_populate_recordset, and jsonb_populate_recordset functions, which transform JSON into row tuples based on the labels in a record type. #70115
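
A hedged sketch of the PostgreSQL-style semantics (the table name kv is hypothetical): keys in the JSON object are matched to the record type's column labels by name, and missing keys become NULL.

```sql
CREATE TABLE kv (k INT, v STRING);
-- Produces one row with k = 1 and v = 'one'; a key absent from the
-- JSON object would yield NULL for that column.
SELECT * FROM jsonb_populate_record(NULL::kv, '{"k": 1, "v": "one"}');
```
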
  • The enable_drop_enum_value session variable has been removed, along with the corresponding cluster setting. The functionality of being able to drop enum values is now enabled automatically. Queries that refer to the session/cluster setting will still work but will have no effect. #70369
  • The array built-in functions (array_agg, array_cat, array_position, etc.) now operate on record types. #70332
  • When an invalid cast to OID is made, a pgerror now returns with code 22P02. This previously threw an assertion error. #70454
  • Added the new_db_name option to the RESTORE DATABASE statement, allowing the user to rename the database they intend to restore. #70222
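
A hedged sketch of the option (the backup URI and database names are placeholders):

```sql
-- Restore the backed-up database "bank" under the new name "bank_copy":
RESTORE DATABASE bank FROM 'gs://acme-backups/bank'
    WITH new_db_name = 'bank_copy';
```
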
  • Fixed error messages for sequence built-in functions. For example, SELECT nextval('@#%@!324234') now correctly returns relation "@#%@!324234" does not exist (if the relation doesn't exist) instead of a syntax error, and SELECT currval('') returns currval(): invalid table name. #70590
  • It is now possible to cast JSON booleans to the BOOL type, and to cast JSON numerics with fractions to rounded INT types. Error messages are now more clear when a cast from a JSON value to another type fails. #70522
  • Added a new SQL built-in function unordered_unique_rowid, which generates a globally unique 64-bit integer that does not have ordering. #70338
  • Added a new serial_normalization case unordered_rowid, which generates a globally unique 64-bit integer that does not have ordering. #70338
  • A hint is now provided when using a SERIAL4 type that gets upgraded to a SERIAL8 due to the serial_normalization session variable requiring an INT8 to succeed. #70656
  • Improved the error message to identify the column and data type when users try to select a named field from an anonymous record that has no labels. #70726
  • Implemented pg_statistic_ext on pg_catalog. #70591
  • Implemented pg_shadow at pg_catalog. #68255
  • Disallowed cross-database references for sequences by default. This can be enabled with the cluster setting sql.cross_db_sequence_references.enabled. #70581
  • Added the ability to comment on SQL table constraints using PostgreSQL's COMMENT ON CONSTRAINT syntax. #69783
  • Added a WITH COMMENT clause to the SHOW CONSTRAINT statement that causes constraint comments to be displayed. #69783
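
A minimal sketch of both features together (the table and constraint names are hypothetical):

```sql
CREATE TABLE accounts (
    id INT PRIMARY KEY,
    balance INT,
    CONSTRAINT positive_balance CHECK (balance >= 0)
);
COMMENT ON CONSTRAINT positive_balance ON accounts IS 'Balances may never go negative.';
-- The comment appears in an extra column of the constraint listing:
SHOW CONSTRAINTS FROM accounts WITH COMMENT;
```
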
  • Added empty stubs for tables and columns. Tables: pg_statistic, pg_statistic_ext_data, pg_stats, pg_stats_ext. Columns: pg_attribute.attmissingval. #70865
  • Previously, the behavior of casting an INT to CHAR was similar to BPCHAR where only the first digit of the integer was returned. Now casting INT to CHAR will be interpreted as an ASCII byte, which aligns the overall behavior more with PostgreSQL. #70942
  • A parameter of type CHAR can now be used as a parameter in a prepared statement. #70942
  • The information_schema._pg_numeric_precision, information_schema._pg_numeric_precision_radix, and information_schema._pg_numeric_scale built-in functions are now supported, which improves compatibility with PostgreSQL. #70881
  • If the time zone is set in a GMT offset, for example +7 or -11, the timezone will be formatted as <+07>-07 and <-11>+11 respectively instead of +7, -11. This most notably shows up when doing SHOW TIME ZONE. #70716
  • NULLS FIRST and NULLS LAST specifiers are now supported for ORDER BY. #71083
  • Added SHOW CREATE ALL SCHEMAS to allow the user to retrieve CREATE statements to recreate the schemas of the current database. A flat log of the CREATE statements for schemas is returned. #71138
  • The session variable inject_retry_errors_enabled has been added. When this is true, any statement that is a not a SET statement will return a transaction retry error if it is run inside of an explicit transaction. If the client retries the transaction using the special cockroach_restart SAVEPOINT, then after the third error the transaction will proceed as normal. Otherwise, the errors will continue until inject_retry_errors_enabled is set to false. The purpose of this setting is to allow users to test their transaction retry logic. #71357
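
A hedged sketch of how an application's retry loop might be exercised with this variable:

```sql
SET inject_retry_errors_enabled = 'true';
BEGIN;
-- Any non-SET statement here now fails with a 40001 transaction retry
-- error, which the client's retry logic is expected to handle.
SELECT 1;
ROLLBACK;
SET inject_retry_errors_enabled = 'false';
```
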
  • Arrays of ENUM data types can now be compared. #71427
  • NULL values can now be ordered NULLS LAST by default by setting the null_ordered_last session variable to true. #71429
  • Previously, comparing against bytea[] without a cast (e.g., SELECT * FROM t WHERE byteaarrcol = '{}') would result in an ambiguous error. This has now been resolved. #71501
  • Previously, placeholders in an ARRAY (e.g., SELECT ARRAY[$1]::int[]) would result in an ambiguous error. This has now been fixed. #71432
  • EXPLAIN output now displays the limit hint when it is nonzero as part of the estimated row count field. #71299
  • Implicit casts performed during INSERT statements now more closely follow PostgreSQL's behavior. Several minor bugs related to these types of casts have been fixed. #70722
  • Newly created tables now have <table_name>_pkey by default as their index/constraint name. #70604
  • A newly created FOREIGN KEY now follows PostgreSQL's constraint naming convention: <table>_<cols>_fkey. Previously, this was fk_<cols>_ref_<target>. #70658
  • CURRENT_USER and SESSION_USER can now be used as the role identifier in ALTER ROLE statements. #71498
  • Array built-in functions can now be used with arrays of ENUM. #71482
  • Introduced an implicitly defined type for every table, which resolves to a TUPLE type that contains all of the columns in the table. #70100
  • The WITH RECURSIVE variant that uses UNION (as opposed to UNION ALL) is now supported. #71685
  • Infinite decimal values can now be encoded when sending data to/from the client. The encoding matches the PostgreSQL encoding. #71772
  • Previously, certain ENUM built-in functions or operators required an explicit ENUM cast. This has been reduced in some cases. #71653
  • Removed the cluster setting sql.defaults.interleaved_tables.enabled as interleaved support is now fully removed. #71537
  • T_unknown ParameterTypeOIDs in the PostgreSQL frontend/backend protocol are now correctly handled. #71971
  • String literals can now be parsed as tuples, either in a cast expression, or in other contexts like function arguments. #71916
  • Added the function crdb_internal.reset_index_usage_stats() to clear index usage stats. This can be invoked from the SQL shell. #71896
  • Custom session options can now be used, i.e., any session variable that has . in the name. #71915
  • Added logic to process an EXPORT PARQUET statement. #71868
  • Added ability to EXPORT PARQUET for relations with FLOAT, INT, and STRING column types. #71868
  • Removed support for IMPORT TABLE ... CREATE USING and IMPORT TABLE ... <non-bundle-format> DATA, where <non-bundle-format> is CSV, Delimited, PGCOPY, or Avro. These formats do not define the table schema in the same file as the data. As a workaround, use CREATE TABLE with the same schema that was previously being passed into the IMPORT statement, followed by an IMPORT INTO the newly created table. #71058
  • Previously, running COMMENT ON CONSTRAINT on a table in a schema would succeed but the comment would not actually be created. Now the comment is successfully created. #71985
  • INTERLEAVE IN PARENT is permanently removed from CockroachDB. #70618
  • EXPLAIN ANALYZE now shows maximum allocated memory and maximum SQL temp disk usage for a statement. #72113
  • Added SHOW CREATE ALL TYPES to allow the user to retrieve the statements to recreate user-defined types of the current database. It returns a flat log of the CREATE statements for types. #71326
  • It is now possible to swap names (for tables, etc.) in the same transaction. For example:

    CREATE TABLE foo();
    BEGIN;
    ALTER TABLE foo RENAME TO bar;
    CREATE TABLE foo();
    COMMIT;
    

    Previously, the user would receive a "relation ... already exists" error. #70334

  • To align with PostgreSQL, casting an OID type with a value of 0 to a regtype, regproc, regclass, or regnamespace now will convert the value to the string -. The reverse behavior is implemented too, so a - will become 0 if casted to a reg OID type. #71873
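    The round-trip described above can be seen directly:

```sql
SELECT 0::OID::REGCLASS;    -- renders as '-'
SELECT '-'::REGCLASS::OID;  -- converts back to 0
```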

  • Implemented the date_part built-in function for better compatibility with PostgreSQL. #72502

  • PRIMARY KEYs have been renamed to conform to PostgreSQL (e.g., @tbl_col1_col2_pkey) in this release. To protect certain use cases of backward compatibility, we also allow @primary index hints to alias to the PRIMARY KEY, but only if no other index is named primary. #72534

  • Some filesystem-level properties are now exposed in crdb_internal.kv_store_status. Note that the particular fields and layout are not stabilized yet. #72435

  • Introduced a built-in function crdb_internal.init_stream and a cluster setting stream_replication.job_liveness_timeout. #72330

  • A notice is now issued when creating a foreign key referencing a column of a different width. #72545

  • Newly created databases will now have the CONNECT privilege granted by default to the PUBLIC role. #72595

  • SQL Stats metrics with *_internal suffix in their labels are now removed. #72667

  • system.table_statistics has an additional field, avgSize, which stores the average size in bytes of the columns identified by columnIDs. The new field is visible with the command SHOW STATISTICS FOR TABLE, as with other table statistics. This field is not yet used by the optimizer as part of cost modeling. #72365

  • Added the modifier IF NOT EXISTS to ALTER TABLE ... ADD CONSTRAINT IF NOT EXISTS. #71257

  • Fixed gateway_region built-in for --multitenant demo clusters. #72734

  • Prior to this change it was possible to alter a column's type in a way that was not compatible with the DEFAULT or ON UPDATE clause. This would cause parsing errors within tables. Now the DEFAULT or ON UPDATE clause is checked. #71423

  • Added CREATE SEQUENCE AS <typename> option. #57339

  • Introduced new SQL syntax ALTER RANGE RELOCATE to move a lease or replica between stores. This is helpful in an emergency situation to relocate data in the cluster. #72305
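    A hedged sketch of the syntax (the range and store IDs below are placeholders; real IDs can be found via SHOW RANGES):

```sql
-- Move the lease for range 42 to the node holding store 3:
ALTER RANGE 42 RELOCATE LEASE TO 3;
-- Move a voting replica of range 42 from store 2 to store 5:
ALTER RANGE 42 RELOCATE VOTERS FROM 2 TO 5;
```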

  • EXPORT PARQUET can now export relations with NULL values to Parquet files. #72530

  • Previously, ALTER TABLE ... RENAME TO ... allowed moving a table from one database to another when the table was renamed from one database's public schema to another's. This is now disallowed. #72000

  • ALTER DATABASE CONVERT TO SCHEMA is now disabled in v22.1 and later. #72000

  • It is now possible to specify a different path for incremental backups. #72713

  • If the WITH GRANT OPTION flag is present when granting privileges to a user, then that user is able to grant those same privileges to subsequent users; otherwise, they cannot. If the GRANT OPTION FOR flag is present when revoking privileges from a user, then only the ability to grant those privileges is revoked from that user, not the privileges themselves (otherwise both the privileges and the ability to grant those privileges are revoked). This behavior is consistent with PostgreSQL. #72123
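    The two flags can be sketched as follows (the user and table names are hypothetical):

```sql
-- analyst receives SELECT and may grant SELECT to others:
GRANT SELECT ON TABLE accounts TO analyst WITH GRANT OPTION;
-- Revoke only the ability to re-grant; analyst keeps SELECT itself:
REVOKE GRANT OPTION FOR SELECT ON TABLE accounts FROM analyst;
-- Without GRANT OPTION FOR, both the privilege and the grant option are revoked:
REVOKE SELECT ON TABLE accounts FROM analyst;
```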

  • Disallowed ST_MakePolygon making empty polygons from empty LINESTRING. This is not allowed in PostGIS. #73489

  • EXPORT PARQUET now preserves column names and nullability. #73382

  • Previously, the output from SHOW CREATE VIEW returned on a single line. The format has now been improved to be more readable. #73642

  • The output of the EXPLAIN SQL statement has changed: below the plan, index recommendations are now included for the SQL statement in question, if there are any. These are indexes the user could add, or existing indexes they could replace, to make the given query faster. #73302

  • The VOID type is now recognized. #73488

  • In the experimental RELOCATE syntax forms, the positional keyword that indicates that the statement should move non-voter replicas is now spelled NONVOTERS, instead of NON_VOTERS. #73803

  • The inline help for the ALTER statements now mentions the RELOCATE syntax. #73803

  • The experimental ALTER RANGE...RELOCATE syntax now accepts arbitrary scalar expressions as the source and target store IDs. #73807

  • The output of EXPLAIN ALTER RANGE ... RELOCATE now includes the source and target store IDs. #73807

  • The experimental ALTER RANGE...RELOCATE syntax now accepts arbitrary scalar expressions as the range ID when the FOR clause is not used. #73807

  • The output of EXPLAIN ALTER RANGE ... RELOCATE now includes which replicas are subject to the relocation. #73807

  • ALTER DEFAULT PRIVILEGES IN SCHEMA <schemas...> is now supported. As well as specifying default privileges globally (within a database), users can now specify default privileges in a specific schema. When creating an object that has default privileges specified at the database (global) and at the schema level, the union of the default privileges is taken. #73576
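    A minimal sketch (the schema and role names are placeholders):

```sql
-- Tables later created in the "reporting" schema are readable by "readers":
ALTER DEFAULT PRIVILEGES IN SCHEMA reporting
    GRANT SELECT ON TABLES TO readers;
```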

  • Index recommendations can be omitted from the EXPLAIN plan if the index_recommendations_enabled session variable is set to false. #73346

  • The output of EXPLAIN ALTER INDEX/TABLE ... RELOCATE/SPLIT now includes the target table/index name and, for the SPLIT AT variants, the expiry timestamp. #73832

  • Added the digest and hmac built-in functions. They match the PostgreSQL (pgcrypto) implementation. Supported hash algorithms are md5, sha1, sha224, sha256, sha384, and sha512. #73935
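    Usage mirrors pgcrypto:

```sql
SELECT digest('abc', 'sha256');                  -- SHA-256 of 'abc', as bytes
SELECT hmac('message', 'secret-key', 'sha256');  -- keyed-hash MAC
```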

  • Users can now RESTORE (locality-aware) incremental backups created with the incremental_storage parameter. #73744

  • Improved cost model for TopK expressions if the input to TopK can be partially ordered by its sort columns. #73459

  • Added the incremental_storage option to SHOW BACKUP so users can now observe incremental backups. #73357

  • Previously, escape character processing (\) was missing from constraint span generation, which resulted in incorrect results when doing escaped lookups. This is now fixed. #73978

  • The shard column of a hash-sharded index is now a virtual column and not a stored computed column. #74138

  • Clients waiting for a schema change job will now receive an error if the job they are waiting for is paused. #74157

  • The GRANT privilege is deprecated in v22.1 and will be removed in v22.2 in favor of grant options. To promote backward compatibility for users with code still using GRANT, we will give grant options on every privilege a user has when they are granted GRANT and remove all their grant options when GRANT is revoked, in addition to the existing grant option behavior. #74210

  • The system.protected_timestamp_records table now has an additional target column that stores an encoded protocol buffer representing the target a record protects: the entire cluster, tenants, or schema objects (databases/tables). #74281

  • The KV tracing of SQL queries (that could be obtained with \set auto_trace=on,kv) has been adjusted slightly. Previously, CockroachDB would fully decode the key in each key-value pair, even if some part of the key would not be decoded while tracing was enabled. Now, CockroachDB does not perform any extra decoding, and parts of the key that are not decoded are replaced with ?. #74236

  • CockroachDB now supports default_with_oids, which only accepts a false value. #74499

  • EXPORT PARQUET can now export columns of type array. #73735

  • Statements are now formatted prior to being sent to the DB Console. This is done using a new built-in function that formats statements. #73853

Operational changes

  • cockroach debug zip now includes the raw system.settings table. This table makes it possible to determine whether a cluster setting has been explicitly set. #70498
  • The meaning of the sql.distsql.max_running_flows cluster setting has been extended so that when the value is negative, it is multiplied by the number of CPUs on the node to get the maximum number of concurrent remote flows on the node. The default value is -128, meaning that a 4-CPU machine will have up to 512 concurrent remote DistSQL flows, while an 8-CPU machine will have up to 1024. The previous default was 500. #71787
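
The negative-value semantics can be illustrated with the default:

```sql
-- -128 means "128 per CPU": a 4-vCPU node allows 4 * 128 = 512 concurrent
-- remote flows. A positive value still acts as an absolute cap instead:
SET CLUSTER SETTING sql.distsql.max_running_flows = -128;
```
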
  • Some existing settings related to BACKUP execution are now listed by SHOW CLUSTER SETTING. #71962
  • The cluster settings affecting the admission control system enablement are now set to defaults that enable admission control. #68535
  • The default value of the kv.rangefeed.catchup_scan_iterator_optimization.enabled cluster setting is now true. #73473
  • Added a metric addsstable.aswrites that tracks the number of AddSSTable requests ingested as regular write batches. #73910
  • Added a metric replica.uninitialized that tracks the number of Uninitialized replicas in a store. #73975

Command-line changes

  • cockroach demo will now begin processing scheduled jobs after 15 seconds, instead of the 2–5 minutes in a production environment. #70242
  • The 25 max QPS rate limit for workloads on cockroach demo can now be configured with a --workload-max-qps flag. #70642
  • The SQL shell now supports the \du USER command to show information for the current user. #70609
  • Added support for a CLI shortcut that displays constraint information similar to PostgreSQL. The shortcut is \dd TABLE. #69783
  • Added a --read-only flag to cockroach sql which will set the default_session_read_only variable upon connecting. This is effectively equivalent to the PGTARGETSESSIONATTRS=read-only option added to libpq and psql in PostgreSQL 13. #71003
  • Previously, cockroach debug merge-logs output was prefixed by a short machine name by default, which made it difficult to identify the originating node when looking at the merged results. CockroachDB now supports "${fpath}" in the --prefix argument. #71254
  • Added an option in the cockroach demo movr command to populate the user_promo_code table. #61531
  • Allowed demoing of CockroachDB's multi-tenant features via the --multitenant flag to cockroach demo. #71026
  • cockroach demo now runs by default in multi-tenant mode. #71988
  • Added buffering to log sinks. This can be configured with the new "buffering" field on any log sink provided via the --log or --log-config-file flags. #70330
  • The server identifiers (cluster ID, node ID, tenant ID, instance ID) are no longer duplicated at the start of every new log file (during log file rotations). They are now only logged when known during server start-up. (The copy of the identifiers is still included in per-event envelopes for the various json output logging formats.) #73306
  • The cockroach node drain command is now able to drain a node by ID, specified on the command line, from another node in the cluster. It now also supports the flag --self for symmetry with node decommission. Using node drain without either --self or a node ID is now deprecated. #73991
  • The deprecated command cockroach quit now accepts the flags --self and the ability to specify a node ID like cockroach node drain. Even though the command is deprecated, this change was performed to ensure symmetry in the documentation until the command is effectively removed. #73991
  • If the expected certificates are not found in the certs directory, or no certs directory or certificate path is specified, CockroachDB now falls back to verifying the server CA using Go's TLS code, which looks up certificates in the OS trust store. If no matching certificate is found, an x509 error is raised indicating that the certificate is signed by an unknown authority. #73776

API endpoint changes

  • CREATE CHANGEFEED on a cloud storage sink now allows a new query parameter to specify how the file paths are partitioned. For example, partition_format="daily" represents the default behavior of splitting into dates (2021-05-01/). While partition_format="hourly" will further partition them by hour (2021-05-01/05/). partition_format="flat" will not partition at all. #70207
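
A hedged sketch of the parameter on a cloud storage sink URI (the bucket and table names are placeholders):

```sql
-- Emit files partitioned by date and hour, e.g. 2021-05-01/05/:
CREATE CHANGEFEED FOR TABLE orders
    INTO 'gs://acme-bucket/orders?partition_format=hourly';
```
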
  • OpenID Connect (OIDC) support for DB Console is no longer marked as experimental. #71183
  • Added new API endpoint for getting a table's index statistics. #72660
  • Added a new batch RPC, and batch method counters are now visible in DB Console and _status/vars. #72767

DB Console changes

  • Fixed drag to zoom on custom charts. #70229
  • Fixed drag to time range for a specific window issue. #70326
  • Added pre-sizing calculation for Metrics page graphs. #70838
  • The /debug/pprof/goroutineui/ page has a new and improved look. #71690
  • The all nodes report now notifies a user if they need more privileges to view the page's information. #71960
  • The Advanced Debug page now contains an additional link under the Metrics header called Rules. This endpoint exposes Prometheus-compatible alerting and aggregation rules for CockroachDB metrics. #72677
  • Added an Index Stats table and a button to clear index usage stats on the Table Details page for each table. #72948
  • Added the ability to remove the dashed underline from sorted table headers for headers with no tooltips. Removed the dashed underline from the Index Stats table headers. #73455
  • Added a new Index Details page, which exists for each index on a table. #73178
  • Updated the Reset Index Stats button text to be more clear. #73700
  • The time pickers on the Statements and Transactions pages now have the same style and functionality as the time picker on the Metrics page. #73608
  • The clear SQL stats links on the Statements and Transactions pages were relabeled reset SQL stats, for consistency with the language in the SQL shell. #73922
  • Added the ability to create conditional statement diagnostics by adding two new fields: 1) minimum execution latency, which specifies the limit for when a statement should be tracked, and 2) expiry time, which specifies when a diagnostics request should expire. #74112
  • The Terminate Session and Terminate Query buttons are again available to be enabled on the Sessions Page. #74408
  • Added formatting to statements on the Statements, Transactions, and Index Details pages. #73853
  • Updated colors for Succeeded badges and the progress bar on the Jobs page. #73924

Bug fixes

  • Fixed a bug where CURRENT_USER and SESSION_USER were parsed incorrectly. #70439
  • Fixed a bug where index/partition subzones may not have inherited the global_reads field correctly in some cases from their parent. #69983
  • Previously, DROP DATABASE CASCADE could fail while resolving a schema in certain scenarios with the following error: ERROR: error resolving referenced table ID <ID>: descriptor is being dropped. This is now fixed. #69789
  • Backfills will now always respect the most up-to-date value of changefeed.backfill.concurrent_scan_requests even during an ongoing backfill. #69933
  • The cockroach debug merge-logs command no longer returns an error when the log decoder attempts to parse older logs. #68282
  • The PostgreSQL-compatible "Access Privilege Inquiry Functions" (e.g., has_foo_privilege) were incorrectly returning whether all comma-separated privileges were held, instead of whether any of the provided privileges were held. This incompatibility has been resolved. #69939
  • Queries involving arrays of tuples will no longer spuriously fail due to an encoding error. #63996
  • cockroach sql -e (and demo -e) can now process all client-side commands, not just \echo, \set, and a few others. #70671
  • cockroach sql --set=auto_trace=on -e 'select ...' (and the similar demo command) now produces an execution trace properly. #70671
  • Previously, bulk INSERT/UPDATE in implicit transactions retried indefinitely if the statement exceeded the default leasing deadline of 5 minutes. Now, if the leasing deadline is exceeded this will be raised back up to the SQL layer to refresh the deadline before trying to commit. #69936
  • IMPORT now respects the spatial index storage options specified in PGDUMP files on indexes it creates. #66903
  • Fixed IMPORT in tpcc workload. #71013
  • Some query patterns that previously could cause a single node to become a hotspot have been fixed so that the load is evenly distributed across the whole cluster. #70648
  • Fixed a bug where the 2-parameter setval built-in function caused the sequence to incorrectly increment one extra time. For a sequence to increment, use setval(seq, val, true). #71643
  • Previously, the effects of the setval and nextval built-in functions would be rolled back if the surrounding transaction was rolled back. This was not correct, as setval is not supposed to respect transaction boundaries. This is now fixed. #71643
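  The third argument's effect, following PostgreSQL's setval semantics:

```sql
CREATE SEQUENCE s;
SELECT setval('s', 10, true);   -- the next nextval('s') returns 11
SELECT setval('s', 10, false);  -- the next nextval('s') returns 10
```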
  • In v21.2, jobs that fail to revert are retried unconditionally, but with exponential backoff. In the mixed-version state there is no exponential backoff, so it would not be good to retry unconditionally. The behavior has been changed such that before v21.2 is finalized, these jobs will enter the revert-failed state as in v21.1. #71780
  • Fixed a bug that prevented rollback of ALTER PRIMARY KEY when the old primary key was interleaved. #71780
  • Previously, adding new values to a user-defined ENUM type would cause a prepared statement using that type to not work. This now works as expected. #71632
  • Previously, when records and ENUM types containing escape sequences were shown in the CLI, they would be incorrectly double-escaped. This is now fixed. #71916
  • SCHEMA CHANGE and SCHEMA CHANGE GC jobs following a DROP ... CASCADE now have sensible names, instead of '' and 'GC for ', respectively. #70630
  • Fixed a race condition that could have caused core changefeeds whose targeted table became invalid to not explain why when shutting down. #72490
  • cockroach demo can now be launched with --global and --multitenant=true options. #72750
  • Y-axis labels on custom charts no longer display undefined. #73055
  • Raft snapshots now detect timeouts earlier and avoid spamming the logs with context deadline exceeded errors. #73279
  • Error messages produced during import are now truncated. Previously, IMPORT could potentially generate large error messages that could not be persisted to the jobs table, resulting in a failed import never entering the failed state and instead retrying repeatedly. #73303
  • Servers no longer crash due to panics in HTTP handlers. #72395
  • crdb_internal.table_indexes now shows if an index is sharded or not. #73380
  • Previously, creating indexes with special characters would fail to identify indexes with the same matching name, which caused an internal error. This is now fixed. #73367
  • CockroachDB now prohibits mixed dimension LINESTRING in ST_MakePolygon. #73489
  • Index CREATE statements in the pg_indexes table now show a hash-sharding bucket count if an index is hash-sharded. Column direction has been removed from GIN indexes in pg_indexes. #73491
  • Uninitialized replicas that are abandoned after an unsuccessful snapshot no longer perform periodic background work, so they no longer have a non-negligible cost. #73362
  • Fixed a bug that caused incorrect evaluation of placeholder values in EXECUTE statements. The bug presented when the PREPARE statement cast a placeholder value, e.g., PREPARE s AS SELECT $1::INT2. If the assigned value for $1 exceeded the maximum width value of the cast target type, the result value of the cast could be incorrect. This bug had been present since v19.1 or earlier. #73762
  • Previously, during RESTORE, a system.namespace entry was not inserted for synthetic public schemas. This is now fixed. #73875
  • Fixed a bug that caused internal errors when altering the primary key of a table. The bug was only present if the table had a partial index with a predicate that referenced a virtual computed column. This bug was present since virtual computed columns were added in v21.1.0. #74102
  • Creating a foreign key that references a hash-sharded key no longer fails. #74140
  • Raft snapshots no longer risk starvation under very high concurrency. Before this fix, many Raft snapshots could be starved and prevented from succeeding due to timeouts, accompanied by errors like error rate limiting bulk io write: context deadline exceeded. #73288
  • Portals in the extended protocol of the PostgreSQL wire protocol can now be used from implicit transactions and can be executed multiple times if there is a row-count limit applied to the portal. Previously, trying to execute the same portal twice would result in an unknown portal error. #74242
  • Fixed a bug that incorrectly allowed creating computed column expressions, expression indexes, and partial index predicate expressions with mutable casts between STRING types and the types REGCLASS, REGNAMESPACE, REGPROC, REGPROCEDURE, REGROLE, and REGTYPE. Creating such computed columns, expression indexes, and partial indexes is now prohibited. Any tables with these types of expressions may be corrupt and should be dropped and recreated. #74286
  • Fixed a bug that, in very rare cases, could result in a node terminating with a fatal error: unable to remove placeholder: corrupted replicasByKey map. To avoid potential data corruption, users affected by this crash should not restart the node, but instead decommission it in absentia and have it rejoin the cluster under a new nodeID. #73734
  • Previously, when a foreign key was included in an ADD COLUMN statement that added multiple columns at once, the foreign key was applied to the first added column (or an error was generated based on the wrong column). This is now fixed. #74411
  • Previously, a double-nested ENUM in a DistSQL query would not be hydrated on remote nodes, resulting in a panic. This is now fixed. #74189
  • Fixed a panic when attempting to access the hottest ranges (e.g., via the /_status/hotranges endpoint) before initial statistics had been gathered. #74507
  • Previously, setting sslmode=require would check for local certificates, so omitting a certs path would cause an error even though require does not verify server certificates. This has been fixed by bypassing certificate path checking for sslmode=require. This bug had been present since v21.2.0. #73776
  • Previously, queries with window functions returning INT, FLOAT, BYTES, STRING, UUID, or JSON values could return incorrect results or internal errors when disk spilling occurred. The bug was introduced in v21.2.0 and is now fixed. #74491
  • Previously, MIN/MAX could be incorrectly calculated when used as window functions in some cases after spilling to disk. The bug was introduced in v21.2.0 and is now fixed. #74491
  • Previously, IMPORT TABLE ... PGDUMP with a COPY FROM statement in the dump file that has fewer target columns than the CREATE TABLE schema definition would result in a nil pointer exception. This is now fixed. #74601
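
The placeholder-cast fix above (#73762) can be illustrated with a small SQL session (the statement name s is hypothetical):

```sql
-- A prepared statement that casts a placeholder to a narrower type.
PREPARE s AS SELECT $1::INT2;

-- Before the fix, binding a value wider than INT2 (e.g., 40000)
-- could produce an incorrect result value. With the fix, the cast
-- is evaluated correctly, erroring when the value is out of range.
EXECUTE s(40000);
```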

Performance improvements

  • Mutation statements with a RETURNING clause that are not inside an explicit transaction are faster in some cases. #70200
  • Added collection of basic table statistics during an import, to help the optimizer until full statistics collection completes. #67106
  • The accuracy of histogram calculations for BYTES types has been improved. As a result, the optimizer should generate more efficient query plans in some cases. #68740
  • A SELECT query with both MIN(LeadingIndexColumn) and MAX(LeadingIndexColumn) can now be performed with two LIMITED SCANs instead of a single FULL SCAN. #70496
  • A SELECT query from a single table with more than one MIN or MAX scalar aggregate expression and a WHERE clause can now be performed with LIMITED SCANs, one per aggregate expression, instead of a single FULL SCAN. Note: for a query block to be eligible for this transformation, no other aggregate function, such as SUM, may be present. This optimization occurs when each MIN or MAX expression involves a leading index column, so that a sort is not required for the limit operation, and the resulting query plan appears cheapest to the optimizer. #70854
  • Queries with many ORed WHERE clause predicates previously took an excessive amount of time for the optimizer to process, especially if the predicates involved index columns and there were more than 1000 predicates (which could happen with application-generated SQL). The processing of SQL with many ORed predicates has been optimized so that a query plan can be generated in seconds instead of minutes or hours. #71247
  • Creating many schema changes in parallel now runs faster due to improved concurrency when notifying the jobs subsystem. #71909
  • The sqlinstance subsystem no longer reads from the backing SQL table for every request for SQL instance details. This improves performance for multi-region deployments of the multi-tenant architecture. #69976
  • Improved efficiency of looking up old historical descriptors. #71239
  • Improved performance of some GROUP BY queries with a LIMIT when an index ordering matches a subset of the grouping columns. In this case, the number of aggregations needed to satisfy the LIMIT can be emitted without scanning the entire input, enabling more efficient execution. #71546
  • var_pop and stddev_pop aggregate functions are now evaluated more efficiently in a distributed setting. #73712
  • Improved job performance in the face of concurrent schema changes by reducing contention. #72297
  • Incremental backups now use less memory to verify coverage of prior backups. #74393
  • CockroachDB now retrieves the SQL client's password credentials concurrently, without waiting for the client's password response during the authentication exchange. This can yield a small latency reduction for new SQL connections. #74365
  • CockroachDB now allows rangefeed streams to use a separate HTTP connection when the kv.rangefeed.use_dedicated_connection_class.enabled setting is turned on. Using a separate connection class reduces the possibility of OOMs when running rangefeeds against very large tables. The connection window size for rangefeeds can be adjusted via the COCKROACH_RANGEFEED_INITIAL_WINDOW_SIZE environment variable, which defaults to 128 KB. #74222
  • The merging of incremental backup layers during RESTORE now uses a simpler and less memory-intensive algorithm. #74394
  • The default snapshot recovery and rebalance rates, kv.snapshot_recovery.max_rate and kv.snapshot_rebalance.max_rate, were raised from 8 MB/s to 32 MB/s. Production experience has shown that the earlier values were too conservative. Users might observe higher network utilization during rebalancing/recovery in exchange for faster rebalancing/recovery (for the latter, possibly reducing the MTTF). If the extra utilization is undesirable, these settings can be manually reverted to their original value of 8 MB/s. #71814
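
The tunables mentioned in this section are ordinary cluster settings. As a sketch, reverting the snapshot rates and enabling the dedicated rangefeed connection would look like the following (values shown are the defaults discussed above):

```sql
-- Route rangefeed streams over a separate HTTP connection class.
SET CLUSTER SETTING kv.rangefeed.use_dedicated_connection_class.enabled = true;

-- Revert the snapshot rates to their previous 8 MB/s defaults if the
-- extra network utilization from the new 32 MB/s defaults is undesirable.
SET CLUSTER SETTING kv.snapshot_rebalance.max_rate = '8MiB';
SET CLUSTER SETTING kv.snapshot_recovery.max_rate = '8MiB';
```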

Build changes

  • Upgraded to Go v1.17. #69603

Miscellaneous

Docker

  • Environment variables and init scripts in docker-entrypoint-initdb.d are now supported for the start-single-node command. #70238

Contributors

This release includes 1720 merged PRs by 132 authors.

We would like to thank the following contributors from the CockroachDB community:

  • Catherine J (first-time contributor)
  • Eudald (first-time contributor)
  • Ganeshprasad Biradar
  • Josh Soref (first-time contributor)
  • Max Neverov
  • Miguel Novelo (first-time contributor)
  • Paul Lin (first-time contributor)
  • Remy Wang (first-time contributor)
  • Rupesh Harode
  • TennyZhuang (first-time contributor)
  • Tharun
  • Ulf Adams
  • Zhou Fang (first-time contributor)
  • lpessoa (first-time contributor)
  • mnovelodou (first-time contributor)
  • neeral
  • shralex (first-time contributor)
  • tukeJonny (first-time contributor)
