Cluster Settings

Warning:
CockroachDB v22.1 is no longer supported. For more details, see the Release Support Policy.

Cluster settings apply to all nodes of a CockroachDB cluster and control, for example, whether or not to share diagnostic details with Cockroach Labs as well as advanced options for debugging and cluster tuning.

Cluster settings can be updated at any time after a cluster has been started, but only by a member of the admin role, to which the root user belongs by default.
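For example, a minimal sketch of granting that role and then changing a setting, run from the cockroach sql shell (the user name maxroach is hypothetical):

    -- Only members of the admin role can modify cluster settings.
    > GRANT admin TO maxroach;

    -- Any admin can then change a setting, such as diagnostics sharing:
    > SET CLUSTER SETTING diagnostics.reporting.enabled = false;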

Note:

In contrast to cluster-wide settings, node-level settings apply to a single node. They are defined by flags passed to the cockroach start command when starting a node and cannot be changed without stopping and restarting the node. For more details, see Start a Node.

Settings

Warning:

These cluster settings have a broad impact on CockroachDB internals and affect all applications, workloads, and users running on a CockroachDB cluster. For some settings, a session setting could be a more appropriate scope.
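For example, sql.defaults.distsql changes the cluster-wide default for every new session, while the corresponding session variable affects only the current session. A minimal sketch of the two scopes:

    -- Cluster-wide default, applied to new sessions on every node:
    > SET CLUSTER SETTING sql.defaults.distsql = 'auto';

    -- Session-scoped equivalent, applied only to the current session:
    > SET distsql = 'auto';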

Setting | Type | Default | Description
admission.epoch_lifo.enabled | boolean | false | when true, epoch-LIFO behavior is enabled when there is significant delay in admission
admission.epoch_lifo.epoch_closing_delta_duration | duration | 5ms | the delta duration before closing an epoch, for epoch-LIFO admission control ordering
admission.epoch_lifo.epoch_duration | duration | 100ms | the duration of an epoch, for epoch-LIFO admission control ordering
admission.epoch_lifo.queue_delay_threshold_to_switch_to_lifo | duration | 105ms | the queue delay encountered by a (tenant,priority) for switching to epoch-LIFO ordering
admission.kv.enabled | boolean | true | when true, work performed by the KV layer is subject to admission control
admission.kv.stores.tenant_weights.enabled | boolean | false | when true, tenant weights are enabled for KV-stores admission control
admission.kv.tenant_weights.enabled | boolean | false | when true, tenant weights are enabled for KV admission control
admission.sql_kv_response.enabled | boolean | true | when true, work performed by the SQL layer when receiving a KV response is subject to admission control
admission.sql_sql_response.enabled | boolean | true | when true, work performed by the SQL layer when receiving a DistSQL response is subject to admission control
bulkio.backup.deprecated_full_backup_with_subdir.enabled | boolean | false | when true, a backup command with a user specified subdirectory will create a full backup at the subdirectory if no backup already exists at that subdirectory.
bulkio.backup.file_size | byte size | 128 MiB | target size for individual data files produced during BACKUP
bulkio.backup.read_timeout | duration | 5m0s | amount of time after which a read attempt is considered timed out, which causes the backup to fail
bulkio.backup.read_with_priority_after | duration | 1m0s | amount of time since the read-as-of time above which a BACKUP should use priority when retrying reads
bulkio.stream_ingestion.minimum_flush_interval | duration | 5s | the minimum timestamp between flushes; flushes may still occur if internal buffers fill up
changefeed.node_throttle_config | string | | specifies node level throttling configuration for all changefeeds
cloudstorage.azure.concurrent_upload_buffers | integer | 1 | controls the number of concurrent buffers that will be used by the Azure client when uploading chunks. Each buffer can buffer up to cloudstorage.write_chunk.size of memory during an upload
cloudstorage.http.custom_ca | string | | custom root CA (appended to system's default CAs) for verifying certificates when interacting with HTTPS storage
cloudstorage.timeout | duration | 10m0s | the timeout for import/export storage operations
cluster.organization | string | | organization name
cluster.preserve_downgrade_option | string | | disable (automatic or manual) cluster version upgrade from the specified version until reset
diagnostics.active_query_dumps.enabled | boolean | true | experimental: enable dumping of anonymized active queries to disk when node is under memory pressure
diagnostics.forced_sql_stat_reset.interval | duration | 2h0m0s | interval after which the reported SQL Stats are reset even if not collected by telemetry reporter. It has a max value of 24H.
diagnostics.reporting.enabled | boolean | true | enable reporting diagnostic metrics to Cockroach Labs
diagnostics.reporting.interval | duration | 1h0m0s | interval at which diagnostics data should be reported
enterprise.license | string | | the encoded cluster license
external.graphite.endpoint | string | | if nonempty, push server metrics to the Graphite or Carbon server at the specified host:port
external.graphite.interval | duration | 10s | the interval at which metrics are pushed to Graphite (if enabled)
feature.backup.enabled | boolean | true | set to true to enable backups, false to disable; default is true
feature.changefeed.enabled | boolean | true | set to true to enable changefeeds, false to disable; default is true
feature.export.enabled | boolean | true | set to true to enable exports, false to disable; default is true
feature.import.enabled | boolean | true | set to true to enable imports, false to disable; default is true
feature.restore.enabled | boolean | true | set to true to enable restore, false to disable; default is true
feature.schema_change.enabled | boolean | true | set to true to enable schema changes, false to disable; default is true
feature.stats.enabled | boolean | true | set to true to enable CREATE STATISTICS/ANALYZE, false to disable; default is true
jobs.retention_time | duration | 336h0m0s | the amount of time to retain records for completed jobs before they are deleted
kv.allocator.load_based_lease_rebalancing.enabled | boolean | true | set to enable rebalancing of range leases based on load and latency
kv.allocator.load_based_rebalancing | enumeration | leases and replicas | whether to rebalance based on the distribution of QPS across stores [off = 0, leases = 1, leases and replicas = 2]
kv.allocator.load_based_rebalancing_interval | duration | 1m0s | the rough interval at which each store will check for load-based lease / replica rebalancing opportunities
kv.allocator.qps_rebalance_threshold | float | 0.25 | minimum fraction away from the mean a store's QPS (such as queries per second) can be before it is considered overfull or underfull
kv.allocator.range_rebalance_threshold | float | 0.05 | minimum fraction away from the mean a store's range count can be before it is considered overfull or underfull
kv.bulk_io_write.max_rate | byte size | 1.0 TiB | the rate limit (bytes/sec) to use for writes to disk on behalf of bulk io ops
kv.bulk_sst.max_allowed_overage | byte size | 64 MiB | if positive, allowed size in excess of target size for SSTs from export requests; export requests (i.e. BACKUP) may buffer up to the sum of kv.bulk_sst.target_size and kv.bulk_sst.max_allowed_overage in memory
kv.bulk_sst.target_size | byte size | 16 MiB | target size for SSTs emitted from export requests; export requests (i.e. BACKUP) may buffer up to the sum of kv.bulk_sst.target_size and kv.bulk_sst.max_allowed_overage in memory
kv.closed_timestamp.follower_reads_enabled | boolean | true | allow (all) replicas to serve consistent historical reads based on closed timestamp information
kv.protectedts.reconciliation.interval | duration | 5m0s | the frequency for reconciling jobs with protected timestamp records
kv.range_split.by_load_enabled | boolean | true | allow automatic splits of ranges based on where load is concentrated
kv.range_split.load_qps_threshold | integer | 2500 | the QPS over which the range becomes a candidate for load-based splitting
kv.rangefeed.enabled | boolean | false | if set, rangefeed registration is enabled
kv.replica_circuit_breaker.slow_replication_threshold | duration | 1m0s | duration after which slow proposals trip the per-Replica circuit breaker (zero duration disables breakers)
kv.replica_stats.addsst_request_size_factor | integer | 50000 | the divisor that is applied to addsstable request sizes, then recorded in a leaseholder's QPS; 0 means all requests are treated as cost 1
kv.replication_reports.interval | duration | 1m0s | the frequency for generating the replication_constraint_stats, replication_stats_report and replication_critical_localities reports (set to 0 to disable)
kv.snapshot_rebalance.max_rate | byte size | 32 MiB | the rate limit (bytes/sec) to use for rebalance and upreplication snapshots
kv.snapshot_recovery.max_rate | byte size | 32 MiB | the rate limit (bytes/sec) to use for recovery snapshots
kv.transaction.max_intents_bytes | integer | 4194304 | maximum number of bytes used to track locks in transactions
kv.transaction.max_refresh_spans_bytes | integer | 256000 | maximum number of bytes used to track refresh spans in serializable transactions
kv.transaction.reject_over_max_intents_budget.enabled | boolean | false | if set, transactions that exceed their lock tracking budget (kv.transaction.max_intents_bytes) are rejected instead of having their lock spans imprecisely compressed
schedules.backup.gc_protection.enabled | boolean | false | enable chaining of GC protection across backups run as part of a schedule; default is false
security.ocsp.mode | enumeration | off | use OCSP to check whether TLS certificates are revoked. If the OCSP server is unreachable, in strict mode all certificates will be rejected and in lax mode all certificates will be accepted. [off = 0, lax = 1, strict = 2]
security.ocsp.timeout | duration | 3s | timeout before considering the OCSP server unreachable
server.auth_log.sql_connections.enabled | boolean | false | if set, log SQL client connect and disconnect events (note: may hinder performance on loaded nodes)
server.auth_log.sql_sessions.enabled | boolean | false | if set, log SQL session login/disconnection events (note: may hinder performance on loaded nodes)
server.authentication_cache.enabled | boolean | true | enables a cache used during authentication to avoid lookups to system tables when retrieving per-user authentication-related information
server.child_metrics.enabled | boolean | false | enables the exporting of child metrics, additional prometheus time series with extra labels
server.clock.forward_jump_check_enabled | boolean | false | if enabled, forward clock jumps > max_offset/2 will cause a panic
server.clock.persist_upper_bound_interval | duration | 0s | the interval between persisting the wall time upper bound of the clock. The clock does not generate a wall time greater than the persisted timestamp and will panic if it sees a wall time greater than this value. When cockroach starts, it waits for the wall time to catch-up till this persisted timestamp. This guarantees monotonic wall time across server restarts. Not setting this or setting a value of 0 disables this feature.
server.consistency_check.max_rate | byte size | 8.0 MiB | the rate limit (bytes/sec) to use for consistency checks; used in conjunction with server.consistency_check.interval to control the frequency of consistency checks. Note that setting this too high can negatively impact performance.
server.eventlog.enabled | boolean | true | if set, logged notable events are also stored in the table system.eventlog
server.eventlog.ttl | duration | 2160h0m0s | if nonzero, entries in system.eventlog older than this duration are deleted every 10m0s. Should not be lowered below 24 hours.
server.host_based_authentication.configuration | string | | host-based authentication configuration to use during connection authentication
server.hsts.enabled | boolean | false | if true, HSTS headers will be sent along with all HTTP requests. The headers will contain a max-age setting of one year. Browsers honoring the header will always use HTTPS to access the DB Console. Ensure that TLS is correctly configured prior to enabling.
server.identity_map.configuration | string | | system-identity to database-username mappings
server.max_connections_per_gateway | integer | -1 | the maximum number of non-superuser SQL connections per gateway allowed at a given time (note: this will only limit future connection attempts and will not affect already established connections). Negative values result in unlimited number of connections. Superusers are not affected by this limit.
server.oidc_authentication.autologin | boolean | false | if true, logged-out visitors to the DB Console will be automatically redirected to the OIDC login endpoint
server.oidc_authentication.button_text | string | Login with your OIDC provider | text to show on button on DB Console login page to login with your OIDC provider (only shown if OIDC is enabled)
server.oidc_authentication.claim_json_key | string | | sets JSON key of principal to extract from payload after OIDC authentication completes (usually email or sid)
server.oidc_authentication.client_id | string | | sets OIDC client id
server.oidc_authentication.client_secret | string | | sets OIDC client secret
server.oidc_authentication.enabled | boolean | false | enables or disables OIDC login for the DB Console
server.oidc_authentication.principal_regex | string | (.+) | regular expression to apply to extracted principal (see claim_json_key setting) to translate to SQL user (golang regex format, must include 1 grouping to extract)
server.oidc_authentication.provider_url | string | | sets OIDC provider URL ({provider_url}/.well-known/openid-configuration must resolve)
server.oidc_authentication.redirect_url | string | https://localhost:8080/oidc/v1/callback | sets OIDC redirect URL via a URL string or a JSON string containing a required `redirect_urls` key with an object that maps from region keys to URL strings (URLs should point to your load balancer and must route to the path /oidc/v1/callback)
server.oidc_authentication.scopes | string | openid | sets OIDC scopes to include with authentication request (space delimited list of strings, required to start with `openid`)
server.rangelog.ttl | duration | 720h0m0s | if nonzero, range log entries older than this duration are deleted every 10m0s. Should not be lowered below 24 hours.
server.shutdown.connection_wait | duration | 0s | the maximum amount of time a server waits for all SQL connections to be closed before proceeding with a drain. (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting)
server.shutdown.drain_wait | duration | 0s | the amount of time a server waits in an unready state before proceeding with a drain (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting. --drain-wait is to specify the duration of the whole draining process, while server.shutdown.drain_wait is to set the wait time for health probes to notice that the node is not ready.)
server.shutdown.lease_transfer_wait | duration | 5s | the timeout for a single iteration of the range lease transfer phase of draining (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting)
server.shutdown.query_wait | duration | 10s | the timeout for waiting for active queries to finish during a drain (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting)
server.time_until_store_dead | duration | 5m0s | the time after which if there is no new gossiped information about a store, it is considered dead
server.user_login.cert_password_method.auto_scram_promotion.enabled | boolean | true | whether to automatically promote cert-password authentication to use SCRAM
server.user_login.min_password_length | integer | 1 | the minimum length accepted for passwords set in cleartext via SQL. Note that a value lower than 1 is ignored: passwords cannot be empty in any case.
server.user_login.password_encryption | enumeration | crdb-bcrypt | which hash method to use to encode cleartext passwords passed via ALTER/CREATE USER/ROLE WITH PASSWORD [crdb-bcrypt = 2, scram-sha-256 = 3]
server.user_login.password_hashes.default_cost.crdb_bcrypt | integer | 10 | the hashing cost to use when storing passwords supplied as cleartext by SQL clients with the hashing method crdb-bcrypt (allowed range: 4-31)
server.user_login.password_hashes.default_cost.scram_sha_256 | integer | 119680 | the hashing cost to use when storing passwords supplied as cleartext by SQL clients with the hashing method scram-sha-256 (allowed range: 4096-240000000000)
server.user_login.timeout | duration | 10s | timeout after which client authentication times out if some system range is unavailable (0 = no timeout)
server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled | boolean | false | whether to automatically re-encode stored passwords using crdb-bcrypt to scram-sha-256
server.web_session.auto_logout.timeout | duration | 168h0m0s | the duration that web sessions will survive before being periodically purged, since they were last used
server.web_session.purge.max_deletions_per_cycle | integer | 10 | the maximum number of old sessions to delete for each purge
server.web_session.purge.period | duration | 1h0m0s | the time until old sessions are deleted
server.web_session.purge.ttl | duration | 1h0m0s | if nonzero, entries in system.web_sessions older than this duration are periodically purged
server.web_session_timeout | duration | 168h0m0s | the duration that a newly created web session will be valid
sql.auth.resolve_membership_single_scan.enabled | boolean | true | determines whether to populate the role membership cache with a single scan
sql.contention.event_store.capacity | byte size | 64 MiB | the in-memory storage capacity per-node of contention event store
sql.contention.event_store.duration_threshold | duration | 0s | minimum contention duration to cause the contention events to be collected into crdb_internal.transaction_contention_events
sql.contention.txn_id_cache.max_size | byte size | 64 MiB | the maximum byte size TxnID cache will use (set to 0 to disable)
sql.cross_db_fks.enabled | boolean | false | if true, creating foreign key references across databases is allowed
sql.cross_db_sequence_owners.enabled | boolean | false | if true, creating sequences owned by tables from other databases is allowed
sql.cross_db_sequence_references.enabled | boolean | false | if true, sequences referenced by tables from other databases are allowed
sql.cross_db_views.enabled | boolean | false | if true, creating views that refer to other databases is allowed
sql.defaults.cost_scans_with_default_col_size.enabled | boolean | false | setting to true uses the same size for all columns to compute scan cost
sql.defaults.datestyle | enumeration | iso, mdy | default value for DateStyle session setting [iso, mdy = 0, iso, dmy = 1, iso, ymd = 2]
sql.defaults.default_hash_sharded_index_bucket_count | integer | 16 | used as bucket count if bucket count is not specified in hash sharded index definition
sql.defaults.default_int_size | integer | 8 | the size, in bytes, of an INT type
sql.defaults.disallow_full_table_scans.enabled | boolean | false | setting to true rejects queries that have planned a full table scan
sql.defaults.distsql | enumeration | auto | default distributed SQL execution mode [off = 0, auto = 1, on = 2, always = 3]
sql.defaults.experimental_alter_column_type.enabled | boolean | false | default value for experimental_alter_column_type session setting; enables the use of ALTER COLUMN TYPE for general conversions
sql.defaults.experimental_auto_rehoming.enabled | boolean | false | default value for experimental_enable_auto_rehoming; allows for rows in REGIONAL BY ROW tables to be auto-rehomed on UPDATE
sql.defaults.experimental_distsql_planning | enumeration | off | default experimental_distsql_planning mode; enables experimental opt-driven DistSQL planning [off = 0, on = 1]
sql.defaults.experimental_enable_unique_without_index_constraints.enabled | boolean | false | default value for experimental_enable_unique_without_index_constraints session setting; disables unique without index constraints by default
sql.defaults.experimental_implicit_column_partitioning.enabled | boolean | false | default value for experimental_enable_temp_tables; allows for the use of implicit column partitioning
sql.defaults.experimental_stream_replication.enabled | boolean | false | default value for experimental_stream_replication session setting; enables the ability to setup a replication stream
sql.defaults.experimental_temporary_tables.enabled | boolean | false | default value for experimental_enable_temp_tables; allows for use of temporary tables by default
sql.defaults.foreign_key_cascades_limit | integer | 10000 | default value for foreign_key_cascades_limit session setting; limits the number of cascading operations that run as part of a single query
sql.defaults.idle_in_session_timeout | duration | 0s | default value for the idle_in_session_timeout session setting; controls the duration a session is permitted to idle before the session is terminated; if set to 0, there is no timeout
sql.defaults.idle_in_transaction_session_timeout | duration | 0s | default value for the idle_in_transaction_session_timeout session setting; controls the duration a session is permitted to idle in a transaction before the session is terminated; if set to 0, there is no timeout
sql.defaults.implicit_select_for_update.enabled | boolean | true | default value for enable_implicit_select_for_update session setting; enables FOR UPDATE locking during the row-fetch phase of mutation statements
sql.defaults.insert_fast_path.enabled | boolean | true | default value for enable_insert_fast_path session setting; enables a specialized insert path
sql.defaults.intervalstyle | enumeration | postgres | default value for IntervalStyle session setting [postgres = 0, iso_8601 = 1, sql_standard = 2]
sql.defaults.large_full_scan_rows | float | 1000 | default value for large_full_scan_rows session setting which determines the maximum table size allowed for a full scan when disallow_full_table_scans is set to true
sql.defaults.locality_optimized_partitioned_index_scan.enabled | boolean | true | default value for locality_optimized_partitioned_index_scan session setting; enables searching for rows in the current region before searching remote regions
sql.defaults.lock_timeout | duration | 0s | default value for the lock_timeout session setting; controls the duration a query is permitted to wait while attempting to acquire a lock on a key or while blocking on an existing lock in order to perform a non-locking read on a key; if set to 0, there is no timeout
sql.defaults.on_update_rehome_row.enabled | boolean | true | default value for on_update_rehome_row; enables ON UPDATE rehome_row() expressions to trigger on updates
sql.defaults.optimizer_use_histograms.enabled | boolean | true | default value for optimizer_use_histograms session setting; enables usage of histograms in the optimizer by default
sql.defaults.optimizer_use_multicol_stats.enabled | boolean | true | default value for optimizer_use_multicol_stats session setting; enables usage of multi-column stats in the optimizer by default
sql.defaults.override_alter_primary_region_in_super_region.enabled | boolean | false | default value for override_alter_primary_region_in_super_region; allows for altering the primary region even if the primary region is a member of a super region
sql.defaults.override_multi_region_zone_config.enabled | boolean | false | default value for override_multi_region_zone_config; allows for overriding the zone configs of a multi-region table or database
sql.defaults.prefer_lookup_joins_for_fks.enabled | boolean | false | default value for prefer_lookup_joins_for_fks session setting; causes foreign key operations to use lookup joins when possible
sql.defaults.primary_region | string | | if not empty, all databases created without a PRIMARY REGION will implicitly have the given PRIMARY REGION
sql.defaults.reorder_joins_limit | integer | 8 | default number of joins to reorder
sql.defaults.require_explicit_primary_keys.enabled | boolean | false | default value for requiring explicit primary keys in CREATE TABLE statements
sql.defaults.results_buffer.size | byte size | 16 KiB | default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. This can be overridden on an individual connection with the 'results_buffer_size' parameter. Note that auto-retries generally only happen while no results have been delivered to the client, so reducing this size can increase the number of retriable errors a client receives. On the other hand, increasing the buffer size can increase the delay until the client receives the first result row. Updating the setting only affects new connections. Setting to 0 disables any buffering.
sql.defaults.serial_normalization | enumeration | rowid | default handling of SERIAL in table definitions [rowid = 0, virtual_sequence = 1, sql_sequence = 2, sql_sequence_cached = 3, unordered_rowid = 4]
sql.defaults.statement_timeout | duration | 0s | default value for the statement_timeout session setting; controls the duration a query is permitted to run before it is canceled; if set to 0, there is no timeout
sql.defaults.stub_catalog_tables.enabled | boolean | true | default value for stub_catalog_tables session setting
sql.defaults.super_regions.enabled | boolean | false | default value for enable_super_regions; allows for the usage of super regions
sql.defaults.transaction_rows_read_err | integer | 0 | the limit for the number of rows read by a SQL transaction which - once exceeded - will fail the transaction (or will trigger a logging event to SQL_INTERNAL_PERF for internal transactions); use 0 to disable
sql.defaults.transaction_rows_read_log | integer | 0 | the threshold for the number of rows read by a SQL transaction which - once exceeded - will trigger a logging event to SQL_PERF (or SQL_INTERNAL_PERF for internal transactions); use 0 to disable
sql.defaults.transaction_rows_written_err | integer | 0 | the limit for the number of rows written by a SQL transaction which - once exceeded - will fail the transaction (or will trigger a logging event to SQL_INTERNAL_PERF for internal transactions); use 0 to disable
sql.defaults.transaction_rows_written_log | integer | 0 | the threshold for the number of rows written by a SQL transaction which - once exceeded - will trigger a logging event to SQL_PERF (or SQL_INTERNAL_PERF for internal transactions); use 0 to disable
sql.defaults.use_declarative_schema_changer | enumeration | on | default value for use_declarative_schema_changer session setting; disables new schema changer by default [off = 0, on = 1, unsafe = 2, unsafe_always = 3]
sql.defaults.vectorize | enumeration | on | default vectorize mode [on = 0, on = 2, experimental_always = 3, off = 4]
sql.defaults.zigzag_join.enabled | boolean | true | default value for enable_zigzag_join session setting; allows use of zig-zag join by default
sql.distsql.max_running_flows | integer | -128 | the value - when positive - used as is, or the value - when negative - multiplied by the number of CPUs on a node, to determine the maximum number of concurrent remote flows that can be run on the node
sql.distsql.temp_storage.workmem | byte size | 64 MiB | maximum amount of memory in bytes a processor can use before falling back to temp storage
sql.guardrails.max_row_size_err | byte size | 512 MiB | maximum size of row (or column family if multiple column families are in use) that SQL can write to the database, above which an error is returned; use 0 to disable
sql.guardrails.max_row_size_log | byte size | 64 MiB | maximum size of row (or column family if multiple column families are in use) that SQL can write to the database, above which an event is logged to SQL_PERF (or SQL_INTERNAL_PERF if the mutating statement was internal); use 0 to disable
sql.hash_sharded_range_pre_split.max | integer | 16 | max pre-split ranges to have when adding hash sharded index to an existing table
sql.log.slow_query.experimental_full_table_scans.enabled | boolean | false | when set to true, statements that perform a full table/index scan will be logged to the slow query log even if they do not meet the latency threshold. Must have the slow query log enabled for this setting to have any effect.
sql.log.slow_query.internal_queries.enabled | boolean | false | when set to true, internal queries which exceed the slow query log threshold are logged to a separate log. Must have the slow query log enabled for this setting to have any effect.
sql.log.slow_query.latency_threshold | duration | 0s | when set to non-zero, log statements whose service latency exceeds the threshold to a secondary logger on each node
sql.metrics.index_usage_stats.enabled | boolean | true | collect per index usage statistics
sql.metrics.max_mem_reported_stmt_fingerprints | integer | 100000 | the maximum number of reported statement fingerprints stored in memory
sql.metrics.max_mem_reported_txn_fingerprints | integer | 100000 | the maximum number of reported transaction fingerprints stored in memory
sql.metrics.max_mem_stmt_fingerprints | integer | 100000 | the maximum number of statement fingerprints stored in memory
sql.metrics.max_mem_txn_fingerprints | integer | 100000 | the maximum number of transaction fingerprints stored in memory
sql.metrics.statement_details.dump_to_logs | boolean | false | dump collected statement statistics to node logs when periodically cleared
sql.metrics.statement_details.enabled | boolean | true | collect per-statement query statistics
sql.metrics.statement_details.gateway_node.enabled | boolean | true | save the gateway node for each statement fingerprint. If false, the value will be stored as 0.
sql.metrics.statement_details.plan_collection.enabled | boolean | false | periodically save a logical plan for each fingerprint
sql.metrics.statement_details.plan_collection.period | duration | 5m0s | the time until a new logical plan is collected
sql.metrics.statement_details.threshold | duration | 0s | minimum execution time to cause statement statistics to be collected. If configured, no transaction stats are collected.
sql.metrics.transaction_details.enabled | boolean | true | collect per-application transaction statistics
sql.multiple_modifications_of_table.enabled | boolean | false | if true, allow statements containing multiple INSERT ON CONFLICT, UPSERT, UPDATE, or DELETE subqueries modifying the same table, at the risk of data corruption if the same row is modified multiple times by a single statement (multiple INSERT subqueries without ON CONFLICT cannot cause corruption and are always allowed)
sql.multiregion.drop_primary_region.enabled | boolean | true | allows dropping the PRIMARY REGION of a database if it is the last region
sql.notices.enabled | boolean | true | enable notices in the server/client protocol being sent
sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled | boolean | false | if enabled, uniqueness checks may be planned for mutations of UUID columns updated with gen_random_uuid(); otherwise, uniqueness is assumed due to near-zero collision probability
sql.spatial.experimental_box2d_comparison_operators.enabled | boolean | false | enables the use of certain experimental box2d comparison operators
sql.stats.automatic_collection.enabled | boolean | true | automatic statistics collection mode
sql.stats.automatic_collection.fraction_stale_rows | float | 0.2 | target fraction of stale rows per table that will trigger a statistics refresh
sql.stats.automatic_collection.min_stale_rows | integer | 500 | target minimum number of stale rows per table that will trigger a statistics refresh
sql.stats.cleanup.recurrence | string | @hourly | cron-tab recurrence for SQL Stats cleanup job
sql.stats.flush.enabled | boolean | true | if set, SQL execution statistics are periodically flushed to disk
sql.stats.flush.interval | duration | 10m0s | the interval at which SQL execution statistics are flushed to disk; this value must be less than or equal to 1 hour
sql.stats.histogram_collection.enabled | boolean | true | histogram collection mode
sql.stats.multi_column_collection.enabled | boolean | true | multi-column statistics collection mode
sql.stats.persisted_rows.max | integer | 1000000 | maximum number of rows of statement and transaction statistics that will be persisted in the system tables
sql.stats.post_events.enabled | boolean | false | if set, an event is logged for every CREATE STATISTICS job
sql.stats.response.max | integer | 20000 | the maximum number of statements and transaction stats returned in a CombinedStatements request
sql.stats.response.show_internal.enabled | boolean | false | controls if statistics for internal executions should be returned by the CombinedStatements endpoint. This endpoint is used to display statistics on the Statement and Transaction fingerprint pages under SQL Activity
sql.telemetry.query_sampling.enabled | boolean | false | when set to true, executed queries will emit an event on the telemetry logging channel
sql.temp_object_cleaner.cleanup_interval | duration | 30m0s | how often to clean up orphaned temporary objects
sql.temp_object_cleaner.wait_interval | duration | 30m0s | how long after creation a temporary object will be cleaned up
sql.trace.log_statement_execute | boolean | false | set to true to enable logging of executed statements
sql.trace.session_eventlog.enabled | boolean | false | set to true to enable session tracing; note that enabling this may have a negative performance impact
sql.trace.stmt.enable_threshold | duration | 0s | enables tracing on all statements; statements executing for longer than this duration will have their trace logged (set to 0 to disable); note that enabling this may have a negative performance impact; this setting applies to individual statements within a transaction and is therefore finer-grained than sql.trace.txn.enable_threshold
sql.trace.txn.enable_threshold | duration | 0s | enables tracing on all transactions; transactions open for longer than this duration will have their trace logged (set to 0 to disable); note that enabling this may have a negative performance impact; this setting is coarser-grained than sql.trace.stmt.enable_threshold because it applies to all statements within a transaction as well as client communication (e.g. retries)
sql.ttl.default_delete_batch_size | integer | 100 | default amount of rows to delete in a single query during a TTL job
sql.ttl.default_delete_rate_limit | integer | 0 | default delete rate limit for all TTL jobs. Use 0 to signify no rate limit.
sql.ttl.default_range_concurrency | integer | 1 | default amount of ranges to process at once during a TTL delete
sql.ttl.default_select_batch_size | integer | 500 | default amount of rows to select in a single query during a TTL job
sql.ttl.job.enabled | boolean | true | whether the TTL job is enabled
timeseries.storage.enabled | boolean | true | if set, periodic timeseries data is stored within the cluster; disabling is not recommended unless you are storing the data elsewhere
timeseries.storage.resolution_10s.ttl | duration | 240h0m0s | the maximum age of time series data stored at the 10 second resolution. Data older than this is subject to rollup and deletion.
timeseries.storage.resolution_30m.ttl | duration | 2160h0m0s | the maximum age of time series data stored at the 30 minute resolution. Data older than this is subject to deletion.
trace.debug.enable | boolean | false | if set, traces for recent requests can be seen at https://<ui>/debug/requests
trace.jaeger.agent | string | | the address of a Jaeger agent to receive traces using the Jaeger UDP Thrift protocol, as <host>:<port>. If no port is specified, 6381 will be used.
trace.opentelemetry.collector | string | | address of an OpenTelemetry trace collector to receive traces using the otel gRPC protocol, as <host>:<port>. If no port is specified, 4317 will be used.
trace.span_registry.enabled | boolean | true | if set, ongoing traces can be seen at https://<ui>/#/debug/tracez
trace.zipkin.collector | string | | the address of a Zipkin instance to receive traces, as <host>:<port>. If no port is specified, 9411 will be used.
version | version | 22.1 | set the active cluster version in the format '<major>.<minor>'

View current cluster settings

Use the SHOW CLUSTER SETTING statement.
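A few common forms, shown here as run from the cockroach sql shell:

    -- Show the value of a single setting:
    > SHOW CLUSTER SETTING diagnostics.reporting.enabled;

    -- Show all public cluster settings:
    > SHOW CLUSTER SETTINGS;

    -- Include reserved (internal) settings as well:
    > SHOW ALL CLUSTER SETTINGS;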

Change a cluster setting

Use the SET CLUSTER SETTING statement.
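For example, to change a setting and later restore its default (the setting used here is only an illustration):

    -- Stop collecting per-statement statistics:
    > SET CLUSTER SETTING sql.metrics.statement_details.enabled = false;

    -- Restore the default value:
    > RESET CLUSTER SETTING sql.metrics.statement_details.enabled;

    -- Equivalent to RESET:
    > SET CLUSTER SETTING sql.metrics.statement_details.enabled = DEFAULT;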

Before changing a cluster setting, please note the following:

  • Changing a cluster setting is not instantaneous, as the change must be propagated to other nodes in the cluster.

  • Do not change cluster settings while upgrading to a new version of CockroachDB. Wait until all nodes have been upgraded before you make the change; a quick way to check is sketched below.
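For example, you can read a setting back to confirm that a change has taken effect, and check the active cluster version before changing anything else (a minimal sketch; to verify propagation, run it while connected to each node in turn):

    -- Confirm a recently changed setting on the node you are connected to:
    > SHOW CLUSTER SETTING kv.snapshot_rebalance.max_rate;

    -- Confirm the active cluster version before changing other settings:
    > SHOW CLUSTER SETTING version;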

See also

