This page provides solutions for common performance issues in your clusters. Use these recipes to identify and resolve performance problems in your workload.
Indicators that your workload is experiencing contention
- Your application is experiencing degraded performance, with transactions failing due to serialization errors such as SQLSTATE: 40001.
- The SQL Statement Contention graph is showing spikes over time.
- The Transaction Restarts graph is showing spikes in retries over time.
Fix contention problems
Find the transactions and statements within the transactions that are experiencing contention. CockroachDB has several ways of tracking down transactions that are experiencing contention:
- The Transactions page and the Statements page in the DB Console allow you to sort by contention.
- Query the crdb_internal.cluster_contended_indexes and crdb_internal.cluster_contended_tables tables for your database to find the indexes and tables that are experiencing contention:
SELECT * FROM movr.crdb_internal.cluster_contended_indexes;
SELECT * FROM movr.crdb_internal.cluster_contended_tables;
After identifying the transactions or statements that are causing contention, follow the steps outlined in our best practices recommendations to avoid contention.
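One such best practice is to take locks up front with SELECT ... FOR UPDATE in read-then-write transactions, so conflicting transactions queue behind the lock instead of failing with serialization retries. A minimal sketch, assuming a hypothetical row in the movr.rides table (the id value and revenue column are illustrative):

```sql
BEGIN;

-- Lock the row before reading it, so a concurrent writer waits
-- rather than forcing this transaction to retry.
SELECT revenue FROM movr.rides
    WHERE id = '5a190c83-000f-4000-8000-000000000001'
    FOR UPDATE;

-- Perform the dependent write inside the same transaction.
UPDATE movr.rides SET revenue = revenue + 10
    WHERE id = '5a190c83-000f-4000-8000-000000000001';

COMMIT;
```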
Indicators that your workload has statements with full table scans
The following query returns statements with full table scans in their statement plan:
SHOW FULL TABLE SCANS;
The following query against the crdb_internal.node_statement_statistics table returns the number of statements with full table scans:
SELECT count(*) AS total_full_scans
FROM crdb_internal.node_statement_statistics
WHERE full_scan = true;
Viewing the statement plan on the Statement details page of the DB Console indicates that the plan contains full table scans.
The Full Table/Index Scans graph in the DB Console is showing spikes over time.
Fix full table scans in statements
Full table scans often result in poor statement performance. Not every full table scan indicates poor performance, however; the cost-based optimizer may choose a full table scan when an index or join scan would result in longer execution time.
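When a frequently run statement's full scan is degrading performance, adding a secondary index on the columns in its filter typically lets the optimizer use an index scan instead. A sketch, assuming a hypothetical users table filtered on a city column:

```sql
-- Inspect the plan; a FULL SCAN annotation on the scan node means
-- the entire table is read to answer the query.
EXPLAIN SELECT * FROM users WHERE city = 'new york';

-- Add a secondary index on the filtered column so the optimizer
-- can use an index scan instead of a full table scan.
CREATE INDEX ON users (city);
```

Re-running EXPLAIN after the index is created should show the scan restricted to the new index.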
Indicators that your tables are using suboptimal primary keys
- The Hardware metrics dashboard in the DB Console shows high resource usage per node.
- The Problem Ranges report on the Advanced Debug page of the DB Console indicates a high number of queries per second on a subset of ranges or nodes.
Fix suboptimal primary keys
Evaluate the schema of your table to see if you can redistribute data more evenly across multiple ranges. Specifically, make sure you have followed best practices when selecting your primary key.
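For example, a time-ordered primary key concentrates inserts on the newest range of the table, while a randomly generated UUID key spreads them across the keyspace. A sketch with a hypothetical orders table:

```sql
-- An ordered key (e.g., a timestamp or unique_rowid()) sends all new
-- writes to the same range, creating a hotspot:
--   CREATE TABLE orders (id INT PRIMARY KEY DEFAULT unique_rowid(), ...);

-- A random UUID primary key distributes inserts evenly across ranges:
CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    customer_id UUID,
    total DECIMAL
);
```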
If a workload with a small dataset (for example, a dataset that contains few index key values) is experiencing resource contention, consider splitting your tables and indexes to distribute their ranges across multiple nodes and reduce resource contention.
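Range splits can be requested manually with ALTER TABLE ... SPLIT AT; the table name and split values below are illustrative:

```sql
-- Split the table's data into additional ranges at the given primary
-- key values, so the ranges can be rebalanced across nodes.
-- (Table and values are illustrative.)
ALTER TABLE users SPLIT AT VALUES ('chicago'), ('new york'), ('seattle');
```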
Indicators that your tables are experiencing slow writes
If the Overview dashboard in the DB Console shows high service latency when the QPS of UPDATE statements is high, your tables are experiencing slow writes.
Fix slow writes
Secondary indexes can improve read workload performance. However, there is overhead in maintaining secondary indexes that can affect your write performance. You should profile your tables periodically to determine whether an index is worth the overhead. To identify infrequently accessed indexes that could be candidates to drop, query the
crdb_internal.index_usage_statistics table:
SELECT
    ti.descriptor_name AS table_name,
    ti.index_name,
    total_reads,
    last_read
FROM crdb_internal.index_usage_statistics AS us
JOIN crdb_internal.table_indexes AS ti
    ON us.index_id = ti.index_id AND us.table_id = ti.descriptor_id
ORDER BY total_reads ASC;
Use the values in the total_reads and last_read columns to identify indexes that have low usage or are stale and could be dropped.
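Once the statistics confirm an index is unused (zero or very low total_reads and a stale last_read), it can be removed; the table and index names here are illustrative:

```sql
-- Drop an index that the usage statistics show is never read.
-- (Table and index names are illustrative.)
DROP INDEX movr.rides@rides_city_idx;
```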