This page has instructions for handling errors and troubleshooting problems that may arise during application development.
Troubleshoot query problems
If you are not satisfied with your SQL query performance, follow the instructions in Optimize Statement Performance Overview to make sure you are avoiding common performance problems such as full table scans and missing indexes.
If you have already optimized your SQL queries as described in Optimize Statement Performance Overview and are still having issues such as:
- Hanging or "stuck" queries, usually due to contention with long-running transactions
- Queries that are slow some of the time (but not always)
- Low throughput of queries
see Troubleshoot SQL Behavior.
If you aren't sure whether SQL query performance needs to be improved on your cluster, see Identify slow statements.
Transaction retry errors
Messages with the error code `40001` and the string `restart transaction` are known as transaction retry errors. These indicate that a transaction failed due to contention with another concurrent or recent transaction attempting to write to the same data. The transaction must be retried by the client.
In most cases, the correct actions to take when encountering transaction retry errors are:

- Update your application to support client-side retry handling when transaction retry errors are encountered, following the guidance for the specific error type.
- Take steps to minimize transaction retry errors in the first place. This means reducing transaction contention overall, and increasing the likelihood that CockroachDB can automatically retry a failed transaction.
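Client-side retry handling generally amounts to a loop: run the transaction, and if the error carries SQLSTATE `40001`, back off briefly and try again. The following is a minimal sketch in Python; the `SerializationError` stub stands in for a real driver error (such as psycopg2's `SerializationFailure`, which exposes the code via `pgcode`), and the function names are illustrative, not a CockroachDB API:

```python
import random
import time

class SerializationError(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001 (transaction retry error)."""
    pgcode = "40001"

def run_transaction(txn_fn, max_retries=5):
    """Run txn_fn, retrying on transaction retry errors (SQLSTATE 40001)
    with exponential backoff plus jitter; re-raise anything else."""
    for attempt in range(1, max_retries + 1):
        try:
            return txn_fn()
        except Exception as err:
            if getattr(err, "pgcode", None) != "40001" or attempt == max_retries:
                raise
            # Back off so the contending transaction has time to finish.
            time.sleep(min(0.1 * 2 ** attempt, 2.0) * random.random())

# Simulate a transaction that hits contention twice, then succeeds.
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationError("restart transaction")
    return "committed"

result = run_transaction(flaky_txn)
print(result)  # prints "committed"
```

With a real driver, `txn_fn` would issue `BEGIN`/`COMMIT` (or use the driver's transaction context manager), and the handler would `ROLLBACK` before each retry so the next attempt starts from a clean transaction.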
Unsupported SQL features
CockroachDB has support for most SQL features.
Additionally, CockroachDB supports the PostgreSQL wire protocol and the majority of its syntax. This means that existing applications can often be migrated to CockroachDB without changing application code.
However, you may encounter features of SQL or the PostgreSQL dialect that are not supported by CockroachDB. For example, the following PostgreSQL features are not supported:
- Stored procedures
  - CockroachDB has support for user-defined functions.
- Dropping the primary key
  - Note: Each table must have a primary key associated with it. You can drop and add a primary key constraint within a single transaction.
- Creating a database from a template
- Foreign data wrappers
For more information about the differences between CockroachDB and PostgreSQL feature support, see PostgreSQL Compatibility.
For more information about the SQL standard features supported by CockroachDB, see SQL Feature Support.
Troubleshoot cluster problems
As a developer, you will mostly be working with the CockroachDB SQL API.
However, you may need to access the underlying cluster to troubleshoot issues where the root cause is not your SQL, but something happening at the cluster level. Symptoms of cluster-level issues can include:
- Cannot join a node to an existing cluster
- Networking, client connection, or authentication issues
- Clock sync, replication, or node liveness issues
- Capacity planning, storage, or memory issues
- Node decommissioning failures
For more information about how to troubleshoot cluster-level issues, see Troubleshoot Cluster Setup.
Troubleshoot SQL client application problems
High client CPU load, connection pool exhaustion, or increased connection latency when SCRAM Password-based Authentication is enabled
When SASL/SCRAM-SHA-256 Secure Password-based Authentication (SCRAM Authentication) is enabled on a cluster, some additional CPU load is incurred on client applications, which are responsible for handling SCRAM hashing. It's important to plan for this additional CPU load to avoid performance degradation, CPU starvation, and connection pool exhaustion on the client. For example, the following set of circumstances can exhaust the client application's resources:
- SCRAM Authentication is enabled on the cluster (the `server.user_login.password_encryption` cluster setting is set to `scram-sha-256`).
- The client driver's connection pool has no defined maximum number of connections, or is configured to close idle connections eagerly.
- The client application issues transactions concurrently.
In this situation, each new connection uses more CPU on the client application server than it would when connecting to a cluster without SCRAM Authentication enabled. This additional CPU load slows each concurrent transaction, so concurrent transactions and connections accumulate, and it can become difficult for the client application server to recover.
Some applications may also see increased connection latency, because SCRAM requires additional round trips during authentication, which adds latency to establishing each connection.
For more information about how SCRAM works, see SASL/SCRAM-SHA-256 Secure Password-based Authentication.
Mitigation steps while keeping SCRAM enabled
To mitigate against this situation while keeping SCRAM authentication enabled, Cockroach Labs recommends that you:
- Test and adjust your workloads in batches when migrating to SCRAM authentication.
- Start by enabling SCRAM authentication in a testing environment, and test the performance of your client application against the types of workloads you expect it to handle in production before rolling the changes out to production.
- Limit the maximum number of connections in the client driver's connection pool.
- Limit the maximum number of concurrent transactions the client application can issue.
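One way to cap concurrent transactions on the client is to gate transaction execution with a semaphore, alongside capping the driver's pool size (for example, SQLAlchemy's `pool_size` and `max_overflow` parameters). The sketch below demonstrates the semaphore gate with a fake transaction; the limit of 4 and all helper names are illustrative:

```python
import threading
import time

MAX_CONCURRENT_TXNS = 4  # illustrative limit; tune to your workload
_txn_slots = threading.BoundedSemaphore(MAX_CONCURRENT_TXNS)

def run_limited(txn_fn):
    """Block until a transaction slot is free, then run txn_fn."""
    with _txn_slots:
        return txn_fn()

# Demonstrate that no more than MAX_CONCURRENT_TXNS run at once.
_lock = threading.Lock()
active = 0
peak = 0

def fake_txn():
    global active, peak
    with _lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.02)  # simulate a database round trip
    with _lock:
        active -= 1

threads = [threading.Thread(target=run_limited, args=(fake_txn,)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds MAX_CONCURRENT_TXNS
```

Bounding concurrency this way keeps a burst of requests from opening connections (and paying SCRAM hashing costs) faster than the client CPU can absorb.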
If the above steps don't work, you can try lowering the default hashing cost and reapplying the password as described below.
Lower default hashing cost and reapply the password
To decrease the CPU usage of SCRAM password hashing while keeping SCRAM enabled:

- Lower the `server.user_login.password_hashes.default_cost.scram_sha_256` cluster setting to 4096:

```sql
SET CLUSTER SETTING server.user_login.password_hashes.default_cost.scram_sha_256 = 4096;
```

- Make sure the `server.user_login.rehash_scram_stored_passwords_on_cost_change.enabled` cluster setting is set to `true` (its default value), so that previously stored passwords are rehashed with the lower cost.

When lowering the default hashing cost, we recommend that you use strong, complex passwords for SQL users.
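The SCRAM cost is the PBKDF2 iteration count the client uses to derive the salted password (RFC 7677 defines SCRAM-SHA-256's `Hi()` as PBKDF2-HMAC-SHA-256), which is why lowering it reduces per-connection client CPU roughly linearly. A sketch of that client-side work, with illustrative iteration counts:

```python
import hashlib
import os
import time

def scram_salted_password(password: str, salt: bytes, iterations: int) -> bytes:
    """SCRAM-SHA-256's SaltedPassword step: PBKDF2-HMAC-SHA-256 (RFC 5802/7677)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
for cost in (4096, 10610):  # 10610 shown only as a higher comparison cost
    start = time.perf_counter()
    key = scram_salted_password("a-long-random-example-password", salt, cost)
    elapsed = time.perf_counter() - start
    print(f"cost={cost}: {len(key)}-byte key in {elapsed * 1000:.1f} ms")
```

Because the work scales with the iteration count, a lower cost derives keys faster but makes stored hashes cheaper to brute-force, hence the recommendation above to pair a lower cost with strong passwords.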
If you are still seeing higher connection latencies than before, you can downgrade from SCRAM authentication.
Downgrade from SCRAM authentication
As an alternative to the mitigation steps listed above, you can downgrade from SCRAM authentication to bcrypt as follows:
- Set the `server.user_login.password_encryption` cluster setting to `crdb-bcrypt`:

```sql
SET CLUSTER SETTING server.user_login.password_encryption = 'crdb-bcrypt';
```

- Make sure the `server.user_login.downgrade_scram_stored_passwords_to_bcrypt.enabled` cluster setting is set to `true`:

```sql
SET CLUSTER SETTING server.user_login.downgrade_scram_stored_passwords_to_bcrypt.enabled = true;
```

- The `server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled` cluster setting can be left at its default value.
See also
- Connect to a CockroachDB Cluster
- Run Multi-Statement Transactions
- Optimize Statement Performance Overview