CockroachDB supports bundling multiple SQL statements into a single all-or-nothing transaction. Each transaction guarantees ACID semantics spanning arbitrary tables and rows, even when data is distributed. If a transaction succeeds, all mutations are applied together with virtual simultaneity. If any part of a transaction fails, the entire transaction is aborted, and the database is left unchanged. CockroachDB guarantees that while a transaction is pending, it is isolated from other concurrent transactions with serializable isolation.
For a detailed discussion of CockroachDB transaction semantics, see How CockroachDB Does Distributed Atomic Transactions and Serializable, Lockless, Distributed: Isolation in CockroachDB. Note that the explanation of the transaction model described in this blog post is slightly out of date. See the Transaction Retries section for more details.
Each of the following SQL statements controls transactions in some way.
| Statement | Function |
|-----------|----------|
| `BEGIN` | Initiate a transaction, as well as control its priority. |
| `SET TRANSACTION` | Control a transaction's priority. |
| `COMMIT` | Commit a regular transaction, or clear the connection after committing a transaction using the advanced retry protocol. |
| `ROLLBACK` | Abort a transaction and roll the database back to its state before the transaction began. |
| `SHOW` | Display the current transaction settings. |
| `SAVEPOINT` | (Advanced) Used to implement advanced client-side transaction retries, which can improve performance and avoid starvation when transactions are retried. |
| `RELEASE SAVEPOINT` | (Advanced) Commit a retryable transaction. |
| `ROLLBACK TO SAVEPOINT` | (Advanced) Handle retry errors by rolling back a transaction's changes and increasing its priority. |
The Advanced statements above are used to implement advanced client-side transaction retries, and are mostly of use to driver and ORM authors.
Application developers who are using a framework or library that does not have advanced retry logic built in should implement an application-level retry loop with exponential backoff as shown in Client-side intervention.
```sql
> BEGIN;
> SAVEPOINT cockroach_restart;
<transaction statements>
> RELEASE SAVEPOINT cockroach_restart;
> COMMIT;
```
At any time before it's committed, you can abort the transaction by executing the `ROLLBACK` statement.
Clients using transactions must also include logic to handle retries.
To handle errors in transactions, you should check for the following types of server-side errors:
| Type | Description |
|------|-------------|
| Retry Errors | Errors with the code `40001`, which indicate that a transaction failed because it conflicted with a concurrent transaction and must be retried by the client. |
| Ambiguous Errors | Errors with the code `40003`, which indicate that it is unknown whether the transaction committed or failed. |
| SQL Errors | All other errors, which indicate that a statement in the transaction failed. For example, violating the `UNIQUE` constraint generates an error. After encountering one of these errors, you can abort the transaction with `ROLLBACK`. |
If you want to attempt the same set of statements again, you must begin a completely new transaction.
Transactions may require retries if they experience deadlock or read/write contention with other concurrent transactions which cannot be resolved without allowing potential serializable anomalies. (However, it's possible to mitigate read-write conflicts by performing reads using `AS OF SYSTEM TIME`.)
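For example, an `AS OF SYSTEM TIME` read at a timestamp 10 seconds in the past (the `orders` table and predicate here are hypothetical) is served from a fixed historical timestamp and so does not contend with concurrent writers:

```sql
> SELECT * FROM orders AS OF SYSTEM TIME '-10s' WHERE customer = 1;
```

The trade-off is that the query can return slightly stale data.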
There are two cases in which transaction retries occur:
- Automatic retries, which CockroachDB processes for you.
- Client-side intervention, which your application must handle.
CockroachDB automatically retries individual statements (implicit transactions) and transactions sent from the client as a single batch, as long as the size of the results being produced for the client, including protocol overhead, is less than 16KiB by default. Once that buffer overflows, CockroachDB starts streaming results back to the client, at which point automatic retries cannot be performed any more. As long as the results of a single statement or batch of statements are known to stay clear of this limit, the client does not need to worry about transaction retries.
You can change the results buffer size for all new sessions using the `sql.defaults.results_buffer.size` cluster setting, or for a specific session using the `results_buffer_size` session variable. Note, however, that decreasing the buffer size can increase the number of transaction retry errors a client receives, whereas increasing the buffer size can increase the delay until the client receives the first result row.
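For instance, the buffer could be doubled from its 16KiB default like this (the values below are illustrative only, not recommendations, and the accepted value syntax may vary by version):

```sql
> SET CLUSTER SETTING sql.defaults.results_buffer.size = 32768;
> SET results_buffer_size = '32768';
```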
In future versions of CockroachDB, we plan on providing stronger guarantees for read-only queries that return at most one row, regardless of the size of that row.
Individual statements are treated as implicit transactions, and so they fall under the rules described above. If the results are small enough, they will be automatically retried. In particular, `INSERT`/`UPDATE`/`DELETE` statements without a `RETURNING` clause are guaranteed to have minuscule result sizes.
For example, the following statement would be automatically retried by CockroachDB:
> DELETE FROM customers WHERE id = 1;
Transactions can be sent from the client as a single batch. Batching implies that CockroachDB receives multiple statements without being asked to return results in between them; instead, CockroachDB returns results after executing all of the statements, except when the accumulated results overflow the buffer mentioned above, in which case they are returned sooner and automatic retries can no longer be performed.
Batching is generally controlled by your driver or client's behavior. Technically, it can be achieved in two ways, both supporting automatic retries:
When the client/driver is using the PostgreSQL Extended Query protocol, a batch is made up of all queries sent in between two `Sync` messages. Many drivers support such batches through explicit batching constructs.
When the client/driver is using the PostgreSQL Simple Query protocol, a batch is made up of semicolon-separated strings sent as a unit to CockroachDB. For example, in Go, this code would send a single batch (which would be automatically retried):
```go
db.Exec(
    "BEGIN; DELETE FROM customers WHERE id = 1; DELETE FROM orders WHERE customer = 1; COMMIT;",
)
```
Within a batch of statements, CockroachDB infers that the statements are not conditional on the results of previous statements, so it can retry all of them. Of course, if the transaction relies on conditional logic (e.g., statement 2 is executed only for some results of statement 1), then the transaction cannot all be sent to CockroachDB as a single batch. In these common cases, CockroachDB cannot retry, say, statement 2 in isolation: since the results for statement 1 have already been delivered to the client by the time statement 2 forces the transaction to retry, the client needs to be involved in retrying the whole transaction, and so you should write your transactions to use client-side intervention.
Your application should include client-side retry handling when the statements are sent individually, such as:
```sql
> BEGIN;
> UPDATE products SET inventory = 0 WHERE sku = '8675309';
> INSERT INTO orders (customer, status) VALUES (1, 'new');
> COMMIT;
```
To indicate that a transaction must be retried, CockroachDB signals an error with the code `40001` and an error message that begins with the string `"restart transaction"`.
To handle these types of errors you have the following options:
- If your database library or framework provides a method for retryable transactions (it will often be documented as a tool for handling deadlocks), use it. If you're building an application in the following languages, we have code to make client-side retries simpler:
  - Go developers can use the `github.com/cockroachdb/cockroach-go/crdb` package, which handles retries automatically. For more information, see Build a Go App with CockroachDB.
  - Python developers can use SQLAlchemy with the `sqlalchemy-cockroachdb` adapter. For more information, see Build a Python App with CockroachDB.
  - Java developers accessing the database with JDBC can re-use the example code implementing retry logic shown in Build a Java app with CockroachDB.
- Most users, such as application authors: Abort the transaction using the `ROLLBACK` statement, and then reissue all of the statements in the transaction. For an example, see the Client-side intervention example.
- Advanced users, such as library authors: Use the `SAVEPOINT` statement to create retryable transactions. Retryable transactions can improve performance because their priority is increased each time they are retried, making them more likely to succeed the longer they're in your system. For instructions showing how to do this, see Advanced Client-Side Transaction Retries.
Client-side intervention example
The Python-like pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement advanced retry handling logic, so it can be used from any programming language or environment. In particular, your retry loop must:

- Raise an error if the `max_retries` limit is reached
- Retry on `40001` error codes
- `COMMIT` at the end of the `try` block
- Implement exponential backoff logic as shown below for best performance
```
while true:
    n++
    if n == max_retries:
        throw Error("did not succeed within N retries")
    try:
        # add logic here to run all your statements
        conn.exec('COMMIT')
        break
    catch error:
        if error.code != "40001":
            throw error
        else:
            # This is a retry error, so we roll back the current transaction
            # and sleep for a bit before retrying. The sleep time increases
            # for each failed transaction. Adapted from
            # https://colintemple.com/2017/03/java-exponential-backoff/
            conn.exec('ROLLBACK')
            sleep_ms = int(((2**n) * 100) + rand(100 - 1) + 1)
            sleep(sleep_ms)  # Assumes your sleep() takes milliseconds
```
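To check the shape of this loop concretely, here is a self-contained Python sketch. The `FakeConn` class and `RetryError` are hypothetical stand-ins, not a real driver API: the simulated connection raises a retryable `40001` error on the first two `COMMIT`s, then succeeds, and the sleep is shortened so the demo runs instantly.

```python
import random
import time

class RetryError(Exception):
    """Simulated serialization failure (SQLSTATE 40001)."""
    def __init__(self):
        super().__init__("restart transaction")
        self.code = "40001"

class FakeConn:
    """Hypothetical stand-in for a driver connection: the first
    `failures` COMMITs raise a retryable error, then COMMIT succeeds."""
    def __init__(self, failures):
        self.failures = failures
        self.log = []

    def exec(self, stmt):
        self.log.append(stmt)
        if stmt == "COMMIT" and self.failures > 0:
            self.failures -= 1
            raise RetryError()

def run_transaction(conn, max_retries=5):
    n = 0
    while True:
        n += 1
        if n == max_retries:
            raise Exception("did not succeed within %d retries" % max_retries)
        try:
            # add logic here to run all your statements
            conn.exec("COMMIT")
            return n  # number of attempts, including the successful one
        except Exception as error:
            if getattr(error, "code", None) != "40001":
                raise
            # Retry error: roll back, then back off exponentially --
            # the sleep time roughly doubles with each failed attempt.
            conn.exec("ROLLBACK")
            sleep_ms = (2 ** n) * 100 + random.randint(1, 100)
            time.sleep(sleep_ms / 1e6)  # shortened 1000x for the demo

conn = FakeConn(failures=2)
print(run_transaction(conn))  # 3: two retries, then a successful COMMIT
```

Against a real driver, the `except` branch would match on the driver's own error type and the sleep would use the full millisecond value.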
Transactions in CockroachDB lock the data resources they write during their execution. When a pending write from one transaction conflicts with a write of a concurrent transaction, the concurrent transaction must wait for the earlier transaction to complete before proceeding. When a dependency cycle is detected between transactions, the transaction with the higher priority aborts the other transaction to avoid deadlock; the aborted transaction must then be retried.
For more details about transaction contention and best practices for avoiding contention, see Understanding and Avoiding Transaction Contention.
Every transaction in CockroachDB is assigned an initial priority. By default, the transaction priority is `NORMAL`, but for transactions that should be given higher (or lower) preference in high-contention scenarios, you can set the priority in the `BEGIN` statement:
> BEGIN PRIORITY <LOW | NORMAL | HIGH>;
You can also set the priority immediately after a transaction is started:
> SET TRANSACTION PRIORITY <LOW | NORMAL | HIGH>;
To see the current priority of a transaction, use `SHOW TRANSACTION PRIORITY` or the `transaction_priority` session variable:
> SHOW transaction_priority;
```
  transaction_priority
+----------------------+
  normal
(1 row)
```
> SHOW TRANSACTION PRIORITY;
```
  transaction_priority
+----------------------+
  normal
(1 row)
```
`transaction_priority` is a read-only session variable that cannot be set directly.
When two transactions contend for the same resources indirectly, they may create a dependency cycle leading to a deadlock situation, where both transactions are waiting on the other to finish. In these cases, CockroachDB allows the transaction with higher priority to abort the other, which must then retry. On retry, the transaction inherits the higher priority. This means that each retry makes a transaction more likely to succeed in the event it again experiences deadlock.
CockroachDB executes all transactions at the strongest ANSI transaction isolation level: `SERIALIZABLE`. All other ANSI transaction isolation levels (e.g., `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ`) are automatically upgraded to `SERIALIZABLE`. Weaker isolation levels have historically been used to maximize transaction throughput. However, recent research has demonstrated that the use of weak isolation levels results in substantial vulnerability to concurrency-based attacks.
For a detailed discussion of isolation in CockroachDB transactions, see Serializable, Lockless, Distributed: Isolation in CockroachDB.
With `SERIALIZABLE` isolation, a transaction behaves as though it has the entire database all to itself for the duration of its execution. This means that no concurrent writers can affect the transaction unless they commit before it starts, and no concurrent readers can be affected by the transaction until it has successfully committed. This is the strongest level of isolation provided by CockroachDB, and it's the default.
`SERIALIZABLE` isolation permits no anomalies. To prevent write skew anomalies, `SERIALIZABLE` isolation may require transaction restarts. For a demonstration of `SERIALIZABLE` preventing write skew, see Serializable Transactions.
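To make the write skew anomaly concrete, here is a self-contained Python sketch (an illustration, not CockroachDB code) of the classic on-call-doctors example under a hypothetical snapshot-isolation level: two transactions each read the same snapshot, see two doctors on call, and each takes a different doctor off call. Because their write sets are disjoint, snapshot isolation would let both commit, breaking the invariant; `SERIALIZABLE` prevents this by restarting one of them.

```python
# Invariant: at least one doctor must remain on call.
on_call = {"alice": True, "bob": True}

# Both transactions read from the same consistent snapshot,
# taken before either of them writes.
snapshot_t1 = dict(on_call)
snapshot_t2 = dict(on_call)

# T1: "someone else is on call, so I can sign off" -- writes only alice's row.
if sum(snapshot_t1.values()) >= 2:
    on_call["alice"] = False

# T2 runs the same check against its own, now-stale snapshot
# and writes only bob's row.
if sum(snapshot_t2.values()) >= 2:
    on_call["bob"] = False

# The write sets ({alice} vs {bob}) are disjoint, so snapshot isolation
# detects no write-write conflict -- yet the invariant is broken:
print(sum(on_call.values()))  # 0 doctors left on call
```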
Comparison to ANSI SQL isolation levels
CockroachDB uses slightly different isolation levels than ANSI SQL isolation levels. In CockroachDB, `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` are aliases for `SERIALIZABLE`. The CockroachDB `SERIALIZABLE` level is stronger than the ANSI SQL `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` levels and equivalent to the ANSI SQL `SERIALIZABLE` level.
For more information about the relationship between these levels, see A Critique of ANSI SQL Isolation Levels.