
How do I bulk insert data into CockroachDB?

Currently, you can bulk insert data with batches of INSERT statements not exceeding a few MB. The size of your rows determines how many fit within that limit, but batches of 1,000 to 10,000 rows typically perform best. For more details, see Import Data.
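As a sketch, each batch is a single INSERT statement carrying many value tuples (the table and values here are illustrative):

```sql
> INSERT INTO customers (id, name) VALUES
    (1, 'Anna'),
    (2, 'Boris'),
    (3, 'Carla');
```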

How do I auto-generate unique row IDs in CockroachDB?

To auto-generate unique row IDs, use the SERIAL data type, which is an alias for INT with the unique_rowid() function as the default value:
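For example, with a hypothetical table:

```sql
> CREATE TABLE test (id SERIAL PRIMARY KEY, name STRING);
```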


On insert, the unique_rowid() function generates a default value from the timestamp and ID of the node executing the insert, a combination that is likely to be globally unique except in extreme cases where a very large number of IDs (100,000+) are generated per node per second. In such cases, you should use a BYTES column with the uuid_v4() function as the default value instead:
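A minimal sketch, again using a hypothetical table:

```sql
> CREATE TABLE test (id BYTES PRIMARY KEY DEFAULT uuid_v4(), name STRING);
```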


Because uuid_v4() generates 128-bit values, much larger than 64-bit INT values, there is virtually no chance of generating non-unique values.

The distribution of IDs at the key-value level may also be a consideration. When using BYTES with uuid_v4() as the default value, consecutively generated IDs will be spread across different key-value ranges (and therefore likely across different nodes), whereas when using INT with unique_rowid() as the default value, consecutively generated IDs may end up in the same key-value range.

How do I get the last ID/SERIAL value inserted into a table?

There’s no function in CockroachDB for returning last inserted values, but you can use the RETURNING clause of the INSERT statement.

For example, this is how you’d use RETURNING to return an auto-generated SERIAL value:

> CREATE TABLE users (id SERIAL, name STRING);

> INSERT INTO users (name) VALUES ('mike') RETURNING id;

Does CockroachDB support JOIN?

CockroachDB has basic, non-optimized support for SQL JOIN, whose performance we're working to improve.

To learn more, see our blog posts on CockroachDB's JOINs:

- Modesty in Simplicity: CockroachDB's JOIN
- On the Way to Better SQL Joins

When should I use interleaved tables?

Interleaving tables improves query performance by optimizing the key-value structure of closely related tables, attempting to keep data on the same key-value range if it's likely to be read and written together.

You're most likely to benefit from interleaved tables when:

Does CockroachDB support JSON or Protobuf datatypes?

Not currently, but we plan to offer JSON/Protobuf datatypes.

How do I know which index CockroachDB will select for a query?

To see which indexes CockroachDB is using for a given query, you can use the EXPLAIN statement, which will print out the query plan, including any indexes that are being used:
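For example, with an illustrative table and query:

```sql
> EXPLAIN SELECT name FROM test WHERE id = 1;
```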


If you'd like to tell the query planner which index to use, you can do so via some special syntax for index hints:

> SELECT col1 FROM tbl1@idx1;

How do I log SQL queries?

For production clusters, the best way to log queries is to turn on the cluster-wide setting sql.trace.log_statement_execute:

> SET CLUSTER SETTING sql.trace.log_statement_execute = true;

With this setting on, each node of the cluster writes all SQL queries it executes to its log file. When you no longer need to log queries, you can turn the setting back off:

> SET CLUSTER SETTING sql.trace.log_statement_execute = false;

Alternatively, if you are testing CockroachDB locally and want to log queries executed just by a specific node, you can pass --vmodule=executor=2 to the cockroach start command when starting the node. For example, to start a single node locally and log all SQL queries it executes, you'd run:

$ cockroach start --insecure --host=localhost --vmodule=executor=2

Does CockroachDB support a UUID type?

Not at this time, but storing a 16-byte array in a BYTES column should perform just as well.

How does CockroachDB sort results when ORDER BY is not used?

When an ORDER BY clause is not used in a SELECT query, retrieved rows are not sorted by any consistent criteria. Instead, CockroachDB returns them as the coordinating node receives them.
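If you need a deterministic ordering, add an explicit ORDER BY clause (the table and column here are illustrative):

```sql
> SELECT id, name FROM users ORDER BY name;
```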

Why are my INT columns returned as strings in JavaScript?

In CockroachDB, all INTs are represented with 64 bits of precision, but JavaScript numbers only have 53 bits of precision. This means that large integers stored in CockroachDB are not exactly representable as JavaScript numbers. For example, JavaScript will round the integer 235191684988928001 to the nearest representable value, 235191684988928000. Notice that the last digit is different. This is particularly problematic when using the unique_rowid() function, since unique_rowid() nearly always returns integers that require more than 53 bits of precision to represent.
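You can observe this rounding directly in JavaScript:

```javascript
// 2^53 - 1 is the largest integer a JavaScript number can represent exactly.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

// Integer literals beyond that limit are rounded to the nearest representable
// double, so these two distinct integers compare equal:
console.log(235191684988928001 === 235191684988928000); // true
```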

To avoid this loss of precision, Node's pg driver will, by default, return all CockroachDB INTs as strings.

// Schema: CREATE TABLE users (id INT DEFAULT unique_rowid(), name STRING);
pgClient.query("SELECT id FROM users WHERE name = 'Roach' LIMIT 1", function(err, res) {
  var idString = res.rows[0].id;
  // idString === '235191684988928001'
  // typeof idString === 'string'
});

To perform another query using the value of idString, you can simply use idString directly, even where an INT type is expected. The string will automatically be coerced into a CockroachDB INT.

pgClient.query("UPDATE users SET name = 'Ms. Roach' WHERE id = $1", [idString], function(err, res) {
  // All should be well!
});

If you instead need to perform arithmetic on INTs in JavaScript, you will need to use a big integer library like Long.js. Do not use the built-in parseInt function.

parseInt(idString, 10) + 1; // WRONG: returns 235191684988928000
require('long').fromString(idString).add(1).toString(); // GOOD: returns '235191684988928002'
