Insert Data

This page has instructions for getting data into CockroachDB with various programming languages, using the INSERT SQL statement.

Before you begin

Before reading this page, make sure that you have set up a CockroachDB cluster, installed a client driver or ORM framework, and connected to the database.

Note:

Your application should use a retry loop to handle transaction errors that can occur under contention.
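
The retry logic can be as simple as re-running the transaction whenever the server returns SQLSTATE 40001 (serialization_failure). Below is a minimal sketch in Go, assuming the database/sql and github.com/lib/pq imports; the runWithRetry helper is illustrative, not part of any CockroachDB library.

// 'db' is an open database connection

// runWithRetry retries fn whenever CockroachDB reports a retryable
// transaction error (SQLSTATE 40001), and gives up after maxAttempts.
func runWithRetry(db *sql.DB, maxAttempts int, fn func(*sql.Tx) error) error {
    var err error
    for i := 0; i < maxAttempts; i++ {
        var tx *sql.Tx
        if tx, err = db.Begin(); err != nil {
            return err
        }
        if err = fn(tx); err == nil {
            if err = tx.Commit(); err == nil {
                return nil
            }
        } else {
            // Rollback errors are ignored; the original error matters more.
            tx.Rollback()
        }
        // Retry only on serialization failures; surface everything else.
        if pqErr, ok := err.(*pq.Error); !ok || pqErr.Code != "40001" {
            return err
        }
    }
    return err
}

In practice, you can also use a vetted helper such as crdb.ExecuteTx from the cockroach-go library, which implements this loop for you.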

Insert rows

When inserting multiple rows, a single multi-row insert statement is faster than multiple single-row statements.

CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);
INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);

For more information about how to use the built-in SQL client, see the cockroach sql reference docs.

In Go, using a PostgreSQL driver with database/sql:
// 'db' is an open database connection

// Insert two rows into the "accounts" table.
if _, err := db.Exec(
    "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
    log.Fatal(err)
}
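
To keep values out of the query string, you can also pass them as placeholder parameters; a minimal variation of the snippet above, assuming a driver that uses PostgreSQL-style $1 placeholders (such as lib/pq):

// Insert the same two rows, passing the values as placeholders.
if _, err := db.Exec(
    "INSERT INTO accounts (id, balance) VALUES ($1, $2), ($3, $4)",
    1, 1000, 2, 250); err != nil {
    log.Fatal(err)
}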

For complete examples, see the Build a Go App with CockroachDB tutorials.

In Java, using the PostgreSQL JDBC driver:
// ds is an org.postgresql.ds.PGSimpleDataSource

try (Connection connection = ds.getConnection()) {
    connection.setAutoCommit(false);
    PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)");

    pstmt.setInt(1, 1);
    pstmt.setInt(2, 1000);
    pstmt.addBatch();

    pstmt.setInt(1, 2);
    pstmt.setInt(2, 250);
    pstmt.addBatch();

    pstmt.executeBatch();
    connection.commit();
} catch (SQLException e) {
    System.out.printf("sql state = [%s]\ncause = [%s]\nmessage = [%s]\n",
                      e.getSQLState(), e.getCause(), e.getMessage());
}

For complete examples, see the Build a Java App with CockroachDB tutorials.

In Python, using psycopg2:
# conn is a psycopg2 connection

with conn.cursor() as cur:
    cur.execute('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')

conn.commit()

For complete examples, see the Build a Python App with CockroachDB tutorials.

Bulk insert

If you need to get a lot of data into a CockroachDB cluster quickly, use the IMPORT statement instead of sending SQL INSERTs from application code. It will be much faster because it bypasses the SQL layer altogether and writes directly to the data store using low-level commands. For instructions, see the Migration Overview.
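
For illustration, the sketch below bulk-loads the accounts table with IMPORT INTO; the CSV URL is a placeholder, and the statement can equally be run from the cockroach sql shell. See the IMPORT reference for the supported storage locations and options.

// 'db' is an open database connection

// Bulk-load rows from a CSV file instead of issuing INSERT statements.
// The URL below is a placeholder; use a location your cluster can reach.
if _, err := db.Exec(
    "IMPORT INTO accounts (id, balance) CSV DATA ('https://example.com/accounts.csv')"); err != nil {
    log.Fatal(err)
}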

Limit the size of rows

To help you avoid failures arising from misbehaving applications that bloat the size of rows, you can specify the behavior when a row or individual column family larger than a specified size is written to the database. Use the cluster settings sql.guardrails.max_row_size_log to discover large rows and sql.guardrails.max_row_size_err to reject large rows.
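
Both settings accept byte-size values. A minimal sketch of configuring them from Go, assuming an open connection with the privileges needed to change cluster settings; the thresholds shown are illustrative, not recommendations:

// 'db' is an open database connection with admin privileges

// Log rows larger than 1 MiB and reject rows larger than 16 MiB.
if _, err := db.Exec(
    "SET CLUSTER SETTING sql.guardrails.max_row_size_log = '1MiB'"); err != nil {
    log.Fatal(err)
}
if _, err := db.Exec(
    "SET CLUSTER SETTING sql.guardrails.max_row_size_err = '16MiB'"); err != nil {
    log.Fatal(err)
}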

When you write a row that exceeds sql.guardrails.max_row_size_log:

  • INSERT, UPSERT, UPDATE, CREATE TABLE AS, CREATE INDEX, ALTER TABLE, ALTER INDEX, IMPORT, or RESTORE statements will log a LargeRow event to the SQL_PERF channel.
  • SELECT, DELETE, TRUNCATE, and DROP are not affected.

When you write a row that exceeds sql.guardrails.max_row_size_err:

  • INSERT, UPSERT, and UPDATE statements will fail with a code 54000 (program_limit_exceeded) error.

  • CREATE TABLE AS, CREATE INDEX, ALTER TABLE, ALTER INDEX, IMPORT, and RESTORE statements will log a LargeRowInternal event to the SQL_INTERNAL_PERF channel.

  • SELECT, DELETE, TRUNCATE, and DROP are not affected.

You cannot update existing rows that violate the limit unless the update shrinks the size of the row below the limit. You can select, delete, alter, back up, and restore such rows. We recommend using the accompanying setting sql.guardrails.max_row_size_log in conjunction with SELECT pg_column_size() queries to detect and fix any existing large rows before lowering sql.guardrails.max_row_size_err.
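
For example, a query along the following lines approximates each row's size by summing pg_column_size over its columns and surfaces the largest rows first; a sketch in Go, assuming the accounts table from above (adjust the column list for your own tables):

// 'db' is an open database connection

// Find the ten largest rows in accounts, approximating row size as the
// sum of the sizes of its columns.
rows, err := db.Query(
    "SELECT id, pg_column_size(id) + pg_column_size(balance) AS approx_bytes " +
        "FROM accounts ORDER BY approx_bytes DESC LIMIT 10")
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
    var id, approxBytes int
    if err := rows.Scan(&id, &approxBytes); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("row %d: ~%d bytes\n", id, approxBytes)
}
if err := rows.Err(); err != nil {
    log.Fatal(err)
}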

See also

Reference information related to this task:

  • INSERT
  • IMPORT
  • cockroach sql
  • Migration Overview

Other common tasks:

  • Query Data
  • Update Data
  • Delete Data

