CockroachDB supports importing data from .sql files and, for numeric data, from .csv files, as well as from PostgreSQL.

We're also working to develop more robust ways to import data, such as from backups stored on cloud hosting providers.

Import from SQL File

You can execute batches of INSERT statements stored in .sql files from the command line, letting you import data into your cluster.

$ cockroach sql --database=[database name] < statements.sql
For the best performance, group your data so that each INSERT statement includes approximately 500 rows.
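As a sketch of this batching, the following bash function groups CSV rows read from stdin into multi-row INSERT statements of roughly 500 rows each. The accounts table and its two columns are hypothetical; adjust them to your schema.

```shell
#!/usr/bin/env bash
# Group two-column CSV rows from stdin into multi-row INSERT statements.
# "accounts" is a hypothetical table; adjust the name and columns as needed.
batch_inserts() {
  local batch_size=${1:-500}
  local rows=()
  flush() {
    [ ${#rows[@]} -eq 0 ] && return
    # Join the accumulated value tuples with commas into one INSERT.
    printf 'INSERT INTO accounts VALUES %s;\n' "$(IFS=,; echo "${rows[*]}")"
    rows=()
  }
  local id balance
  while IFS=, read -r id balance; do
    rows+=("($id, $balance)")
    [ ${#rows[@]} -ge "$batch_size" ] && flush
  done
  flush
}
```

For example, `batch_inserts < rows.csv > statements.sql` produces a file you can feed to `cockroach sql` as shown above.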

Import from PostgreSQL

If you’re importing data from a PostgreSQL deployment, you can speed up the process by importing the .sql file generated by the pg_dump command.

The .sql files generated by pg_dump provide better performance because they use the COPY statement instead of bulk INSERT statements.

Create PostgreSQL SQL File

Which pg_dump command you want to use depends on whether you want to import your entire database or only specific tables:

  • Entire database:

    $ pg_dump [database] > [filename].sql
  • Specific tables:

    $ pg_dump -t [table] [database] > [filename].sql

For more details, see PostgreSQL’s documentation on pg_dump.

Reformat SQL File

After generating the .sql file, you need to perform a few editing steps before importing it:

  1. Remove all statements from the file besides the CREATE TABLE and COPY statements.
  2. Manually add the table’s PRIMARY KEY constraint to the CREATE TABLE statement.

    This has to be done manually because pg_dump adds the primary key after creating the table, but CockroachDB requires that the primary key be defined when the table is created.

  3. Review any other constraints to ensure they’re properly listed on the table.
  4. Remove any unsupported elements, such as arrays.
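To illustrate steps 1 and 2 above, here is a sketch of what a cleaned-up dump file might look like. The customers table and its data are hypothetical; pg_dump emits the primary key as a separate ALTER TABLE statement, which must be folded into the CREATE TABLE statement and deleted.

```shell
# Sketch of a cleaned-up dump file; the "customers" table is hypothetical.
# pg_dump originally emitted something like:
#   CREATE TABLE customers (id INT NOT NULL, name TEXT);
#   ALTER TABLE ONLY customers ADD CONSTRAINT customers_pkey PRIMARY KEY (id);
# For CockroachDB, fold the constraint into CREATE TABLE, drop the ALTER,
# and keep only the CREATE TABLE and COPY statements:
cat > cleaned.sql <<'EOF'
CREATE TABLE customers (id INT PRIMARY KEY, name TEXT);
COPY customers (id, name) FROM stdin;
1	Alice
2	Bob
\.
EOF
```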

Import Data

After reformatting the file, you can import it through psql:

$ psql -p [port] -h [node host] -d [database] -U [user] < [file name].sql

For reference, CockroachDB uses these defaults:

  • [port]: 26257
  • [user]: root

Import from CSV

You can import numeric data stored in .csv files by executing a bash script that reads values from the files and uses them in INSERT statements.

To import non-numeric data, convert the .csv file to a .sql file (you can find free conversion software online), and then import the .sql file.
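One way to do a simple conversion yourself is a small bash function that wraps each text field in single quotes before emitting INSERT statements. This is only a sketch: the users table is hypothetical, and the naive parsing assumes fields contain no commas.

```shell
#!/usr/bin/env bash
# Convert simple two-column text CSV rows on stdin into INSERT statements.
# "users" is a hypothetical table; fields must not contain commas.
csv_to_sql() {
  local name city
  while IFS=, read -r name city; do
    name=${name//"'"/"''"}   # escape single quotes the SQL way ('' for ')
    city=${city//"'"/"''"}
    printf "INSERT INTO users VALUES ('%s', '%s');\n" "$name" "$city"
  done
}
```

For example, `csv_to_sql < people.csv > people.sql` produces a file you can import as shown earlier.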


This template reads three columns of numerical data and converts them into INSERT statements. You can easily adapt the variables (a, b, c) to any number of columns.

> \| IFS=","; while read a b c; do echo "INSERT INTO csv VALUES ($a, $b, $c);"; done < test.csv;


In this SQL shell example, \! is used to look at the rows in a CSV file before creating a table, and \| is then used to insert those rows into the table.

> \! cat test.csv
12, 13, 14
10, 20, 30
> CREATE TABLE csv (x INT, y INT, z INT);

> \| IFS=","; while read a b c; do echo "INSERT INTO csv VALUES ($a, $b, $c);"; done < test.csv;

> SELECT * FROM csv;
+----+----+----+
| x  | y  | z  |
+----+----+----+
| 12 | 13 | 14 |
| 10 | 20 | 30 |
+----+----+----+
