CockroachDB supports importing data from .sql dumps and .csv files.
Import from Generic SQL Dump
You can execute batches of INSERT statements stored in .sql files (including those generated by cockroach dump) from the command line, importing data into your cluster.
$ cockroach sql --database=[database name] < statements.sql
Structuring each INSERT statement to include approximately 500 rows provides the best performance.
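As a sketch of that batching step (assuming a dump made up of single-row INSERT statements into one table; the file names and table are hypothetical), an awk pass can group rows into multi-row statements:

```shell
# Hypothetical single-row dump standing in for real cockroach dump output.
cat > statements.sql <<'EOF'
INSERT INTO mytable VALUES (1, 'a');
INSERT INTO mytable VALUES (2, 'b');
INSERT INTO mytable VALUES (3, 'c');
EOF

# Collect each row tuple, then emit multi-row INSERTs of up to 500 rows.
awk -F 'VALUES ' -v batch=500 '
  /^INSERT INTO/ {
    prefix = $1                   # "INSERT INTO mytable " (kept verbatim)
    sub(/;$/, "", $2)             # strip the trailing semicolon from the tuple
    rows[n++] = $2
  }
  END {
    for (i = 0; i < n; i += batch) {
      printf "%sVALUES ", prefix
      sep = ""
      for (j = i; j < i + batch && j < n; j++) {
        printf "%s%s", sep, rows[j]
        sep = ", "
      }
      print ";"
    }
  }' statements.sql > batched.sql

cat batched.sql
```

The resulting batched.sql can be imported the same way as any other generic SQL dump.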
Import from PostgreSQL Dump
If you’re importing data from a PostgreSQL deployment, you can load the .sql file generated by the pg_dump command to more quickly import data. .sql files generated by pg_dump provide better performance because they use the COPY statement instead of bulk INSERT statements.
Create PostgreSQL SQL File
The pg_dump command you want to use depends on whether you want to export your entire database or only specific tables:
$ pg_dump [database] > [filename].sql
$ pg_dump -t [table] [database] > [filename].sql
For more details, see PostgreSQL’s documentation on pg_dump.
Reformat SQL File
After generating the .sql file, you need to perform a few editing steps before importing it:

- Remove all statements from the file besides the CREATE TABLE and COPY statements.
- Manually add the table’s PRIMARY KEY constraint to the CREATE TABLE statement. This has to be done manually because PostgreSQL attempts to add the primary key after creating the table, but CockroachDB requires the primary key be defined upon table creation.
- Review any other constraints to ensure they’re properly listed on the table.
- Remove any unsupported elements, such as arrays.
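A rough first pass over those steps can be scripted (a sketch, not a complete solution; dump.sql and the table are hypothetical): grep drops the SET, ALTER, SELECT, and comment lines that pg_dump emits, leaving the CREATE TABLE and COPY blocks. The PRIMARY KEY constraint still has to be moved into the CREATE TABLE statement by hand afterward.

```shell
# Hypothetical pg_dump plain-format output.
cat > dump.sql <<'EOF'
SET statement_timeout = 0;
-- Name: t; Type: TABLE
CREATE TABLE t (id integer NOT NULL);
ALTER TABLE ONLY t ADD CONSTRAINT t_pkey PRIMARY KEY (id);
COPY t (id) FROM stdin;
1
\.
EOF

# Drop SET/ALTER/SELECT statements and comments; keep CREATE TABLE and COPY.
grep -v -E '^(SET |ALTER |SELECT |--)' dump.sql > reformatted.sql
cat reformatted.sql
```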
After reformatting the file, you can import it through psql:
$ psql -p [port] -h [node host] -d [database] -U [user] < [file name].sql
For reference, CockroachDB uses these defaults:

- [port]: 26257
- [user]: root
Import from CSV
You can import numeric data stored in .csv files by executing a bash script that reads values from the files and uses them in INSERT statements. Alternatively, you can convert the .csv file to a .sql file (you can find free conversion software online), and then import the .sql file as a generic SQL dump.
This template reads 3 columns of numerical data and converts them into INSERT statements, but you can easily adapt the variables (a, b, c) to any number of columns.
> \| IFS=","; while read a b c; do echo "INSERT INTO csv VALUES ($a, $b, $c);"; done < test.csv;
In this SQL shell example, use \! to look at the rows in a CSV file before creating a table and then using \| to insert those rows into the table.
> \! cat test.csv
12, 13, 14
10, 20, 30
> CREATE TABLE csv (x INT, y INT, z INT);

> \| IFS=","; while read a b c; do echo "INSERT INTO csv VALUES ($a, $b, $c);"; done < test.csv;

> SELECT * FROM csv;
+----+----+----+
| x  | y  | z  |
+----+----+----+
| 12 | 13 | 14 |
| 10 | 20 | 30 |
+----+----+----+
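The same conversion also works outside the SQL shell: generate a .sql file from the CSV, which can then be imported like any generic dump (a sketch; the file names and three-column layout match the example above, and IFS=', ' additionally strips the spaces after each comma):

```shell
# The sample data from the example above.
cat > test.csv <<'EOF'
12, 13, 14
10, 20, 30
EOF

# Emit one INSERT statement per CSV row; adapt x y z for other column counts.
while IFS=', ' read x y z; do
  echo "INSERT INTO csv VALUES ($x, $y, $z);"
done < test.csv > test.sql

cat test.sql
```

The resulting file could then be piped in with cockroach sql --database=[database name] < test.sql.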