`IMPORT INTO` only works for existing tables. For information on how to import data into new tables, see `IMPORT`.
Only members of the `admin` role can run `IMPORT INTO`. By default, the `root` user belongs to the `admin` role.
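If needed, an existing `admin` member can add a user to the role; a minimal sketch (`maxroach` is a placeholder username):

> GRANT admin TO maxroach;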
While importing into an existing table, the table is taken offline.
| Parameter | Description |
|-----------|-------------|
| `table_name` | The name of the table you want to import into. |
| `column_name` | The table columns you want to import. Note: Currently, target columns are not enforced. |
| `file_location` | The URL of a CSV file containing the table data. This can be a comma-separated list of URLs to CSV files. For an example, see Import into an existing table from multiple CSV files below. |
| `<option> [= <value>]` | Control your import's behavior with these options. |
Import file URLs
URLs for the files you want to import must use the format shown below. For examples, see Example file URLs.
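In general, an import file URL takes the form `[scheme]://[host]/[path]?[parameters]`, with the scheme, host, and parameters for each storage location summarized in the table below.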
| Location | Scheme | Host | Parameters |
|----------|--------|------|------------|
| Amazon 1 | `s3` | Bucket name | `AUTH`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` |
| Azure | `azure` | N/A (see Example file URLs) | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME` |
| Google Cloud 2 | `gs` | Bucket name | `AUTH` (optional): `implicit` or `specified`, `CREDENTIALS` |
| HTTP 3 | `http` | Remote host | N/A |
| NFS/Local 4 | `nodelocal` | Empty or `nodeID` 5 | N/A |
| S3-compatible services 6 | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION` (optional), `AWS_ENDPOINT` |
If you write to `nodelocal` storage in a multi-node cluster, individual data files will be written to the `extern` directories of arbitrary nodes and will likely not work as intended. To work correctly, each node must have the `--external-io-dir` flag point to the same NFS mount or other network-backed, shared storage.
If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard `HTTP_PROXY` and `HTTPS_PROXY` environment variables when starting CockroachDB.
1 If the `AUTH` parameter is not provided, AWS connections default to `specified` and the access keys must be provided in the URI parameters. If the `AUTH` parameter is `implicit`, the access keys can be omitted and the credentials will be loaded from the environment.
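For example, relying on implicit AWS credentials might look like the following sketch (the bucket, file, and target table are placeholders):

> IMPORT INTO customers (id, name) CSV DATA ( 's3://acme-co/customers.csv?AUTH=implicit' );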
2 If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` cluster setting will be used if it is non-empty, otherwise the `implicit` behavior is used. If the `AUTH` parameter is `implicit`, all GCS connections use Google's default authentication strategy. If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` cluster setting must be set to the contents of a service account file, which will be used during authentication. If the `AUTH` parameter is `specified`, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the `CREDENTIALS` parameter. The JSON key object should be base64-encoded (using the standard encoding in RFC 4648).
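For example, per-statement GCS authentication might look like the following sketch (the bucket and file are placeholders, and `encoded-key` stands for the base64-encoded JSON key object):

> IMPORT INTO customers (id, name) CSV DATA ( 'gs://acme-co/customers.csv?AUTH=specified&CREDENTIALS=encoded-key' );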
3 You can create your own HTTP server with Caddy or nginx. A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` cluster setting, which will be used when verifying certificates from HTTPS URLs.
4 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while starting the node. If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled.
5 The host component of NFS/Local can either be empty or the `nodeID`. If the `nodeID` is specified, it is currently ignored (i.e., any node can be sent work and it will look in its local input/output directory); however, the `nodeID` will likely be required in the future.
6 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` cluster setting, which will be used when verifying certificates from an S3-compatible service. The `AWS_REGION` parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it.
Example file URLs
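For instance, the examples later on this page use URLs like the following:

- Amazon S3: `s3://acme-co/customers.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]`
- Azure: `azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co`
- Google Cloud: `gs://acme-co/customers.csv`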
Import options

You can control the `IMPORT` process's behavior using any of the following key-value pairs as an `<option> [= <value>]`.
| Key | Value | Required? | Example |
|-----|-------|-----------|---------|
| `delimiter` | The unicode character that delimits columns in your rows. | No | To use tab-delimited values. |
| `comment` | The unicode character that identifies rows to skip. | No | |
| `nullif` | The string that should be converted to NULL. | No | To use empty columns as NULL. |
| `skip` | The number of rows to be skipped while importing a file. | No | To import CSV files with column headers. |
| `decompress` | The decompression codec to be used: `gzip`, `bzip`, `auto`, or `none`. | No | |
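As an illustrative sketch (the table, columns, and file URL are placeholders), a tab-delimited file with one header row could be imported with:

> IMPORT INTO customers (id, name) CSV DATA ( 'gs://acme-co/customers.tsv' ) WITH delimiter = e'\t', skip = '1';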
For examples showing how to use these options, see the IMPORT - Examples section.
Requirements

Before using `IMPORT INTO`, you should have:
- An existing table to import into (use `CREATE TABLE`).
- The CSV data you want to import, preferably hosted on cloud storage. This location must be equally accessible to all nodes using the same import file location. This is necessary because the `IMPORT INTO` statement is issued once by the client, but is executed concurrently across all nodes of the cluster. For more information, see the Import file location section below.
Each node in the cluster is assigned an equal part of the imported data, and so must have enough temp space to store it. In addition, data is persisted as a normal table, and so there must also be enough space to hold the final, replicated data. The node's first-listed/default `store` directory must have enough available storage to hold its portion of the data.

On `cockroach start`, if you set `--max-disk-temp-storage`, it must also be greater than the portion of the data a node will store in temp space.
Import file location
We strongly recommend using cloud/remote storage (Amazon S3, Google Cloud Platform, etc.) for the data you want to import.
Local files are supported; however, they must be accessible to all nodes in the cluster using identical Import file URLs.
To import a local file, you have the following options:
Option 1. Run a local file server to make the file accessible from all nodes.
Option 2. Make the file accessible from each local node's store:
- Create an `extern` directory on each node's store. The pathname will differ depending on the `--store` flag passed to `cockroach start` (if any), but will look something like `/path/to/cockroach-data/extern/`.
- Copy the file to each node's `extern` directory.
- Assuming the file is called `data.sql`, you can access it in your `IMPORT` statement using an import file URL like the one sketched after this list.
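A minimal sketch, reusing the `customers` table from the examples below and assuming the `nodelocal` scheme described in the table above (the exact form of the host component may depend on your CockroachDB version):

> IMPORT INTO customers (id, name) CSV DATA ( 'nodelocal:///data.sql' );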
All nodes are used during the import job, which means all nodes' CPU and RAM will be partially consumed by the `IMPORT` task in addition to serving normal traffic.
Viewing and controlling import jobs
After CockroachDB successfully initiates an import into an existing table, it registers the import as a job, which you can view with `SHOW JOBS`.
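For instance, one way to check on imports is to filter the jobs list (a sketch; `job_id`, `job_type`, `status`, and `fraction_completed` are standard `SHOW JOBS` columns):

> SELECT job_id, status, fraction_completed FROM [SHOW JOBS] WHERE job_type = 'IMPORT';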
If initiated correctly, the statement returns when the import is finished or if it encounters an error. In some cases, the import can continue after an error has been returned (the error message will tell you that the import has resumed in the background).
Pausing and then resuming an `IMPORT INTO` job will cause it to restart from the beginning.
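For reference, a job can be paused and resumed by its ID (the ID below is a placeholder obtained from `SHOW JOBS`); per the note above, the resumed `IMPORT INTO` job starts over from the beginning:

> PAUSE JOB 27536791415282;
> RESUME JOB 27536791415282;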
Import into an existing table from a CSV file
> IMPORT INTO customers (id, name) CSV DATA ( 's3://acme-co/customers.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]&AWS_SESSION_TOKEN=[placeholder]' );
> IMPORT INTO customers (id, name) CSV DATA ( 'azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' );
> IMPORT INTO customers (id, name) CSV DATA ( 'gs://acme-co/customers.csv' );
Import into an existing table from multiple CSV files
> IMPORT INTO customers (id, name) CSV DATA ( 's3://acme-co/customers.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]', 's3://acme-co/customers2.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]', 's3://acme-co/customers3.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]', 's3://acme-co/customers4.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]' );
> IMPORT INTO customers (id, name) CSV DATA ( 'azure://acme-co/customer-import-data1.1.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co', 'azure://acme-co/customer-import-data1.2.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co', 'azure://acme-co/customer-import-data1.3.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co', 'azure://acme-co/customer-import-data1.4.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co', 'azure://acme-co/customer-import-data1.5.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' );
> IMPORT INTO customers (id, name) CSV DATA ( 'gs://acme-co/customers.csv', 'gs://acme-co/customers2.csv', 'gs://acme-co/customers3.csv', 'gs://acme-co/customers4.csv' );
- While importing into an existing table, the table is taken offline.
- After importing into an existing table, constraints will be un-validated and need to be re-validated.
- Imported rows must not conflict with existing rows in the table or any unique secondary indexes.
- `IMPORT INTO` works for only a single existing table, and the table must not be interleaved.
- `IMPORT INTO` cannot be used within a transaction.
- `IMPORT INTO` can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the `kv.bulk_io_write.max_rate` cluster setting to a value below your max disk write speed. For example, to set it to 10MB/s, execute:
> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';