Build a Java App with CockroachDB and JDBC

Warning:
CockroachDB v20.1 is no longer supported. For more details, see the Release Support Policy.

This tutorial shows you how to build a simple Java application with CockroachDB and the Java JDBC driver.

Note:

We recommend using Java versions 8+ with CockroachDB.

Tip:

For a sample app and tutorial that uses Spring Data JDBC and CockroachDB, see Build a Spring App with CockroachDB and JDBC.

Before you begin

  1. Install CockroachDB.
  2. Start up a secure or insecure local cluster.
  3. Choose the instructions that correspond to whether your cluster is secure or insecure. Steps 2 through 4 below cover a secure cluster; the equivalent steps for an insecure cluster follow them.

Step 1. Install the Java JDBC driver

Download and set up the Java JDBC driver as described in the official documentation. We recommend using the PostgreSQL JDBC 42.2.9 driver.
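
If you want to confirm that the driver is set up correctly, a quick check is to load the driver class by name. The snippet below is a minimal sketch and is not part of the tutorial code; run it with the driver jar on your classpath, for example java -classpath .:/path/to/postgresql.jar DriverCheck.

public class DriverCheck {
    public static void main(String[] args) throws Exception {
        // Load the driver class by name; this throws a
        // ClassNotFoundException if postgresql.jar is not on the classpath.
        Class.forName("org.postgresql.Driver");
        System.out.println("PostgreSQL JDBC driver is on the classpath.");
    }
}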

Step 2. Create the maxroach user and bank database (secure cluster)

Start the built-in SQL shell:

$ cockroach sql --certs-dir=certs

In the SQL shell, issue the following statements to create the maxroach user and bank database:

> CREATE USER IF NOT EXISTS maxroach;
> CREATE DATABASE bank;

Give the maxroach user the necessary permissions:

> GRANT ALL ON DATABASE bank TO maxroach;

Exit the SQL shell:

> \q

Step 3. Generate a certificate for the maxroach user

Create a certificate and key for the maxroach user by running the following command. The code samples will run as this user.

New in v19.1: You can pass the --also-generate-pkcs8-key flag to generate a key in PKCS#8 format, which is the standard key encoding format in Java. In this case, the generated PKCS#8 key will be named client.maxroach.key.pk8.

$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key
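
Before running the full sample, you can optionally verify that the certificate and the PKCS#8 key work from Java. The following is a minimal sketch that connects as maxroach using a JDBC connection URL; ssl, sslmode, sslcert, sslkey, and user are standard PGJDBC connection parameters, and the file paths assume the certs directory used in the command above.

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Connect as maxroach using the client certificate and the
        // PKCS#8 key generated by `cockroach cert create-client`.
        String url = "jdbc:postgresql://localhost:26257/bank"
            + "?ssl=true&sslmode=require"
            + "&sslcert=certs/client.maxroach.crt"
            + "&sslkey=certs/client.maxroach.key.pk8"
            + "&user=maxroach";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected to CockroachDB as maxroach.");
        }
    }
}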

Step 4. Run the Java code

The code below uses JDBC and the Data Access Object (DAO) pattern to map Java methods to SQL operations. It consists of two classes:

  1. BasicExample, which is where the application logic lives.
  2. BasicExampleDAO, which is used by the application to access the data store (in this case CockroachDB). This class has logic to handle transaction retries (see the BasicExampleDAO.runSQL() method).

It performs the following steps which roughly correspond to method calls in the BasicExample class.

  1. Create an accounts table in the bank database (BasicExampleDAO.createAccounts()).
  2. Insert account data using a Map that corresponds to the input to INSERT on the backend (BasicExampleDAO.updateAccounts(Map balance)).
  3. Transfer money from one account to another, printing out account balances before and after the transfer (BasicExampleDAO.transferFunds(int from, int to, int amount)).
  4. Insert random account data using JDBC's bulk insertion support (BasicExampleDAO.bulkInsertRandomAccountData()).
  5. Print out some account data (BasicExampleDAO.readAccounts(int limit)).
  6. Drop the accounts table and perform any other necessary cleanup (BasicExampleDAO.tearDown()). (This cleanup step means you can run this program more than once.)

It does all of the above using the practices we recommend for using JDBC with CockroachDB, which are listed in the Recommended Practices section below.

To run it:

  1. Download BasicExample.java, or create the file yourself and copy the code below.
  2. Compile and run the code (adding the PostgreSQL JDBC driver to your classpath):

    $ javac -classpath .:/path/to/postgresql.jar BasicExample.java
    
    $ java -classpath .:/path/to/postgresql.jar BasicExample
    

The contents of BasicExample.java:

import java.util.*;
import java.time.*;
import java.sql.*;
import javax.sql.DataSource;

import org.postgresql.ds.PGSimpleDataSource;

/*
  Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.

  Then, compile and run this example like so:

  $ export CLASSPATH=.:/path/to/postgresql.jar
  $ javac BasicExample.java && java BasicExample

  To build the javadoc:

  $ javadoc -package -cp .:./path/to/postgresql.jar BasicExample.java

  At a high level, this code consists of two classes:

  1. BasicExample, which is where the application logic lives.

  2. BasicExampleDAO, which is used by the application to access the
     data store.

*/

public class BasicExample {

    public static void main(String[] args) {

        // Configure the database connection.
        PGSimpleDataSource ds = new PGSimpleDataSource();
        ds.setServerName("localhost");
        ds.setPortNumber(26257);
        ds.setDatabaseName("bank");
        ds.setUser("maxroach");
        ds.setPassword(null);
        ds.setSsl(true);
        ds.setSslMode("require");
        ds.setSslRootCert("certs/client.root.crt");
        ds.setSslCert("certs/client.maxroach.crt");
        ds.setSslKey("certs/client.maxroach.key.pk8");
        ds.setReWriteBatchedInserts(true); // add `rewriteBatchedInserts=true` to pg connection string
        ds.setApplicationName("BasicExample");

        // Create DAO.
        BasicExampleDAO dao = new BasicExampleDAO(ds);

        // Test our retry handling logic if FORCE_RETRY is true.  This
        // method is only used to test the retry logic.  It is not
        // necessary in production code.
        dao.testRetryHandling();

        // Set up the 'accounts' table.
        dao.createAccounts();

        // Insert a few accounts "by hand", using INSERTs on the backend.
        Map<String, String> balances = new HashMap<String, String>();
        balances.put("1", "1000");
        balances.put("2", "250");
        int updatedAccounts = dao.updateAccounts(balances);
        System.out.printf("BasicExampleDAO.updateAccounts:\n    => %s total updated accounts\n", updatedAccounts);

        // How much money is in these accounts?
        int balance1 = dao.getAccountBalance(1);
        int balance2 = dao.getAccountBalance(2);
        System.out.printf("main:\n    => Account balances at time '%s':\n    ID %s => $%s\n    ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);

        // Transfer $100 from account 1 to account 2
        int fromAccount = 1;
        int toAccount = 2;
        int transferAmount = 100;
        int transferredAccounts = dao.transferFunds(fromAccount, toAccount, transferAmount);
        if (transferredAccounts != -1) {
            System.out.printf("BasicExampleDAO.transferFunds:\n    => $%s transferred between accounts %s and %s, %s rows updated\n", transferAmount, fromAccount, toAccount, transferredAccounts);
        }

        balance1 = dao.getAccountBalance(1);
        balance2 = dao.getAccountBalance(2);
        System.out.printf("main:\n    => Account balances at time '%s':\n    ID %s => $%s\n    ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);

        // Bulk insertion example using JDBC's batching support.
        int totalRowsInserted = dao.bulkInsertRandomAccountData();
        System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n    => finished, %s total rows inserted\n", totalRowsInserted);

        // Print out 10 account values.
        int accountsRead = dao.readAccounts(10);

        // Drop the 'accounts' table so this code can be run again.
        dao.tearDown();
    }
}

/**
 * Data access object used by 'BasicExample'.  Abstraction over some
 * common CockroachDB operations, including:
 *
 * - Auto-handling transaction retries in the 'runSQL' method
 *
 * - Example of bulk inserts in the 'bulkInsertRandomAccountData'
 *   method
 */

class BasicExampleDAO {

    private static final int MAX_RETRY_COUNT = 3;
    private static final String SAVEPOINT_NAME = "cockroach_restart";
    private static final String RETRY_SQL_STATE = "40001";
    private static final boolean FORCE_RETRY = false;

    private final DataSource ds;

    BasicExampleDAO(DataSource ds) {
        this.ds = ds;
    }

    /**
       Used to test the retry logic in 'runSQL'.  It is not necessary
       in production code.
    */
    void testRetryHandling() {
        if (this.FORCE_RETRY) {
            runSQL("SELECT crdb_internal.force_retry('1s':::INTERVAL)");
        }
    }

    /**
     * Run SQL code in a way that automatically handles the
     * transaction retry logic so we do not have to duplicate it in
     * various places.
     *
     * @param sqlCode a String containing the SQL code you want to
     * execute.  Can have placeholders, e.g., "INSERT INTO accounts
     * (id, balance) VALUES (?, ?)".
     *
     * @param args String Varargs to fill in the SQL code's
     * placeholders.
     * @return Integer Number of rows updated, or -1 if an error is thrown.
     */
    public Integer runSQL(String sqlCode, String... args) {

        // This block is only used to emit class and method names in
        // the program output.  It is not necessary in production
        // code.
        StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
        StackTraceElement elem = stacktrace[2];
        String callerClass = elem.getClassName();
        String callerMethod = elem.getMethodName();

        int rv = 0;

        try (Connection connection = ds.getConnection()) {

            // We're managing the commit lifecycle ourselves so we can
            // automatically issue transaction retries.
            connection.setAutoCommit(false);

            int retryCount = 0;

            while (retryCount < MAX_RETRY_COUNT) {

                Savepoint sp = connection.setSavepoint(SAVEPOINT_NAME);

                // This block is only used to test the retry logic.
                // It is not necessary in production code.  See also
                // the method 'testRetryHandling()'.
                if (FORCE_RETRY) {
                    forceRetry(connection); // SELECT 1
                }

                try (PreparedStatement pstmt = connection.prepareStatement(sqlCode)) {

                    // Loop over the args and insert them into the
                    // prepared statement based on their types.  In
                    // this simple example we classify the argument
                    // types as "integers" and "everything else"
                    // (a.k.a. strings).
                    for (int i=0; i<args.length; i++) {
                        int place = i + 1;
                        String arg = args[i];

                        try {
                            int val = Integer.parseInt(arg);
                            pstmt.setInt(place, val);
                        } catch (NumberFormatException e) {
                            pstmt.setString(place, arg);
                        }
                    }

                    if (pstmt.execute()) {
                        // We know that `pstmt.getResultSet()` will
                        // not return `null` if `pstmt.execute()` was
                        // true
                        ResultSet rs = pstmt.getResultSet();
                        ResultSetMetaData rsmeta = rs.getMetaData();
                        int colCount = rsmeta.getColumnCount();

                        // This printed output is for debugging and/or demonstration
                        // purposes only.  It would not be necessary in production code.
                        System.out.printf("\n%s.%s:\n    '%s'\n", callerClass, callerMethod, pstmt);

                        while (rs.next()) {
                            for (int i=1; i <= colCount; i++) {
                                String name = rsmeta.getColumnName(i);
                                String type = rsmeta.getColumnTypeName(i);

                                // In this "bank account" example we know we are only handling
                                // integer values (technically 64-bit INT8s, the CockroachDB
                                // default).  This code could be made into a switch statement
                                // to handle the various SQL types needed by the application.
                                if ("int8".equals(type)) {
                                    int val = rs.getInt(name);

                                    // This printed output is for debugging and/or demonstration
                                    // purposes only.  It would not be necessary in production code.
                                    System.out.printf("    %-8s => %10s\n", name, val);
                                }
                            }
                        }
                    } else {
                        int updateCount = pstmt.getUpdateCount();
                        rv += updateCount;

                        // This printed output is for debugging and/or demonstration
                        // purposes only.  It would not be necessary in production code.
                        System.out.printf("\n%s.%s:\n    '%s'\n", callerClass, callerMethod, pstmt);
                    }

                    connection.releaseSavepoint(sp);
                    connection.commit();
                    break;

                } catch (SQLException e) {

                    if (RETRY_SQL_STATE.equals(e.getSQLState())) {
                        System.out.printf("retryable exception occurred:\n    sql state = [%s]\n    message = [%s]\n    retry counter = %s\n",
                                          e.getSQLState(), e.getMessage(), retryCount);
                        connection.rollback(sp);
                        retryCount++;
                        rv = -1;
                    } else {
                        rv = -1;
                        throw e;
                    }
                }
            }
        } catch (SQLException e) {
            System.out.printf("BasicExampleDAO.runSQL ERROR: { state => %s, cause => %s, message => %s }\n",
                              e.getSQLState(), e.getCause(), e.getMessage());
            rv = -1;
        }

        return rv;
    }

    /**
     * Helper method called by 'testRetryHandling'.  It simply issues
     * a "SELECT 1" inside the transaction to force a retry.  This is
     * necessary to take the connection's session out of the AutoRetry
     * state, since otherwise the other statements in the session will
     * be retried automatically, and the client (us) will not see a
     * retry error. Note that this information is taken from the
     * following test:
     * https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/logictest/testdata/logic_test/manual_retry
     *
     * @param connection Connection
     */
    private void forceRetry(Connection connection) throws SQLException {
        try (PreparedStatement statement = connection.prepareStatement("SELECT 1")){
            statement.executeQuery();
        }
    }

    /**
     * Creates a fresh, empty accounts table in the database.
     */
    public void createAccounts() {
        runSQL("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))");
    }

    /**
     * Update accounts by passing in a Map of (ID, Balance) pairs.
     *
     * @param accounts (Map)
     * @return The number of updated accounts (int)
     */
    public int updateAccounts(Map<String, String> accounts) {
        int rows = 0;
        for (Map.Entry<String, String> account : accounts.entrySet()) {

            String k = account.getKey();
            String v = account.getValue();

            String[] args = {k, v};
            rows += runSQL("INSERT INTO accounts (id, balance) VALUES (?, ?)", args);
        }
        return rows;
    }

    /**
     * Transfer funds between one account and another.  Handles
     * transaction retries in case of conflict automatically on the
     * backend.
     * @param fromId (int)
     * @param toId (int)
     * @param amount (int)
     * @return The number of updated accounts (int)
     */
    public int transferFunds(int fromId, int toId, int amount) {
            String sFromId = Integer.toString(fromId);
            String sToId = Integer.toString(toId);
            String sAmount = Integer.toString(amount);

            // We have omitted explicit BEGIN/COMMIT statements for
            // brevity.  Individual statements are treated as implicit
            // transactions by CockroachDB (see
            // https://www.cockroachlabs.com/docs/stable/transactions.html#individual-statements).

            String sqlCode = "UPSERT INTO accounts (id, balance) VALUES" +
                "(?, ((SELECT balance FROM accounts WHERE id = ?) - ?))," +
                "(?, ((SELECT balance FROM accounts WHERE id = ?) + ?))";

            return runSQL(sqlCode, sFromId, sFromId, sAmount, sToId, sToId, sAmount);
    }

    /**
     * Get the account balance for one account.
     *
     * We skip using the retry logic in 'runSQL()' here for the
     * following reasons:
     *
     * 1. Since this is a single read ("SELECT"), we do not expect any
     *    transaction conflicts to handle
     *
     * 2. We need to return the balance as an integer
     *
     * @param id (int)
     * @return balance (int)
     */
    public int getAccountBalance(int id) {
        int balance = 0;

        try (Connection connection = ds.getConnection()) {

                // Check the current balance.
                ResultSet res = connection.createStatement()
                    .executeQuery("SELECT balance FROM accounts WHERE id = "
                                  + id);
                if(!res.next()) {
                    System.out.printf("No users in the table with id %i", id);
                } else {
                    balance = res.getInt("balance");
                }
        } catch (SQLException e) {
            System.out.printf("BasicExampleDAO.getAccountBalance ERROR: { state => %s, cause => %s, message => %s }\n",
                              e.getSQLState(), e.getCause(), e.getMessage());
        }

        return balance;
    }

    /**
     * Insert randomized account data (ID, balance) using the JDBC
     * fast path for bulk inserts.  The fastest way to get data into
     * CockroachDB is the IMPORT statement.  However, if you must bulk
     * ingest from the application using INSERT statements, the best
     * option is the method shown here. It will require the following:
     *
     * 1. Add `rewriteBatchedInserts=true` to your JDBC connection
     *    settings (see the connection info in 'BasicExample.main').
     *
     * 2. Inserting in batches of 128 rows, as used inside this method
     *    (see BATCH_SIZE), since the PGJDBC driver's logic works best
     *    with powers of two, such that a batch of size 128 can be 6x
     *    faster than a batch of size 250.
     * @return The number of new accounts inserted (int)
     */
    public int bulkInsertRandomAccountData() {

        Random random = new Random();
        int BATCH_SIZE = 128;
        int totalNewAccounts = 0;

        try (Connection connection = ds.getConnection()) {

            // We're managing the commit lifecycle ourselves so we can
            // control the size of our batch inserts.
            connection.setAutoCommit(false);

            // In this example we are adding 500 rows to the database,
            // but it could be any number.  What's important is that
            // the batch size is 128.
            try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
                for (int i=0; i<=(500/BATCH_SIZE);i++) {
                    for (int j=0; j<BATCH_SIZE; j++) {
                        int id = random.nextInt(1000000000);
                        int balance = random.nextInt(1000000000);
                        pstmt.setInt(1, id);
                        pstmt.setInt(2, balance);
                        pstmt.addBatch();
                    }
                    int[] count = pstmt.executeBatch();
                    totalNewAccounts += count.length;
                    System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n    '%s'\n", pstmt.toString());
                    System.out.printf("    => %s row(s) updated in this batch\n", count.length);
                }
                connection.commit();
            } catch (SQLException e) {
                System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
                                  e.getSQLState(), e.getCause(), e.getMessage());
            }
        } catch (SQLException e) {
            System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
                              e.getSQLState(), e.getCause(), e.getMessage());
        }
        return totalNewAccounts;
    }

    /**
     * Read out a subset of accounts from the data store.
     *
     * @param limit (int)
     * @return Number of accounts read (int)
     */
    public int readAccounts(int limit) {
        return runSQL("SELECT id, balance FROM accounts LIMIT ?", Integer.toString(limit));
    }

    /**
     * Perform any necessary cleanup of the data store so it can be
     * used again.
     */
    public void tearDown() {
        runSQL("DROP TABLE accounts;");
    }
}

Step 2. Create the maxroach user and bank database (insecure cluster)

Start the built-in SQL shell:

$ cockroach sql --insecure

In the SQL shell, issue the following statements to create the maxroach user and bank database:

> CREATE USER IF NOT EXISTS maxroach;
> CREATE DATABASE bank;

Give the maxroach user the necessary permissions:

> GRANT ALL ON DATABASE bank TO maxroach;

Exit the SQL shell:

> \q

Step 3. Run the Java code

The code below uses JDBC and the Data Access Object (DAO) pattern to map Java methods to SQL operations. It consists of two classes:

  1. BasicExample, which is where the application logic lives.
  2. BasicExampleDAO, which is used by the application to access the data store (in this case CockroachDB). This class has logic to handle transaction retries (see the BasicExampleDAO.runSQL() method).

It performs the following steps which roughly correspond to method calls in the BasicExample class.

  1. Create an accounts table in the bank database (BasicExampleDAO.createAccounts()).
  2. Insert account data using a Map that corresponds to the input to INSERT on the backend (BasicExampleDAO.updateAccounts(Map balance)).
  3. Transfer money from one account to another, printing out account balances before and after the transfer (BasicExampleDAO.transferFunds(int from, int to, int amount)).
  4. Insert random account data using JDBC's bulk insertion support (BasicExampleDAO.bulkInsertRandomAccountData()).
  5. Print out (some) account data (BasicExampleDAO.readAccounts(int limit)).
  6. Drop the accounts table and perform any other necessary cleanup (BasicExampleDAO.tearDown()). (Note that you can run this program as many times as you want due to this cleanup step.)

It does all of the above using the practices we recommend for using JDBC with CockroachDB, which are listed in the Recommended Practices section below.

To run it:

  1. Download the PostgreSQL JDBC driver.
  2. Compile and run the code (adding the PostgreSQL JDBC driver to your classpath):

    $ javac -classpath .:/path/to/postgresql.jar BasicExample.java
    
    $ java -classpath .:/path/to/postgresql.jar BasicExample
    

The contents of BasicExample.java:

import java.util.*;
import java.time.*;
import java.sql.*;
import javax.sql.DataSource;

import org.postgresql.ds.PGSimpleDataSource;

/*
  Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.

  Then, compile and run this example like so:

  $ export CLASSPATH=.:/path/to/postgresql.jar
  $ javac BasicExample.java && java BasicExample

  To build the javadoc:

  $ javadoc -package -cp .:./path/to/postgresql.jar BasicExample.java

  At a high level, this code consists of two classes:

  1. BasicExample, which is where the application logic lives.

  2. BasicExampleDAO, which is used by the application to access the
     data store.

*/

public class BasicExample {

    public static void main(String[] args) {

        // Configure the database connection.
        PGSimpleDataSource ds = new PGSimpleDataSource();
        ds.setServerName("localhost");
        ds.setPortNumber(26257);
        ds.setDatabaseName("bank");
        ds.setUser("maxroach");
        ds.setPassword(null);
        ds.setReWriteBatchedInserts(true); // add `rewriteBatchedInserts=true` to pg connection string
        ds.setApplicationName("BasicExample");

        // Create DAO.
        BasicExampleDAO dao = new BasicExampleDAO(ds);

        // Test our retry handling logic if FORCE_RETRY is true.  This
        // method is only used to test the retry logic.  It is not
        // necessary in production code.
        dao.testRetryHandling();

        // Set up the 'accounts' table.
        dao.createAccounts();

        // Insert a few accounts "by hand", using INSERTs on the backend.
        Map<String, String> balances = new HashMap<String, String>();
        balances.put("1", "1000");
        balances.put("2", "250");
        int updatedAccounts = dao.updateAccounts(balances);
        System.out.printf("BasicExampleDAO.updateAccounts:\n    => %s total updated accounts\n", updatedAccounts);

        // How much money is in these accounts?
        int balance1 = dao.getAccountBalance(1);
        int balance2 = dao.getAccountBalance(2);
        System.out.printf("main:\n    => Account balances at time '%s':\n    ID %s => $%s\n    ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);

        // Transfer $100 from account 1 to account 2
        int fromAccount = 1;
        int toAccount = 2;
        int transferAmount = 100;
        int transferredAccounts = dao.transferFunds(fromAccount, toAccount, transferAmount);
        if (transferredAccounts != -1) {
            System.out.printf("BasicExampleDAO.transferFunds:\n    => $%s transferred between accounts %s and %s, %s rows updated\n", transferAmount, fromAccount, toAccount, transferredAccounts);
        }

        balance1 = dao.getAccountBalance(1);
        balance2 = dao.getAccountBalance(2);
        System.out.printf("main:\n    => Account balances at time '%s':\n    ID %s => $%s\n    ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);

        // Bulk insertion example using JDBC's batching support.
        int totalRowsInserted = dao.bulkInsertRandomAccountData();
        System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n    => finished, %s total rows inserted\n", totalRowsInserted);

        // Print out 10 account values.
        int accountsRead = dao.readAccounts(10);

        // Drop the 'accounts' table so this code can be run again.
        dao.tearDown();
    }
}

/**
 * Data access object used by 'BasicExample'.  Abstraction over some
 * common CockroachDB operations, including:
 *
 * - Auto-handling transaction retries in the 'runSQL' method
 *
 * - Example of bulk inserts in the 'bulkInsertRandomAccountData'
 *   method
 */

class BasicExampleDAO {

    private static final int MAX_RETRY_COUNT = 3;
    private static final String SAVEPOINT_NAME = "cockroach_restart";
    private static final String RETRY_SQL_STATE = "40001";
    private static final boolean FORCE_RETRY = false;

    private final DataSource ds;

    BasicExampleDAO(DataSource ds) {
        this.ds = ds;
    }

    /**
       Used to test the retry logic in 'runSQL'.  It is not necessary
       in production code.
    */
    void testRetryHandling() {
        if (this.FORCE_RETRY) {
            runSQL("SELECT crdb_internal.force_retry('1s':::INTERVAL)");
        }
    }

    /**
     * Run SQL code in a way that automatically handles the
     * transaction retry logic so we do not have to duplicate it in
     * various places.
     *
     * @param sqlCode a String containing the SQL code you want to
     * execute.  Can have placeholders, e.g., "INSERT INTO accounts
     * (id, balance) VALUES (?, ?)".
     *
     * @param args String Varargs to fill in the SQL code's
     * placeholders.
     * @return Integer Number of rows updated, or -1 if an error is thrown.
     */
    public Integer runSQL(String sqlCode, String... args) {

        // This block is only used to emit class and method names in
        // the program output.  It is not necessary in production
        // code.
        StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
        StackTraceElement elem = stacktrace[2];
        String callerClass = elem.getClassName();
        String callerMethod = elem.getMethodName();

        int rv = 0;

        try (Connection connection = ds.getConnection()) {

            // We're managing the commit lifecycle ourselves so we can
            // automatically issue transaction retries.
            connection.setAutoCommit(false);

            int retryCount = 0;

            while (retryCount < MAX_RETRY_COUNT) {

                Savepoint sp = connection.setSavepoint(SAVEPOINT_NAME);

                // This block is only used to test the retry logic.
                // It is not necessary in production code.  See also
                // the method 'testRetryHandling()'.
                if (FORCE_RETRY) {
                    forceRetry(connection); // SELECT 1
                }

                try (PreparedStatement pstmt = connection.prepareStatement(sqlCode)) {

                    // Loop over the args and insert them into the
                    // prepared statement based on their types.  In
                    // this simple example we classify the argument
                    // types as "integers" and "everything else"
                    // (a.k.a. strings).
                    for (int i=0; i<args.length; i++) {
                        int place = i + 1;
                        String arg = args[i];

                        try {
                            int val = Integer.parseInt(arg);
                            pstmt.setInt(place, val);
                        } catch (NumberFormatException e) {
                            pstmt.setString(place, arg);
                        }
                    }

                    if (pstmt.execute()) {
                        // We know that `pstmt.getResultSet()` will
                        // not return `null` if `pstmt.execute()` was
                        // true
                        ResultSet rs = pstmt.getResultSet();
                        ResultSetMetaData rsmeta = rs.getMetaData();
                        int colCount = rsmeta.getColumnCount();

                        // This printed output is for debugging and/or demonstration
                        // purposes only.  It would not be necessary in production code.
                        System.out.printf("\n%s.%s:\n    '%s'\n", callerClass, callerMethod, pstmt);

                        while (rs.next()) {
                            for (int i=1; i <= colCount; i++) {
                                String name = rsmeta.getColumnName(i);
                                String type = rsmeta.getColumnTypeName(i);

                                // In this "bank account" example we know we are only handling
                                // integer values (technically 64-bit INT8s, the CockroachDB
                                // default).  This code could be made into a switch statement
                                // to handle the various SQL types needed by the application.
                                if ("int8".equals(type)) {
                                    int val = rs.getInt(name);

                                    // This printed output is for debugging and/or demonstration
                                    // purposes only.  It would not be necessary in production code.
                                    System.out.printf("    %-8s => %10s\n", name, val);
                                }
                            }
                        }
                    } else {
                        int updateCount = pstmt.getUpdateCount();
                        rv += updateCount;

                        // This printed output is for debugging and/or demonstration
                        // purposes only.  It would not be necessary in production code.
                        System.out.printf("\n%s.%s:\n    '%s'\n", callerClass, callerMethod, pstmt);
                    }

                    connection.releaseSavepoint(sp);
                    connection.commit();
                    break;

                } catch (SQLException e) {

                    if (RETRY_SQL_STATE.equals(e.getSQLState())) {
                        System.out.printf("retryable exception occurred:\n    sql state = [%s]\n    message = [%s]\n    retry counter = %s\n",
                                          e.getSQLState(), e.getMessage(), retryCount);
                        connection.rollback(sp);
                        retryCount++;
                        rv = -1;
                    } else {
                        rv = -1;
                        throw e;
                    }
                }
            }
        } catch (SQLException e) {
            System.out.printf("BasicExampleDAO.runSQL ERROR: { state => %s, cause => %s, message => %s }\n",
                              e.getSQLState(), e.getCause(), e.getMessage());
            rv = -1;
        }

        return rv;
    }

    /**
     * Helper method called by 'testRetryHandling'.  It simply issues
     * a "SELECT 1" inside the transaction to force a retry.  This is
     * necessary to take the connection's session out of the AutoRetry
     * state, since otherwise the other statements in the session will
     * be retried automatically, and the client (us) will not see a
     * retry error. Note that this information is taken from the
     * following test:
     * https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/logictest/testdata/logic_test/manual_retry
     *
     * @param connection Connection
     */
    private void forceRetry(Connection connection) throws SQLException {
        try (PreparedStatement statement = connection.prepareStatement("SELECT 1")){
            statement.executeQuery();
        }
    }

    /**
     * Creates a fresh, empty accounts table in the database.
     */
    public void createAccounts() {
        runSQL("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))");
    }

    /**
     * Update accounts by passing in a Map of (ID, Balance) pairs.
     *
     * @param accounts (Map)
     * @return The number of updated accounts (int)
     */
    public int updateAccounts(Map<String, String> accounts) {
        int rows = 0;
        for (Map.Entry<String, String> account : accounts.entrySet()) {

            String k = account.getKey();
            String v = account.getValue();

            String[] args = {k, v};
            rows += runSQL("INSERT INTO accounts (id, balance) VALUES (?, ?)", args);
        }
        return rows;
    }

    /**
     * Transfer funds between one account and another.  Handles
     * transaction retries in case of conflict automatically on the
     * backend.
     * @param fromId (int)
     * @param toId (int)
     * @param amount (int)
     * @return The number of updated accounts (int)
     */
    public int transferFunds(int fromId, int toId, int amount) {
            String sFromId = Integer.toString(fromId);
            String sToId = Integer.toString(toId);
            String sAmount = Integer.toString(amount);

            // We have omitted explicit BEGIN/COMMIT statements for
            // brevity.  Individual statements are treated as implicit
            // transactions by CockroachDB (see
            // https://www.cockroachlabs.com/docs/stable/transactions.html#individual-statements).

            String sqlCode = "UPSERT INTO accounts (id, balance) VALUES" +
                "(?, ((SELECT balance FROM accounts WHERE id = ?) - ?))," +
                "(?, ((SELECT balance FROM accounts WHERE id = ?) + ?))";

            return runSQL(sqlCode, sFromId, sFromId, sAmount, sToId, sToId, sAmount);
    }

    /**
     * Get the account balance for one account.
     *
     * We skip using the retry logic in 'runSQL()' here for the
     * following reasons:
     *
     * 1. Since this is a single read ("SELECT"), we do not expect any
     *    transaction conflicts to handle
     *
     * 2. We need to return the balance as an integer
     *
     * @param id (int)
     * @return balance (int)
     */
    public int getAccountBalance(int id) {
        int balance = 0;

        try (Connection connection = ds.getConnection()) {

                // Check the current balance.
                ResultSet res = connection.createStatement()
                    .executeQuery("SELECT balance FROM accounts WHERE id = "
                                  + id);
                if(!res.next()) {
                    System.out.printf("No users in the table with id %i", id);
                } else {
                    balance = res.getInt("balance");
                }
        } catch (SQLException e) {
            System.out.printf("BasicExampleDAO.getAccountBalance ERROR: { state => %s, cause => %s, message => %s }\n",
                              e.getSQLState(), e.getCause(), e.getMessage());
        }

        return balance;
    }

    /**
     * Insert randomized account data (ID, balance) using the JDBC
     * fast path for bulk inserts.  The fastest way to get data into
     * CockroachDB is the IMPORT statement.  However, if you must bulk
     * ingest from the application using INSERT statements, the best
     * option is the method shown here. It will require the following:
     *
     * 1. Add `rewriteBatchedInserts=true` to your JDBC connection
     *    settings (see the connection info in 'BasicExample.main').
     *
     * 2. Inserting in batches of 128 rows, as used inside this method
     *    (see BATCH_SIZE), since the PGJDBC driver's logic works best
     *    with powers of two, such that a batch of size 128 can be 6x
     *    faster than a batch of size 250.
     * @return The number of new accounts inserted (int)
     */
    public int bulkInsertRandomAccountData() {

        Random random = new Random();
        int BATCH_SIZE = 128;
        int totalNewAccounts = 0;

        try (Connection connection = ds.getConnection()) {

            // We're managing the commit lifecycle ourselves so we can
            // control the size of our batch inserts.
            connection.setAutoCommit(false);

            // In this example we are adding 500 rows to the database,
            // but it could be any number.  What's important is that
            // the batch size is 128.
            try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
                for (int i=0; i<=(500/BATCH_SIZE);i++) {
                    for (int j=0; j<BATCH_SIZE; j++) {
                        int id = random.nextInt(1000000000);
                        int balance = random.nextInt(1000000000);
                        pstmt.setInt(1, id);
                        pstmt.setInt(2, balance);
                        pstmt.addBatch();
                    }
                    int[] count = pstmt.executeBatch();
                    totalNewAccounts += count.length;
                    System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n    '%s'\n", pstmt.toString());
                    System.out.printf("    => %s row(s) updated in this batch\n", count.length);
                }
                connection.commit();
            } catch (SQLException e) {
                System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
                                  e.getSQLState(), e.getCause(), e.getMessage());
            }
        } catch (SQLException e) {
            System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n",
                              e.getSQLState(), e.getCause(), e.getMessage());
        }
        return totalNewAccounts;
    }

    /**
     * Read out a subset of accounts from the data store.
     *
     * @param limit (int)
     * @return Number of accounts read (int)
     */
    public int readAccounts(int limit) {
        return runSQL("SELECT id, balance FROM accounts LIMIT ?", Integer.toString(limit));
    }

    /**
     * Perform any necessary cleanup of the data store so it can be
     * used again.
     */
    public void tearDown() {
        runSQL("DROP TABLE accounts;");
    }
}

The output will look like the following:

BasicExampleDAO.createAccounts:
    'CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))'

BasicExampleDAO.updateAccounts:
    'INSERT INTO accounts (id, balance) VALUES (1, 1000)'

BasicExampleDAO.updateAccounts:
    'INSERT INTO accounts (id, balance) VALUES (2, 250)'
BasicExampleDAO.updateAccounts:
    => 2 total updated accounts
main:
    => Account balances at time '11:54:06.904':
    ID 1 => $1000
    ID 2 => $250

BasicExampleDAO.transferFunds:
    'UPSERT INTO accounts (id, balance) VALUES(1, ((SELECT balance FROM accounts WHERE id = 1) - 100)),(2, ((SELECT balance FROM accounts WHERE id = 2) + 100))'
BasicExampleDAO.transferFunds:
    => $100 transferred between accounts 1 and 2, 2 rows updated
main:
    => Account balances at time '11:54:06.985':
    ID 1 => $900
    ID 2 => $350

BasicExampleDAO.bulkInsertRandomAccountData:
    'INSERT INTO accounts (id, balance) VALUES (354685257, 158423397)'
    => 128 row(s) updated in this batch

BasicExampleDAO.bulkInsertRandomAccountData:
    'INSERT INTO accounts (id, balance) VALUES (206179866, 950590234)'
    => 128 row(s) updated in this batch

BasicExampleDAO.bulkInsertRandomAccountData:
    'INSERT INTO accounts (id, balance) VALUES (708995411, 892928833)'
    => 128 row(s) updated in this batch

BasicExampleDAO.bulkInsertRandomAccountData:
    'INSERT INTO accounts (id, balance) VALUES (500817884, 189050420)'
    => 128 row(s) updated in this batch

BasicExampleDAO.bulkInsertRandomAccountData:
    => finished, 512 total rows inserted

BasicExampleDAO.readAccounts:
    'SELECT id, balance FROM accounts LIMIT 10'
    id       =>          1
    balance  =>        900
    id       =>          2
    balance  =>        350
    id       =>     190756
    balance  =>  966414958
    id       =>    1002343
    balance  =>  243354081
    id       =>    1159751
    balance  =>   59745201
    id       =>    2193125
    balance  =>  346719279
    id       =>    2659707
    balance  =>  770266587
    id       =>    6819325
    balance  =>  511618834
    id       =>    9985390
    balance  =>  905049643
    id       =>   12256472
    balance  =>  913034434

BasicExampleDAO.tearDown:
    'DROP TABLE accounts'

Recommended Practices

Use IMPORT to read in large data sets

If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the IMPORT statement instead. It is much faster and more efficient than making a series of INSERTs and UPDATEs. It bypasses the SQL layer altogether and writes directly to the storage layer of the database.

For more information about importing data from Postgres, see Migrate from Postgres.

For more information about importing data from MySQL, see Migrate from MySQL.
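
If the data is already available as CSV files, you can issue the IMPORT statement from your Java code over JDBC. The sketch below is illustrative only: it reuses a DataSource configured as in the tutorial code, the IMPORT INTO form and CSV URL are placeholders, and the file must be reachable by the CockroachDB nodes themselves (for example, cloud storage or nodelocal), not just by the client. Check the IMPORT documentation for the exact syntax and storage options supported by your cluster version.

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class ImportExample {
    // Run a bulk IMPORT over JDBC instead of issuing row-by-row INSERTs.
    static void importAccounts(DataSource ds) throws Exception {
        try (Connection connection = ds.getConnection();
             Statement statement = connection.createStatement()) {
            statement.execute("IMPORT INTO accounts (id, balance) "
                            + "CSV DATA ('https://example.com/accounts.csv')");
        }
    }
}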

Use rewriteBatchedInserts for increased speed

We strongly recommend setting rewriteBatchedInserts=true; we have seen 2-3x performance improvements with it enabled. From the JDBC connection parameters documentation:

This will change batch inserts from insert into foo (col1, col2, col3) values (1,2,3) into insert into foo (col1, col2, col3) values (1,2,3), (4,5,6) this provides 2-3x performance improvement
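
For reference, here is a minimal sketch of two equivalent ways to enable this setting with PGJDBC: through the data source API, as the tutorial code does, or as a parameter in the JDBC connection URL.

import org.postgresql.ds.PGSimpleDataSource;

public class BatchRewriteConfig {
    public static void main(String[] args) {
        // Option 1: enable the rewrite through the data source API.
        PGSimpleDataSource ds = new PGSimpleDataSource();
        ds.setReWriteBatchedInserts(true);

        // Option 2: enable it as a connection URL parameter.
        String url = "jdbc:postgresql://localhost:26257/bank?rewriteBatchedInserts=true";
        System.out.println(url);
    }
}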

Use a batch size of 128

PGJDBC's batching support only works with powers of two, and will split batches of other sizes up into multiple sub-batches. This means that a batch of size 128 can be 6x faster than a batch of size 250.

The code snippet below shows a pattern for using a batch size of 128, and is taken from the longer example above (specifically, the BasicExampleDAO.bulkInsertRandomAccountData() method).

Specifically, it does the following:

  1. Turn off auto-commit so you can manage the transaction lifecycle and thus the size of the batch inserts.
  2. Given an overall update size of 500 rows (for example), split it into batches of size 128 and execute each batch in turn.
  3. Finally, commit the batches of statements you've just executed.
int BATCH_SIZE = 128;
connection.setAutoCommit(false);

try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
    for (int i=0; i<=(500/BATCH_SIZE);i++) {
        for (int j=0; j<BATCH_SIZE; j++) {
            int id = random.nextInt(1000000000);
            int balance = random.nextInt(1000000000);
            pstmt.setInt(1, id);
            pstmt.setInt(2, balance);
            pstmt.addBatch();
        }
        int[] count = pstmt.executeBatch();
        System.out.printf("    => %s row(s) updated in this batch\n", count.length); // Verifying 128 rows in the batch
    }
    connection.commit();
}

Retrieve large data sets in chunks using cursors

CockroachDB now supports Postgres wire-protocol cursors for implicit transactions and for explicit transactions that are executed to completion. This means the PGJDBC driver can use this protocol to stream the results of queries with large result sets, which is much faster than paginating through results in SQL using LIMIT ... OFFSET.

For instructions showing how to use cursors in your Java code, see Getting results based on a cursor from the PGJDBC documentation.

Note that interleaved execution (partial execution of multiple statements within the same connection and transaction) is not supported when Statement.setFetchSize() is used.
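
As a rough illustration, the sketch below streams rows from the accounts table in chunks, assuming a DataSource configured as in the tutorial code. Per the PGJDBC documentation, cursor-based fetching requires autocommit to be off and a forward-only statement (the default); the fetch size of 100 is an arbitrary example value.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class CursorExample {
    // Stream a large result set in chunks instead of loading it all at once.
    static void streamAccounts(DataSource ds) throws Exception {
        try (Connection connection = ds.getConnection()) {
            connection.setAutoCommit(false); // required for cursor-based fetching
            try (Statement statement = connection.createStatement()) {
                statement.setFetchSize(100); // fetch 100 rows per round trip
                try (ResultSet rs = statement.executeQuery("SELECT id, balance FROM accounts")) {
                    while (rs.next()) {
                        System.out.printf("id %d => balance %d%n",
                                          rs.getInt("id"), rs.getInt("balance"));
                    }
                }
            }
        }
    }
}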

What's next?

Read more about using the Java JDBC driver.

You might also be interested in the following pages:

