Connection Pooling

Warning:
CockroachDB v21.1 is no longer supported. For more details, see the Release Support Policy.

This page has information on planning, configuring, and using connection pools when using drivers or frameworks with CockroachDB.

About connection pools

A typical database operation consists of several steps.

  1. The driver uses a configuration to start a connection to the database server.
  2. A network socket is opened on the client that connects to the database server.
  3. Data is read or written through the network socket.
  4. The connection is closed down.
  5. The network socket is closed down and its resources are freed.

For simple database operations, these steps are not expensive, but as an application scales up, the performance of the application will suffer as each connection is created and destroyed. One pattern for improving performance is a connection pool, a group of already configured and established network connections between the client and database that can be reused for data operations within an application.

Each time an application reads or writes data, it will request one of the connections in the pool. After the data operation completes, the connection is returned to the pool so other operations can use it.
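
For example, with a pooled javax.sql.DataSource implementation such as HikariCP (used later on this page), borrowing a connection is a getConnection() call, and closing the connection returns it to the pool instead of tearing down the network socket. The following is a minimal sketch; the helper method, table name, and query are illustrative:

static int countAccounts(DataSource ds) throws SQLException {
    // Borrow a connection from the pool; close() returns it to the pool
    // instead of closing the underlying network socket.
    try (Connection conn = ds.getConnection();
         PreparedStatement stmt = conn.prepareStatement("SELECT count(*) FROM accounts");
         ResultSet rs = stmt.executeQuery()) {
        rs.next();
        return rs.getInt(1);
    }
}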

Connection pooling can be enabled as a feature of the driver, provided by a separate library used in conjunction with a driver, built into an application server, or handled by a proxy server that acts as a gateway to the database server.

Sizing connection pools

Idle connections in CockroachDB consume fewer resources than they do in PostgreSQL. Cockroach Labs estimates the memory overhead of an idle connection in CockroachDB at 20 kB to 30 kB.

Creating an appropriately sized pool of connections is critical to getting maximum performance from an application. Too few connections in the pool will result in high latency as each operation waits for a connection to become available. But adding too many connections to the pool can also result in high latency, as each connection thread is run in parallel by the system. The time it takes for many threads to complete in parallel is typically higher than the time it takes a smaller number of threads to run sequentially.

Each processor core can only execute one thread at a time. When there are more threads than processor cores, the system will use context switching to time-slice the thread execution. For example, if you have a system with a single core and two threads, processing threads 1 and 2 in parallel results in the system context switching to pause execution of thread 1 and begin executing thread 2, and then pause execution of thread 2 to resume executing thread 1. Executing thread 1 completely and then executing thread 2 will be faster because the system doesn't need to context switch, even though thread 2 had to wait until thread 1 fully completed to begin executing.

Storage and network performance will also affect a thread's ability to run to completion. If threads are frequently blocked by network or storage latency, adding connections to the pool lets other threads execute while the blocked threads wait.

Cockroach Labs performed lab testing of various customer workloads and found no improvement in scalability beyond:

connections = (number of cores * 4)

Many workloads perform best when the maximum number of active connections is between 2 and 4 times the number of CPU cores in the cluster.

If you have a large number of services connecting to the same cluster, make sure the number of concurrent active connections across all the services does not exceed this recommendation by a large amount. If each service has its own connection pool, then you should make sure the sum of all the pool sizes is close to our maximum connections recommendation. Each workload and application is different, so you should conduct testing to determine the best-performing pool sizes for each service in your architecture.
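
For example, if three services share a cluster with 12 cores in total, cap the combined total at roughly 12 * 4 = 48 concurrent active connections and start each service's pool at about 48 / 3 = 16 connections, then adjust each pool based on that testing.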

In addition to setting a maximum connection pool size, set the maximum number of idle connections if possible. Cockroach Labs recommends setting the maximum number of idle connections to the maximum pool size. While this uses more memory, it lets the pool serve bursts of concurrent operations without creating a new connection for each one.
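
In HikariCP, for example, this corresponds to setting maximumPoolSize and minimumIdle to the same value. The following is a minimal sketch; the pool size of 40 matches the 10-core example later on this page and is illustrative:

HikariConfig config = new HikariConfig();
// Cap the pool at the recommended maximum (number of cores * 4).
config.setMaximumPoolSize(40);
// Keep as many idle connections as the pool allows, so bursts of concurrent
// operations do not have to wait for new connections to be created.
config.setMinimumIdle(40);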

Validating connections in a pool

After a connection pool initializes connections to a CockroachDB cluster, those connections can occasionally break. This can happen because of changes in the cluster topology, rolling upgrades and restarts, or network disruptions. For example, CockroachDB Cloud clusters are periodically restarted for minor version updates, so connections established before a restart are invalid afterward.

Validating connections is typically handled automatically by the connection pool. For example, in HikariCP the connection is validated whenever you request a connection from the pool, and the keepaliveTime property allows you to configure an interval to periodically check if the connections in the pool are valid. Whatever connection pool you use, make sure connection validation is enabled when running your application.
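
In HikariCP, for example, both the keep-alive interval and the validation timeout are set on the pool configuration. The following is a minimal sketch; the values are illustrative, not recommendations:

HikariConfig config = new HikariConfig();
// Periodically re-validate idle connections so that connections broken by
// restarts or network disruptions are replaced before they are handed out.
config.setKeepaliveTime(300000);    // milliseconds
// Fail a validation check that does not complete within 5 seconds.
config.setValidationTimeout(5000);  // milliseconds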

Example

In this example, a Java application similar to the basic JDBC example uses the PostgreSQL JDBC driver and HikariCP as the connection pool layer to connect to a CockroachDB cluster. The cluster has a total of 10 cores across its nodes.

Using the connection pool formula above:

connections = (10 [processor cores] * 4)

The connection pool size should be 40.

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost:26257/bank");
config.setUsername("maxroach");
config.setPassword("password");
config.addDataSourceProperty("ssl", "true");
config.addDataSourceProperty("sslmode", "require");
config.addDataSourceProperty("reWriteBatchedInserts", "true");
config.setAutoCommit(false);
// Maximum pool size from the formula above: 10 cores * 4 = 40 connections.
config.setMaximumPoolSize(40);
// Periodically validate idle connections (value in milliseconds).
config.setKeepaliveTime(150000);

HikariDataSource ds = new HikariDataSource(config);

Connection conn = ds.getConnection();

In this example, a Go application similar to the basic pgx example uses the pgxpool library to create a connection pool on a CockroachDB cluster. The cluster has a total of 10 cores across its nodes.

Using the connection pool formula above:

connections = (10 [processor cores] * 4)

The connection pool size should be 40.

// Set connection pool configuration, with maximum connection pool size.
config, err := pgxpool.ParseConfig("postgres://max:roach@127.0.0.1:26257/bank?sslmode=require&pool_max_conns=40")
if err != nil {
    log.Fatal("error configuring the database: ", err)
}

// Create a connection pool to the "bank" database.
dbpool, err := pgxpool.ConnectConfig(context.Background(), config)
if err != nil {
    log.Fatal("error connecting to the database: ", err)
}
defer dbpool.Close()

This example uses the pool_max_conns parameter in the connection string to set the maximum number of connections in the connection pool to 40.

For a full list of connection pool configuration parameters for pgxpool, see the pgxpool documentation.

