Advanced Changefeed Configuration


The configurations and settings explained on this page will significantly impact a changefeed's behavior.

The following sections describe settings, configurations, and details to tune changefeeds for these use cases:

Some options for the kafka_sink_config and webhook_sink_config parameters are discussed on this page. However, for more information on specific tuning for Kafka and Webhook sinks, refer to the following pages:

Tuning for high durability delivery

When designing a system that relies on high durability message delivery—that is, not missing any message acknowledgement at the downstream sink—consider the following settings and configuration in this section:

Before tuning these settings, we recommend reading the details of our changefeed at-least-once delivery guarantee.

Pausing changefeeds and garbage collection

By default, protected timestamps will protect changefeed data from garbage collection up to the time of the checkpoint. If the downstream changefeed sink is unavailable, protected timestamps will continue to protect changefeed data from garbage collection until you either cancel the changefeed or the sink becomes available again.

However, if the changefeed lags too far behind, the protected changes could lead to an accumulation of garbage. This could result in increased disk usage and degraded performance for some workloads.

For more detail on changefeeds and protected timestamps, refer to Garbage collection and changefeeds.

Create changefeeds with the following options so that your changefeed protects data when it is paused:
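As a sketch, a changefeed created with the `protect_data_from_gc_on_pause` and `on_error` options keeps its data protected while paused (the table name and sink URI here are placeholders):

```sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker.example.com:9092'
  WITH
    protect_data_from_gc_on_pause, -- keep protected timestamps active while the changefeed is paused
    on_error = 'pause';            -- pause, rather than fail, on errors so protection is preserved
```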

Defining Kafka message acknowledgment

The kafka_sink_config option allows you to define what counts as a successful write to Kafka. Its 'RequiredAcks' field specifies how many broker acknowledgments are required before a write is considered successful. CockroachDB guarantees at-least-once delivery of messages—the 'RequiredAcks' value defines the delivery.

For high durability delivery, Cockroach Labs recommends setting:

kafka_sink_config='{"RequiredAcks": "ALL"}'

ALL provides the highest consistency level. A quorum of Kafka brokers that have committed the message must be reached before the leader can acknowledge the write.


You must also set acks to ALL in your server-side Kafka configuration for this to provide high durability delivery.
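Putting this together, a high-durability Kafka changefeed might look like the following (the table name and broker address are placeholders):

```sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker.example.com:9092'
  WITH kafka_sink_config = '{"RequiredAcks": "ALL"}'; -- require acknowledgment from all in-sync replicas
```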

Choosing changefeed sinks

Use Kafka or cloud storage sinks when tuning for high durability delivery in changefeeds. Both offer built-in advanced protocols. The webhook sink, while flexible, requires an understanding of how the receiving system acknowledges and commits messages in order to ensure durable message delivery.

Defining schema change behavior

Ensure that data is ingested downstream in its new format after a schema change by using the schema_change_events and schema_change_policy options. For example, setting schema_change_events=column_changes and schema_change_policy=stop will cause the changefeed to fail on a schema change and log an error to the cockroach.log file.
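For example, the combination described above could be set as follows (the table name and sink URI are placeholders):

```sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker.example.com:9092'
  WITH
    schema_change_events = 'column_changes', -- react to column-level schema changes
    schema_change_policy = 'stop';           -- fail the changefeed, logging an error, on such a change
```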

Tuning for high throughput

When designing a system that needs to emit a lot of changefeed messages, whether it be steady traffic or a burst in traffic, consider the following settings and configuration in this section:

Setting the resolved option

When a changefeed emits a resolved message, it force-flushes all outstanding buffered messages, which diminishes your changefeed's throughput while the flush completes. Therefore, if you are aiming for higher throughput, we suggest setting a longer duration (e.g., 10 minutes), or not using the resolved option.

If you are setting the resolved option when you are aiming for high throughput, you must also consider the min_checkpoint_frequency option, which defaults to 30s. This option controls how often nodes flush their progress to the coordinating changefeed node. As a result, resolved messages will not be emitted more frequently than the configured min_checkpoint_frequency. Set this option to at least as long as your resolved option duration.
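A sketch of a throughput-oriented changefeed following this guidance, with the resolved interval and min_checkpoint_frequency aligned (table name and sink URI are placeholders):

```sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker.example.com:9092'
  WITH
    resolved = '10m',                 -- emit resolved timestamps at most every 10 minutes
    min_checkpoint_frequency = '10m'; -- flush node progress no more often than the resolved interval
```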

Batching and buffering messages

  • Batch messages to your sink to reduce the number of flushes the changefeed performs.
  • Set the changefeed.memory.per_changefeed_limit cluster setting to a higher limit to give more memory for buffering changefeed data. This setting influences how often the changefeed will flush buffered messages, and is useful during heavy traffic.
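For example, the buffer limit can be raised with a cluster setting statement like the following (the 1GiB value is illustrative, not a recommendation for every workload):

```sql
-- Allow each changefeed to buffer more data before flushing (default is 512MiB):
SET CLUSTER SETTING changefeed.memory.per_changefeed_limit = '1GiB';
```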

Configuring file and message format

  • Use avro as the emitted message format option with Kafka sinks; JSON encoding can potentially create a slowdown.


  • Use the compression option when you create a changefeed emitting data files to a cloud storage sink. For larger files, set compression to the zstd format.
  • Use the snappy compression format to emit messages to a Kafka sink. If you're intending to do large batching for Kafka, use the lz4 compression format.
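As a sketch, a Kafka changefeed combining these format and compression suggestions might look like this (the table name, broker address, and schema registry URL are placeholders; avro requires a schema registry):

```sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker.example.com:9092'
  WITH
    format = 'avro',                                                -- avro encoding avoids JSON slowdown
    confluent_schema_registry = 'http://registry.example.com:8081', -- required for avro
    kafka_sink_config = '{"Compression": "SNAPPY"}';                -- snappy compression for Kafka messages
```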

File size

To configure changefeeds emitting to cloud storage sinks for high throughput, you should consider:

  • Increasing the file_size parameter to control the size of the files that the changefeed sends to the sink. The default is 16MB. To configure for high throughput, we recommend between 32MB and 128MB. Note that this is not a hard limit, and a changefeed will flush the file when it reaches the specified size.
  • When you compress a file, it will contain many more events than an uncompressed file of the same size.
  • File size is also dependent on what kind of data the changefeed job is writing. For example, large JSON blobs will quickly fill up the file_size value compared to small rows.
  • When you change or increase file_size, ensure that you adjust the changefeed.memory.per_changefeed_limit cluster setting, which has a default of 512MiB. Buffering messages can quickly reach this limit if you have increased the file size.
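For example, a cloud storage changefeed tuned along these lines might look like the following sketch (the table name and bucket URI are placeholders, and the buffer value is illustrative):

```sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'gs://bucket-name/changefeed-output?AUTH=implicit'
  WITH
    file_size = '32MB',   -- larger files for high throughput (a soft limit, not a hard one)
    compression = 'zstd'; -- zstd suits larger files

-- Raise the buffer limit to match the larger file size (default is 512MiB):
SET CLUSTER SETTING changefeed.memory.per_changefeed_limit = '1GiB';
```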

Configuring for tables with many ranges

If you have a table with 10,000 or more ranges, you should consider increasing the following two cluster settings. We strongly recommend increasing these settings slowly. That is, increase the setting and then monitor its impact before adjusting further:

  • kv.rangefeed.catchup_scan_concurrency: The number of catchups a rangefeed can execute concurrently. The default is 8.
  • kv.rangefeed.concurrent_catchup_iterators: The number of rangefeed catchup iterators a store will allow concurrently before queuing. The default is 16.
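For example, a first incremental step might double each setting from its default (the values here are illustrative starting points, not recommendations):

```sql
-- Increase gradually and monitor the impact before adjusting further:
SET CLUSTER SETTING kv.rangefeed.catchup_scan_concurrency = 16;     -- default is 8
SET CLUSTER SETTING kv.rangefeed.concurrent_catchup_iterators = 32; -- default is 16
```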

Adjusting concurrent changefeed work

See also
