No time to live: James Bond explains row-level time to live

Do you know what time it is? Time to die.

-Nomi

That’s it. That’s row-level TTL in a James Bond nutshell. Which is appropriate given that today is James Bond Day.

You probably don’t need to read further than that one quote from Nomi (possibly the next 007?). Because you get it now. But row-level TTL has been one of our most requested features, dating back to 2017. It’s rare for issues on our GitHub to get this many votes. So I hope you’ll forgive me for indulging in this metaphor a bit longer.

GitHub issue for row-level time to live

Developers have been asking for this feature with such gusto for so long because they’re tired of writing custom app code for deletion logic. And now they don’t have to.

The past isn’t dead, until you say it is

Row-level TTL is a mechanism in which rows from a table are considered “expired” and are automatically deleted once they have been stored longer than a specified expiration time. As of CockroachDB 22.1, row-level TTL is available in preview.

The TL;DR for row-level TTL is that it makes your life easier by automating the process of getting rid of expired data. For example, you can set shopping cart items to expire after a week, and with row-level TTL the database will clean them up for you automatically rather than requiring you to write a script to delete them. Or you might need to comply with certain regulations that say something to the effect of “data can’t be held for people for more than x years.” Now you can use a TTL rule to make sure the data gets deleted on time.
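To make that concrete, here’s a minimal sketch of the shopping cart case. The shopping_cart_items table and its columns are made up for illustration, and ttl_expire_after is the storage parameter the preview feature uses to declare the expiration window, so check the docs for your CockroachDB version before copying this:

    -- Hypothetical table: cart items expire a week after they're last written.
    CREATE TABLE shopping_cart_items (
      cart_id UUID,
      item_id UUID,
      added_at TIMESTAMPTZ NOT NULL DEFAULT now(),
      PRIMARY KEY (cart_id, item_id)
    ) WITH (ttl_expire_after = '7 days');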

History isn’t kind to men who play God, but sometimes you should

Without row-level TTL, the DIY process for expiring data typically looks something like this:

  1. You create an index on the column containing the timestamp which you’ll use to expire the rows.
  2. You periodically run (via cron, perhaps) a script which executes DELETE FROM table_t1 WHERE now() - timestamp_column > interval '30 days' LIMIT 1000;

That LIMIT is there to keep any single run from becoming too onerous. You keep running the script until it fails to delete rows, then stop and run it again the next time you need to; or, depending on your data volumes, perhaps you just keep running it continuously. The “1000” is empirically determined; it could be “10000” or something else.
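In SQL, that DIY recipe might look roughly like the sketch below. The table_t1 and timestamp_column names come from the example above; note the WHERE clause is written as a direct comparison on timestamp_column so the index can actually be used, and CockroachDB accepts LIMIT on DELETE:

    -- Step 1: index the timestamp column you'll use to expire rows.
    CREATE INDEX ON table_t1 (timestamp_column);

    -- Step 2: run this from cron until it deletes zero rows.
    DELETE FROM table_t1
    WHERE timestamp_column < now() - INTERVAL '30 days'
    LIMIT 1000;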

It’s not an insurmountably complex process, but it’s certainly one we’re thrilled to automate, given the time it saves developers and the compliance problems it helps you avoid. One sneaky bit of complexity is the need to balance the timeliness of the deletions against the potentially negative performance impact of those deletions on foreground traffic from your application. These “row expiration” tasks aren’t time-critical, so it’s better if they can be done at a lower priority, to minimize impact on the usual flow of transactions.

Use cases for row-level TTL include:

  • Delete inactive data to manage data size and performance: For example, you may want to delete order records from an online store after 90 days.
  • Delete data no longer needed for compliance: For example, a banking application may need to keep some subset of data for a period of time due to financial regulations. Row-level TTL can be used to remove data older than that period on a rolling, continuous basis.
  • Outbox pattern: When events are written to an outbox table and published to an external system like Kafka, those events must be deleted to prevent unbounded growth in the size of the outbox table (see the sketch after this list).
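As a sketch of that last use case, an outbox table can be declared with a short TTL so published events age out on their own. The outbox table and its columns are hypothetical, and the one-day window is just an example:

    -- Hypothetical outbox table: events are published to Kafka and then
    -- expire a day after they're written, keeping the table bounded.
    CREATE TABLE outbox (
      id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
      topic STRING NOT NULL,
      payload JSONB NOT NULL,
      created_at TIMESTAMPTZ NOT NULL DEFAULT now()
    ) WITH (ttl_expire_after = '1 day');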

How row-level TTL works

At a high level, row-level TTL works by:

  • Issuing a selection query at a historical timestamp, yielding the set of rows that are eligible for deletion (also known as “expired”).
  • Issuing batched DELETE statements for the expired rows.
  • Deciding, as part of that process, how many rows to SELECT and DELETE at once in each query.
  • Running these SQL queries in parallel as background jobs.
  • Rate limiting the background deletion queries and submitting them at a lower priority through the admission control system, to minimize the performance impact on foreground application queries.

The process above is conceptually similar to the process described in Batch delete on an indexed column. The difference is that row-level TTL is built into CockroachDB, so it saves you from having to write code to manage the process from your application and/or an external job processing framework, including tuning the rate and performance of your background queries so they don’t affect foreground application query performance.
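The knobs for that tuning are exposed as table storage parameters. Here’s a rough sketch; the events table is hypothetical and would already need ttl_expire_after set, and the parameter names below are the row-level TTL settings documented for the preview, so verify them against your version’s docs:

    -- Hypothetical tuning of the background TTL job on a table with TTL enabled.
    ALTER TABLE events SET (
      ttl_select_batch_size = 500,   -- rows fetched per SELECT
      ttl_delete_batch_size = 100,   -- rows removed per DELETE
      ttl_delete_rate_limit = 1000   -- cap on rows deleted per second
    );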

When are rows deleted?

Once rows are expired (that is, are older than the specified TTL interval), they are eligible to be deleted. However, eligible rows may not be deleted right away. Instead, they are scheduled for deletion using a background job that is run at the interval defined by the ttl_job_cron storage parameter.
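ttl_job_cron takes a standard cron expression, so if the default schedule isn’t frequent enough for your workload you can tighten it. For example (the events table is again hypothetical):

    -- Hypothetical: run this table's TTL deletion job every ten minutes.
    ALTER TABLE events SET (ttl_job_cron = '*/10 * * * *');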

What else is new in CockroachDB?

Recently, we announced that the serverless version of CockroachDB is now generally available. We also introduced new integrations with Vercel and Hashicorp. And we built a new tool for making migrations faster and easier. It’s called MOLT (Migrate Off Legacy Technology).

About the author

Dan Kelly

Dan has been producing technical blogs, videos, whitepapers, and webinars at Cockroach Labs for the last 4+ years. His mission is to identify the most pressing problems developers face and to create content to solve them. Outside of work, Dan is usually chasing his three-year-old daughter or cooking something indulgent.
