An introduction to join ordering

The development of the relational model heralded a big step forward for the world of databases. A few years later, SQL introduced a rich vocabulary for data manipulation: filters, projections, and—most importantly—the mighty join. Joins meant that analysts could construct new reports without having to interact with those eggheads in engineering, but more importantly, the existence of complex join queries meant that theoreticians had an interesting new NP-hard problem to fawn over for the next five decades.

Ever since, the join has been the fundamental operation by which complex queries are constructed out of simpler "relations". The declarative nature of SQL means that users do not generally specify how their query is to be executed—it’s the job of a separate component of the database called the optimizer to figure that out. Since joins are so prevalent in such queries, the optimizer must take special care to handle them intelligently. As we'll see, this isn't a trivial task.

In this post, we'll look at why join ordering is so important and develop a sense of how to think of the problem space. And then, in upcoming posts, we'll begin discussing ways to implement a fast, reliable algorithm to produce good join orderings.

A Refresher on SQL and Joins

Let’s do a quick refresher in case you don’t work with SQL databases on a regular basis.

A relation or table is basically a spreadsheet. Say we have the following relations describing a simple retailer:

  • customers
  • products
  • orders

The customers relation, or C, looks something like this:

customer_id | customer_name | customer_location
1 | Joseph | Norwalk, CA, USA
2 | Adam | Gothenburg, Sweden
3 | William | Stockholm, Sweden
4 | Kevin | Raleigh, NC, USA

The products relation, or P, looks like this:

product_id | product_location
123 | Norwalk, CA, USA
789 | Stockholm, Sweden
135 | Toronto, ON, Canada

The orders relation, or O, looks like this:

order_id | order_product_id | order_customer_id | order_active
1 | 123 | 3 | false
2 | 789 | 1 | true
3 | 135 | 2 | true

The cross product of two relations, written P \times O for the products and orders relations, is a new relation which contains every pair of rows from the two inputs. Here’s what P \times O looks like:

product_id | product_location | order_id | order_product_id | order_customer_id | order_active
123 | Norwalk, CA, USA | 1 | 123 | 3 | false
123 | Norwalk, CA, USA | 2 | 789 | 1 | true
123 | Norwalk, CA, USA | 3 | 135 | 2 | true
789 | Stockholm, Sweden | 1 | 123 | 3 | false
789 | Stockholm, Sweden | 2 | 789 | 1 | true
789 | Stockholm, Sweden | 3 | 135 | 2 | true
135 | Toronto, ON, Canada | 1 | 123 | 3 | false
135 | Toronto, ON, Canada | 2 | 789 | 1 | true
135 | Toronto, ON, Canada | 3 | 135 | 2 | true

However, for most applications, this doesn’t have much meaning, which is where joins come into play.

A join is a filter (or predicate) applied to the cross product of two relations. If we filter the above table to the rows where product_id = order_product_id, we say we’re "joining P and O on product_id = order_product_id". The result looks like this:

product_id | product_location | order_id | order_product_id | order_customer_id | order_active
123 | Norwalk, CA, USA | 1 | 123 | 3 | false
789 | Stockholm, Sweden | 2 | 789 | 1 | true
135 | Toronto, ON, Canada | 3 | 135 | 2 | true

Here we can see all of the orders that contained a given product.

We can then remove some of the columns from the output (this is called projection):

product_id | order_customer_id
123 | 3
789 | 1
135 | 2

This leaves us with a relation describing the products various customers ordered. Through pretty basic operations, we built up some non-trivial meaning. This is why joins are such a major part of most query languages (primarily SQL): they’re very conceptually simple (a predicate applied to the cross product) but can express fairly complex operations.
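
To make the "predicate over a cross product" view concrete, here is a minimal Python sketch (purely illustrative, not how a real database executes anything; the helper names are ours, and the rows mirror the toy tables above):

```python
from itertools import product

# The toy relations from above, as lists of rows (dicts).
products = [
    {"product_id": 123, "product_location": "Norwalk, CA, USA"},
    {"product_id": 789, "product_location": "Stockholm, Sweden"},
    {"product_id": 135, "product_location": "Toronto, ON, Canada"},
]
orders = [
    {"order_id": 1, "order_product_id": 123, "order_customer_id": 3, "order_active": False},
    {"order_id": 2, "order_product_id": 789, "order_customer_id": 1, "order_active": True},
    {"order_id": 3, "order_product_id": 135, "order_customer_id": 2, "order_active": True},
]

def cross_product(a, b):
    """Every pair of rows from a and b, merged into a single row."""
    return [{**ra, **rb} for ra, rb in product(a, b)]

def join(a, b, pred):
    """A join: the cross product, filtered by a predicate."""
    return [row for row in cross_product(a, b) if pred(row)]

def project(rel, cols):
    """Keep only the named columns."""
    return [{c: row[c] for c in cols} for row in rel]

joined = join(products, orders, lambda r: r["product_id"] == r["order_product_id"])
print(project(joined, ["product_id", "order_customer_id"]))
# [{'product_id': 123, 'order_customer_id': 3},
#  {'product_id': 789, 'order_customer_id': 1},
#  {'product_id': 135, 'order_customer_id': 2}]
```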

You might have observed that even though the size of the cross product was quite large (|P| \times |O|), the final output was pretty small. Databases will exploit this fact to perform joins much more efficiently than by producing the entire cross product and then filtering it. This is part of why it’s often useful to think of a join as a single unit, rather than two composed operations.
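
As a rough sketch of how that works, here is a simplified hash join in Python: it builds a hash table on one input's join key and probes it with the other input's rows, so the full cross product is never materialized. (This illustrates the general technique only, not any particular database's implementation.)

```python
def hash_join(build, probe, build_key, probe_key):
    """Equality join that never materializes the cross product.

    Build a hash table over the `build` rows keyed on build_key, then
    stream over `probe`, emitting a merged row only when the keys match.
    """
    table = {}
    for row in build:
        table.setdefault(row[build_key], []).append(row)
    out = []
    for row in probe:
        for match in table.get(row[probe_key], []):
            out.append({**match, **row})
    return out

# e.g. hash_join(products, orders, "product_id", "order_product_id") with the
# lists from the previous sketch touches |P| + |O| rows plus one row per
# match, rather than all |P| * |O| pairs.
```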

Some Vocabulary

To make things easier to write, we’re going to introduce a little bit of notation.

We already saw that the cross product of A and B is written A \times B. Filtering a relation R on a predicate p is written \sigma_p( R ). That is, \sigma_p( R ) is the relation with every row of R for which p is true, for example, the rows where product_id = order_product_id. Thus a join of A and B on p could be written \sigma_p(A \times B). Since we often like to think of joins as single cohesive units, we can also write this as A \Join_p B.

The columns in a relation don’t need to have any particular order (we only care about their names), so we can take the cross product in any order. A \times B = B \times A, and further, A \Join_p B = B \Join_p A. You might know this as the commutative property. Joins are commutative.

We can "pull up" a filter through a cross product: \sigma_p(A) \times B = \sigma_p(A \times B). It doesn’t matter if we do the filtering before or after the product is taken. Because of this, it sometimes makes sense to think of a sequence of joins as a sequence of cross products which we filter at the very end:

(A \Join_p B) \Join_q C = \sigma_q( \sigma_p( A \times B ) \times C) = \sigma_{p\wedge q}(A \times B \times C)

Something that becomes clear when written in this form is that we can join A with B and then join the result of that with C, or we can join B with C and then join the result of that with A. The order in which we apply those joins doesn’t matter, as long as all the necessary filtering happens at some point. You might recognize this as the associative property. Joins are associative (with the caveat that we need to pull up predicates where appropriate).
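
In symbols, when p mentions only columns of A and B and q mentions only columns of B and C:

(A \Join_p B) \Join_q C = A \Join_p (B \Join_q C)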

Optimizing Joins

So we can perform our joins in any order we please. This raises a question: is one order preferable to another? Yes. It turns out that the order in which we perform our joins can require dramatically different amounts of work.

Consider a fairly natural query on the above relations, where we want to get a list of all customers’ names along with the location of each product they’ve ordered. In SQL we could write such a query like this:

SELECT customer_name, product_location FROM
  orders
  JOIN customers ON customer_id = order_customer_id
  JOIN products ON product_id = order_product_id

We have two predicates:

  • customer_id = order_customer_id
  • product_id = order_product_id

Say we first join products and customers. Since neither of the two predicates above relates products with customers, we have no choice but to form the entire cross product between them. This cross product might be very large (the number of customers times the number of products) and we have to compute the entire thing.

What if we instead first compute the join between orders and customers? The sub-join of orders with customers only has an entry for every order placed by a customer, which is probably much smaller than every pair of customer and product. Since we have a predicate relating these two relations, we can compute this much smaller result directly, filtering as we join (there are many algorithms to do this efficiently, the three most common being the hash join, merge join, and nested-loop/lookup join).
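
Even the toy tables above show the difference: products crossed with customers has 3 \times 4 = 12 rows, while orders joined with customers has at most 3 (one per order). At realistic scales, say millions of customers and products, the gap between the cross product and the set of actual orders becomes enormous.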

Visualizing the Problem

To better understand the structure of a join query, we can look at its query graph. The query graph of a query has a vertex for each relation being joined and an edge between any two relations for which there is a predicate.

The query graph for the query above is a chain: customers connects to orders via customer_id = order_customer_id, and orders connects to products via product_id = order_product_id; there is no edge between customers and products.

Since a predicate filters the result of the cross product, predicates can be given a numeric value that describes how much they filter said result. This value is called their selectivity. The selectivity of a predicate p on A and B is defined as:

sel(p) = \dfrac{|A \Join_p B|}{|A \times B|} = \dfrac{|A \Join_p B|}{|A||B|}
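
For example, joining P and O above on product_id = order_product_id produced 3 rows out of the 9 in P \times O, so that predicate has selectivity 3/9 = 1/3.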

In practice, we tend to think about this the other way around; we assume that we can estimate the selectivity of a predicate and use that to estimate the size of a join:

|A \Join_p B| = sel(p)|A||B|

So a predicate which filters out half of the rows has selectivity 0.5 and a predicate which only allows one row out of every hundred has selectivity 0.01. Since predicates which are more selective reduce the cardinality of their output more aggressively, a decent general principle is that we want to perform joins over predicates which are very selective first. It’s often assumed for convenience that all the selectivities are independent, that is,

|A \Join_p B \Join_q C| = sel(p)sel(q)|A \times B \times C|

This assumption, while convenient, is rarely accurate in practice. Check out "How Good Are Query Optimizers, Really?" by Leis et al. for a detailed discussion of the problems with it.
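
To see why "do the selective joins first" is a decent principle, here is a toy cardinality estimator in Python. The cardinalities and selectivities are made up for illustration, the cost of a plan is simply the total size of its intermediate results, and independence is assumed throughout; real optimizers use considerably more sophisticated models.

```python
# Made-up statistics for the three relations and the two predicates.
cardinalities = {"customers": 1_000_000, "orders": 10_000_000, "products": 100_000}
selectivities = {
    frozenset({"orders", "customers"}): 1 / 1_000_000,  # customer_id = order_customer_id
    frozenset({"orders", "products"}): 1 / 100_000,     # product_id = order_product_id
}

def estimated_size(rels):
    """Estimated size of joining the given set of relations, assuming independence."""
    size = 1.0
    for r in rels:
        size *= cardinalities[r]
    for pair, sel in selectivities.items():
        if pair <= rels:
            size *= sel
    return size

def plan_cost(plan):
    """A plan is a relation name (leaf) or a pair (left, right).
    Returns (relations covered, sum of intermediate result sizes)."""
    if isinstance(plan, str):
        return {plan}, 0.0
    lrels, lcost = plan_cost(plan[0])
    rrels, rcost = plan_cost(plan[1])
    rels = lrels | rrels
    return rels, lcost + rcost + estimated_size(rels)

print(plan_cost((("orders", "customers"), "products"))[1])  # ~2e7: both joins stay small
print(plan_cost((("products", "customers"), "orders"))[1])  # ~1e11: the cross product dominates
```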

It turns out that the shape of a query graph plays a large part in how difficult it is to optimize a query. There are a handful of canonical archetypes of query graph "shapes", all with different optimization characteristics.

  • A “chain” query graph: the relations form a line, each connected to the next by a predicate.
  • A “star” query graph: one central relation has a predicate with each of the others, which are not connected to one another.
  • A “clique” query graph: every pair of relations is connected by a predicate.
  • A “cycle” query graph: a chain whose two ends are also connected, closing the loop.

Note that these shapes aren’t necessarily representative of many real queries, but they represent extremes which exhibit interesting behaviour and which permit interesting analysis.

Query Plans

To visualize a particular join ordering, we can look at its query plan diagram. Since most join execution algorithms only perform joins on one pair of relations at a time, these are generally binary trees. The plan we ended up with for the above query joins orders with customers first, and then joins that result with products.

There are also two canonical query plan shapes. The less general is the "left-deep plan", in which the relations are joined one after another: every join combines the result so far with a single base relation.

The more general form is the "bushy plan".

In a left-deep plan, one of the two relations being joined must always be a concrete table, rather than the output of a join. In a bushy plan, such composite inners are permitted.
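
For example, over four relations A, B, C, and D, (((A \Join B) \Join C) \Join D) is a left-deep plan, while ((A \Join B) \Join (C \Join D)) is bushy but not left-deep, since its right-hand input is itself a join.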

Is this actually "hard"?

In the examples we’ve seen, there were only a handful of options, but as the number of tables being joined grows, the number of potential query plans grows extremely fast—and in fact, finding the optimal order in which to join a set of tables is NP-hard. This means that when faced with large join ordering problems, databases are generally forced to resort to a collection of heuristics to attempt to find a good execution plan (unless they want to spend more time optimizing than executing!).
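
To get a feel for how fast the space blows up, here is a short counting sketch. It counts every binary join tree over n distinct relations (distinguishing left from right children and ignoring predicates entirely, so plans containing cross products are included); the recurrence is standard, and the helper names are ours.

```python
from functools import lru_cache
from math import comb, factorial

def left_deep_plans(n):
    """A left-deep plan is just an ordering of the n relations."""
    return factorial(n)

@lru_cache(maxsize=None)
def bushy_plans(n):
    """Binary join trees over n distinct relations: pick which relations go
    into the left subtree, then count the plans for each side."""
    if n == 1:
        return 1
    return sum(comb(n, k) * bushy_plans(k) * bushy_plans(n - k)
               for k in range(1, n))

for n in (3, 5, 10):
    print(n, left_deep_plans(n), bushy_plans(n))
# 3        6            12
# 5      120          1680
# 10 3628800   17643225600
```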

I think it’s important to first answer the question of why we need to do this at all. Even if some join orderings are orders of magnitude better than others, why can’t we just find a good order once and then use that in the future? Why does a piece of software like a database that’s concerned with going fast need to solve an NP-hard problem every time it receives a query? It’s a fair question, and there’s probably interesting research to be done in sharing optimization work across queries.

The main answer, though, is that you’re going to want different join strategies for a query involving Justin Bieber’s twitter followers versus mine. The scale of various relations being joined will vary dramatically depending on the query parameters and the fact is that we just don’t know the problem we’re solving until we receive the query from the user, at which point the query optimizer will need to consult its statistics to make informed guesses about what join strategies will be good. Since these statistics will be very different for a query over Bieber’s followers, the decisions the optimizer ends up making will be different and we probably won’t be able to reuse a result from before.

Once you accept that you have to solve the problem, how do you do it? A common characteristic of NP-hard problems is that they’re strikingly non-local. Any type of local reasoning or optimization you attempt to apply to them will generally break down and doom you to look at the entire problem holistically. In the case of join ordering, what this means is that in most cases it’s difficult or impossible to make conclusive statements about how any given pair of relations should be joined: the answer can differ drastically depending on all the tables you don’t happen to be thinking about at this moment.

In our example with customers, orders, and products, it might look like our first plan was bad only because we first performed a join for which we had no predicate (such intermediate joins are just referred to as cross products), but in fact, there are joins for which the ordering that gives the smallest overall cost involves a cross product (exercise for the reader: find one).

Despite the fact that optimal plans can contain cross products, it’s very common for query optimizers to assume their inclusion won’t improve the quality of query plans that much, since disallowing them makes the space of query plans much smaller and can make finding decent plans much quicker. This assumption is sometimes called the connectivity heuristic (because it only considers joining relations which are connected in the query graph).

This post has mostly been about the vocabulary with which to speak and think about the problem of ordering joins, and hasn’t really touched on any concrete algorithms with which to find good query plans.

Join ordering is, generally, quite resistant to simplification. In the general case—and in fact, almost every case in practice—the problem of finding the optimal order in which to perform a join query is NP-hard. However, if we sufficiently restrict the set of queries we look at, and restrict ourselves to certain resulting query plans, there are some useful situations in which we can find an optimal solution. Those details, though, will come in a follow-up post.

Thanks to Andy Kimball for his technical review of this post.

If you like this post, you can go even deeper on join ordering with Join Ordering Part II: The SQL.

