## The High-Performance SQL Blog

# Reuse Calculations in the Same Query with Lateral Joins

### Reusing Calculations

Reusing parts of computations has long been a wart in SQL. Let’s say you want to compute confidence intervals for your signup rate. You’ll have to use the standard error calculation twice: once for the upper bound, and once for the lower bound.

Your options are to just do the math twice, or to put the calculation in an ugly — and possibly inefficient — subquery.

This is where lateral joins shine. Postgres’s latest join type solves this problem by letting you make the calculation in one join, and then reference the results of that calculation in subsequent joins.

### Computed Columns with Duplication or Subqueries

Let’s start with our input data, a roll-up table of signups by month:
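(The table itself isn't reproduced here. As a stand-in, here's a minimal sketch of what `signups_by_month` might look like; the column names come from the queries below, but the types are an assumption — numeric rather than integer, so that `signups / visitors` yields a fraction instead of truncating to zero:)

```sql
create table signups_by_month (
  month    date,
  signups  numeric,  -- assumed numeric so the division below doesn't truncate
  visitors numeric
);
```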

Now let’s compute the confidence intervals around our signup rate each month. The math is:

n = number of users

x = number of signups

p = probability of signup = (x / n)

se = standard error of p = sqrt((p * (1 - p)) / n)

confidence interval = p ± (1.96 * se)
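As a quick sanity check with illustrative numbers (not from the post): with 1,000 visitors and 100 signups, p = 0.1, se = sqrt(0.1 * 0.9 / 1000) ≈ 0.0095, so the interval is roughly 0.081 to 0.119. As a one-off query:

```sql
select
  0.1 - 1.96 * sqrt(0.1 * 0.9 / 1000) as low,   -- ≈ 0.0814
  0.1 + 1.96 * sqrt(0.1 * 0.9 / 1000) as high;  -- ≈ 0.1186
```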

If we just recalculate the signup rate and standard error each time, the naive SQL is:

```sql
select
  month,
  signups,
  visitors,
  (signups / visitors) as avg,
  (signups / visitors)
    - (1.96 * sqrt((signups / visitors)
      * (1 - (signups / visitors)) / visitors)) as low,
  (signups / visitors)
    + (1.96 * sqrt((signups / visitors)
      * (1 - (signups / visitors)) / visitors)) as high
from signups_by_month
```

What a mess! This is way too hard to read, and we’re obviously duplicating a bunch of work in the *low* and *high* calculations. Nonetheless, the results are handy:

In Periscope, we can see those same results visually:

Let’s pick apart the issues in our naive SQL. We calculate the probability five times and the standard error twice. That’s way too much duplication for a simple calculation.

We can remove needless duplication by adding subqueries and having each layer build upon previous calculations:

```sql
select
  signups,
  visitors,
  p as avg,
  p - 1.96 * se as low,
  p + 1.96 * se as high
from (
  select
    *,
    sqrt(p * (1 - p) / visitors) as se
  from (
    select
      *,
      signups / visitors as p
    from signups_by_month
  ) probability
) standard_error
```

This query is probably more efficient because it only does each stage of the calculation once. But despite removing all the duplication, it’s arguably even harder to read.

Until now, these were our only two options for a calculation like this. But as of Postgres 9.3, there’s a better way!

### Computed Columns with Lateral Joins

A lateral join is a join that allows subqueries in the right-hand-side of the join to reference columns defined in the left-hand-side of the join.
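Here's a tiny self-contained illustration (hypothetical values, not from the post): the lateral subquery on the right can read `x` from the table expression on its left, which an ordinary subquery in the from list cannot do.

```sql
select x, y
from (values (1), (2)) as t(x),
  lateral (select x * 10 as y) as calc;
-- yields (1, 10) and (2, 20)
```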

For our confidence interval calculation, we’ll define probability first, then reference probability when defining standard error, and then reference standard error when defining lower and upper bounds.

Here’s the new query:

```sql
select
  month,
  signups,
  visitors,
  p as avg,
  low,
  high
from signups_by_month,
  lateral (select signups / visitors as p) probability,
  lateral (select sqrt(p * (1 - p) / visitors) as se) std_error,
  lateral (select p - 1.96 * se as low) lower_bound,
  lateral (select p + 1.96 * se as high) upper_bound
```

Careful readers will notice we’re using a cartesian join. This is a join of the form *from a, b, c*, which is shorthand for *from a join b on true join c on true*. It creates a row for every possible combination of rows in the joined tables. In our case, where each joined subquery computes a single value, this just has the effect of appending each calculation’s result to the row, making it available to the next calculation.

The final version is almost as easy to read as a mathematical formula! We simply define a variable, use it in the next calculation, rinse and repeat.
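If you prefer explicit join syntax, the comma form can be spelled out with `cross join lateral`, which behaves identically here:

```sql
select month, signups, visitors, p as avg, low, high
from signups_by_month
  cross join lateral (select signups / visitors as p) probability
  cross join lateral (select sqrt(p * (1 - p) / visitors) as se) std_error
  cross join lateral (select p - 1.96 * se as low) lower_bound
  cross join lateral (select p + 1.96 * se as high) upper_bound;
```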

All in all, a much cleaner, easier query. Thanks Postgres!