Offensive and Defensive Variance
“Increase the variance” – Daryl Morey
The quote above was Morey’s answer in a Reddit AMA last year during the playoffs to a question about what changes his Rockets needed to make to prevail in their matchup with the Thunder. It’s an answer and an idea that have really stuck with me.
A theme that has cropped up in my analysis a lot lately is how misleading averages can be. Many of the most commonly used basketball statistics are averages – field goal percentages, per-100-possession efficiency measures, even per-game and per-36-minute statistics. But an average flattens all the values above and below it into a single number, a numeric shorthand that sums up the different data points in the neatest and tidiest way possible.
When we talk about variance we’re adding context to an average, measuring the spread between the highs and lows of the data. Circling back to Morey’s quote, what he was saying (I assume) was that for his team to have a chance they needed to increase the variance of their offensive and defensive efficiency and hope that they could push their highs high enough to catch a break and get past the Thunder’s superior talent. And that’s exactly where variance plays a role in wins and losses. If every team performed exactly to their averages the better team would win every single game. Upsets happen when those peaks and valleys overlap in unexpected ways.
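Morey's logic can be illustrated with a quick Monte Carlo sketch. The margins and standard deviations below are made up; the point is only that an underdog with a wider spread of outcomes steals more games:

```python
import random

random.seed(0)

def underdog_win_rate(mean_margin, sd_margin, n_games=100_000):
    """Share of simulated games the underdog wins, modeling its
    per-game scoring margin as a normal distribution (hypothetical)."""
    wins = sum(1 for _ in range(n_games)
               if random.gauss(mean_margin, sd_margin) > 0)
    return wins / n_games

# The same underdog, 4 points worse per game on average, at two
# different levels of variance:
consistent = underdog_win_rate(-4, sd_margin=8)
volatile = underdog_win_rate(-4, sd_margin=14)
print(consistent, volatile)  # the high-variance version wins more often
```

The talent gap is identical in both cases; only the spread changes, and the spread alone is worth several extra wins per hundred games to the weaker team.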
Using the game logs from every team for last season we can actually look at not just the offensive and defensive efficiency from each team but also how much variance each displayed in their performance on each side of the ball. But, before we go any further I need to explain a twist of language. Variance is an actual mathematical term with an attached equation. However, when I use the word variance throughout the rest of this piece I’ll be referring to standard deviations, a related measurement that I think does a better job of illustrating the variations. I apologize to my high school math teachers for the imprecise use of terminology.
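As a concrete (and entirely made-up) example of what the standard deviation captures, two teams can post the same average ORTG over a stretch of games while spreading their performances very differently:

```python
import statistics

# Hypothetical game-by-game Offensive Ratings for two teams with the
# same average efficiency but very different spreads.
steady = [108, 110, 109, 111, 110, 112, 110]
streaky = [98, 122, 104, 118, 100, 120, 108]

for name, games in [("steady", steady), ("streaky", streaky)]:
    mean = statistics.mean(games)
    sd = statistics.pstdev(games)  # population standard deviation
    print(f"{name}: mean ORTG {mean:.1f}, standard deviation {sd:.1f}")
```

Both teams average an ORTG of 110 here, but the second team's standard deviation is several times larger – exactly the distinction the averages alone would hide.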
The visualization below shows each team’s performance from last season, marked by their Offensive Rating (points per 100 possessions, ORTG) and by the standard deviation in their ORTG (a bigger standard deviation means more variance). The tab at the bottom will let you switch over and see the same display, but with each team’s Defensive Rating (DRTG).
With this visualization we can begin to see some separation between teams with comparable levels of average efficiency but very different levels of variance. For example, the aforementioned Houston Rockets and the Denver Nuggets had very similar levels of offensive efficiency last season but the standard deviation in the Rockets’ ORTG was about 3 points per 100 possessions greater, meaning they were much more likely to drastically exceed or undershoot their season average on any given night. We see a similar example on defense where the Dallas Mavericks and Brooklyn Nets had almost identical DRTGs but standard deviations that differed by well over 3 points per 100 possessions.
Now that we’ve identified variance on both sides of the ball, the question becomes why is it there? What elements create variance?
Using the collected data I ran a series of correlations between different variables and each team's offensive and defensive variance. These are correlations between season-long numbers, not a game-by-game comparison. Here's what I found:
The strongest relationship was between the percentage of a team's shots that were three-pointers and offensive variance. This makes sense intuitively and numerically. Because of their distance from the hoop, three-point shots are less likely to go in than shots from anywhere else on the floor. However, that level of difficulty is rewarded with an extra point if the shot goes in (this is as simple an example of variance as I can think of). Turnovers, on both sides of the ball, also have strong relationships with variance, as do the percentage of a team's shots that are mid-range jumpers.
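That trade-off can be made precise: a single shot is a Bernoulli event, so a three and a two can carry the same expected value while the three carries much more variance. A minimal sketch with illustrative shooting percentages:

```python
def shot_stats(make_prob, points):
    """Expected value and variance of a single shot worth `points`."""
    ev = make_prob * points
    var = make_prob * (1 - make_prob) * points ** 2
    return ev, var

# A 35% three-pointer and a 52.5% two-pointer have identical expected
# value but very different variance (illustrative numbers):
ev3, var3 = shot_stats(0.35, 3)   # EV 1.05, variance ~2.05
ev2, var2 = shot_stats(0.525, 2)  # EV 1.05, variance ~1.00
```

Same points per shot on average, roughly double the variance – which is why a three-heavy shot diet widens the spread of a team's nightly ORTG.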
Interestingly though, the relationship between variance and pace was relatively small. It showed up on defense where teams that played at a slower pace were slightly more likely to be consistent defensively. But on the other side of the ball there was no statistically meaningful relationship – teams that played at a faster pace were essentially no more or less likely to have a lot of offensive variance.
I find this fascinating because pace and variance are often closely linked in these sorts of discussions. For years the idea persisted that reducing pace was the way to instill variance in a game – the fewer possessions there are in a game, the more likely the outcome is to be influenced by a handful of random, could-go-either-way events and the less likely the game is to be decided by the static margin in talent between the two teams. But over the past two years we've seen several teams (including last year's Rockets and this year's Sixers) chase variance in the opposite direction by relentlessly pushing the pace and trying to capitalize on the ensuing chaos with superior conditioning and athleticism. Obviously I'm working with a very small data set here, just a single season, and drawing conclusions for the league as a whole instead of individual teams, but the relationship between pace and variance seems to be much smaller than I would have anticipated.
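The team-level correlations discussed above are straightforward Pearson coefficients. Here's a minimal sketch, with made-up numbers standing in for each team's three-point attempt rate and ORTG standard deviation:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical team-level inputs (illustrative, not the real data):
three_point_rate = [0.22, 0.25, 0.28, 0.31, 0.35]
ortg_sd = [5.1, 5.6, 5.4, 6.2, 6.8]
r = pearson_r(three_point_rate, ortg_sd)
print(round(r, 3))  # a strong positive correlation
```

With 30 teams and a full season of game logs, the same function applied to each pair of season-long variables yields the relationships described here.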
Introducing variance into the context of offensive and defensive effectiveness raises plenty more questions. Obviously, being consistently bad is trouble on either side of the ball. But assuming a team is relatively effective, is there an advantage to having a certain level of performance variation?
The answer is probably not. More variance means more good AND bad to work with, and the ultimate goal is still to be as efficient as possible, as often as possible. There were small relationships between variance and efficiency, both on offense (a correlation of 0.110) and on defense (a correlation of 0.276). There was also a small relationship between total variance (the sum of ORTG and DRTG standard deviations) and Net Rating (-0.250). That means teams with a positive Net Rating had, on average, slightly less overall variance. But again, by comparing to Net Rating we are looking at an average measure of performance.
The one other place I looked was the difference between a team's actual win total and their Pythagorean Win Total (a projection of how many games a team should win based on their Net Rating). This formula has been shown to be a very accurate method of projection, but every year teams under- and over-perform these projections by slight margins. I thought perhaps variance might help explain some of those differences.
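The Pythagorean projection itself is simple to compute. A minimal sketch, noting that the exponent varies by source (roughly 13.91 and 16.5 are both in common use) and that the ratings below are illustrative:

```python
def pythagorean_wins(ortg, drtg, games=82, exponent=13.91):
    """Projected wins from efficiency ratings. The exponent varies by
    source; 13.91 and 16.5 are common choices for the NBA."""
    win_pct = ortg ** exponent / (ortg ** exponent + drtg ** exponent)
    return games * win_pct

# A team outscoring opponents by 5 points per 100 possessions projects
# to roughly 54 wins with this exponent:
print(round(pythagorean_wins(110, 105)))
```

A perfectly average team (ORTG equal to DRTG) projects to exactly 41 wins; the gap between this projection and a team's actual record is the residual I checked against variance.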
But again, nothing conclusive. The difference between a team's actual win total and their Pythagorean Win Total had a 0.188 correlation with offensive variance, 0.070 with defensive variance and 0.72 with total variance. In the end this just reinforces that, as a long-term strategy, variance leads to chaos, which generally leads to losses. But in the set scenario of a single game, or a small handful of games like in the playoffs, variance (intentional or otherwise) is often what dictates the final outcome.

Andrew Johnson