
Team Defense: Product and Process

On his college basketball blog, Ken Pomeroy has been arguing, for quite some time, that a more descriptive measure of a team’s three-point defense is opponent attempts rather than opponent 3PT%. While his commentary has focused on three-point shooting, the same point holds true, to a certain degree, for shots inside the arc. At first it may seem counter-intuitive that the most accurate description of how well a team defends certain areas of the floor is not field goal percentage. But if we talk the idea through, it quickly seems less radical.

Although it may not feel like it at times, NBA players take good shots. In the vast majority of cases, if a shot has little or no chance of going in, an NBA player won’t take it unless the game situation somehow dictates that it is the best option. There are, of course, stunningly obvious counter-examples – see Smith, J.R. – but when we applaud teams for taking good shots, or chastise a player for poor ones, we are splitting hairs and ignoring the countless impossible and improbable shots that they passed up. In that context we can comfortably say that, often, the best-defended shots are the ones where the defense allowed such low-percentage openings that the offense recognized it wasn’t in its best interest to take them.

This idea represents a big and often overlooked measure of quality defense. We look at shooting percentages to evaluate defense, but they often offer precious little distinction. Looking at 386 team seasons going back to 2000-2001, we find the average 3PT% to be 35.5%. The average for the top 38 teams from that span was 38.3%; the average for the bottom 38 was 32.6%. That means the top and bottom 10% of teams were separated by just 5.7 percentage points, and the other 310 team seasons from the last 13 years fell within that narrow band. Another way to demonstrate this is with variance, a statistical measure of how spread out the values in a data set are. Over the past thirteen years the variance in team 3PT% has been 0.0003. The variance in the percentage of a team’s field goal attempts that are three-pointers is 0.0008, nearly three times as much. There is a lot of difference in how frequently teams shot three-pointers, but not nearly as much difference in their accuracy.
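For readers who want to reproduce the comparison, here is a minimal sketch of the variance calculation in Python. The arrays are illustrative placeholders, not the actual 386 team-season values:

```python
import numpy as np

# Placeholder data: in the article these would be 386 team-season values
# from 2000-01 onward. The numbers below are made up for illustration.
three_pt_pct = np.array([0.355, 0.383, 0.326, 0.349, 0.361, 0.370, 0.338])
three_pt_rate = np.array([0.18, 0.26, 0.14, 0.22, 0.30, 0.19, 0.25])

# np.var computes population variance; on the real data set the article
# quotes roughly 0.0003 for 3PT% and 0.0008 for three-point attempt rate.
print("Variance of 3PT%:     ", np.var(three_pt_pct))
print("Variance of 3PA share:", np.var(three_pt_rate))
```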

This is a huge lesson for both offenses and defenses. Even if you are an average shooting team, you can make a huge difference for your offense just by taking good shots. The same thing is true defensively – you may only be able to depress an opponent’s shooting percentages so much, but you can force them to take difficult and inefficient shots. You may not be able to keep a team from shooting 40.0% on mid-range jumpers, but if you’ve kept your opponent away from the rim and left those mid-range jumpers as the only option, chances are you’ve done your job.

Last week I shared some statistical analysis I did with my shot-selection metric, Expected Points Per Shot (XPPS), estimating that about 19.4% of the variation in Offensive Rating can be explained by shot selection. Using a variation on the technique from that post, borrowed from Evan Zamir, we can illustrate exactly how much of a difference controlling an opponent’s shot selection makes. We know from that analysis that TO%, ORB%, XPPS and Shot-Making Difference explain 97% of Offensive Rating and, viewed from the other side of the ball, Defensive Rating. From that regression analysis we also know that those factors combine into this equation: ORTG or DRTG = (XPPS * 88.55655) + (Shot-Making Difference * 84.41452) + (ORB% * 48.4174) + (TO% * -128.14) + 15.22669.
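Expressed as code, the regression looks like the sketch below. The sample inputs are assumed, roughly league-average values for illustration; they are not the article’s exact averages:

```python
def estimated_rating(xpps, shot_making_diff, orb_pct, to_pct):
    """Estimate ORTG (or, from the defense's perspective, DRTG)
    using the regression coefficients quoted above."""
    return (xpps * 88.55655
            + shot_making_diff * 84.41452
            + orb_pct * 48.4174
            + to_pct * -128.14
            + 15.22669)

# Assumed, roughly league-average inputs; the article's exact averages
# would return the 102.83 figure discussed below.
print(estimated_rating(xpps=1.045, shot_making_diff=0.0,
                       orb_pct=0.27, to_pct=0.14))  # ~102.9
```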

The first thing we do is create an imaginary team that is completely average with respect to each of the variables above – ORB%, TO%, XPPS and Shot-Making Difference. This completely average imaginary team would have a DRTG of 102.83. From here we can manipulate each variable, one at a time, and measure the change in DRTG. I began by looking at FG% from each area of the floor. One at a time, I reduced the FG% from each area by one standard deviation, leaving the others at league average, and measured the change in Shot-Making Difference and the resulting Defensive Rating. I then did the same thing for the percentage of shot attempts from each area of the floor, measuring the change in XPPS and the resulting Defensive Rating. When I reduced the percentage of shot attempts from an area, I needed to replace those shot attempts in another area of the floor to make sure 100% was still represented. By that I mean if I reduced the percentage of attempts that came at the rim, I had to redistribute that share of attempts to the other areas of the floor. I did this not equally, but in proportion to the relative shares of the remaining areas (a code sketch of this redistribution follows the table below). Here were the results:

[table id=75 /]
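Here is a minimal sketch of the redistribution step described above, assuming five shot locations; the zone names and league-average attempt shares are placeholders for illustration:

```python
def redistribute_attempts(shares, reduced_zone, reduction):
    """Reduce one zone's share of scoring opportunities and hand that
    share back to the remaining zones in proportion to their existing
    sizes, so that everything still sums to 100%."""
    new_shares = dict(shares)
    new_shares[reduced_zone] -= reduction
    remaining = {z: s for z, s in shares.items() if z != reduced_zone}
    total_remaining = sum(remaining.values())
    for zone, share in remaining.items():
        new_shares[zone] += reduction * (share / total_remaining)
    return new_shares

# Placeholder league-average attempt shares by zone (they sum to 1).
league_avg = {"rim": 0.33, "mid_range": 0.31, "corner_3": 0.06,
              "above_break_3": 0.16, "free_throw": 0.14}

# e.g., cut attempts at the rim by one (assumed) standard deviation of 3%.
print(redistribute_attempts(league_avg, "rim", 0.03))
```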

In the analysis that follows I’m going to repeatedly refer to controlling attempts and controlling accuracy. With each mention there is the implication that the last 13 years of NBA data provides an accurate representation of minimums, maximums and averages. Obviously, if you could force an opponent to miss every single corner-three attempt, accuracy would be the most important factor. But we must assume that teams are largely bound by the historical patterns which set our baseline expectations.

A move of one standard deviation in each area is roughly equivalent to going from league average to the top or bottom five in each category. From the results we see that the biggest difference a team defense can make comes from limiting trips to the free throw line as a percentage of their opponents’ scoring opportunities. The second biggest impact is reducing your opponents’ shooting percentages on shots at the rim. The third biggest impact has to do with the percentage of opponents’ shot attempts that come from mid-range, and requires a little explanation.

To keep my method consistent I always reduced by one standard deviation, which in the case of mid-range jumpers actually makes your defensive efficiency worse, by nearly a point per 100 possessions. For mid-range jumpers, a standard deviation worked out to 3.2% of shot attempts, or roughly 2.6 field goal attempts per game. Since mid-range jumpers are the least efficient scoring opportunity, you want to increase the percentage of your opponent’s shots that come from this area. So forcing an opponent to take an extra 2.6 mid-range attempts per game, instead of shots from other locations, would, on average, improve your Defensive Rating by 0.98 points per 100 possessions. For context, improving a league-average team’s Defensive Rating by 1.0 points per 100 possessions would put its efficiency somewhere between the 10th-ranked Atlanta Hawks and the 11th-ranked Milwaukee Bucks. Worsening a league-average team’s Defensive Rating by 1.0 points per 100 possessions would give it the same defensive efficiency as the Detroit Pistons and the Los Angeles Lakers, ranked 16th and 17th in the league.
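The conversion from attempt share to per-game attempts is simple arithmetic. The per-game opportunity count below is an assumption chosen to match the article’s figures:

```python
# One standard deviation of mid-range attempt share, per the article.
sd_mid_range_share = 0.032

# Assumed scoring opportunities per game; a figure around 81 makes the
# article's 2.6-attempt conversion work out.
opportunities_per_game = 81

extra_mid_range_fga = sd_mid_range_share * opportunities_per_game
print(round(extra_mid_range_fga, 1))  # ~2.6 extra attempts per game
```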

In the case of mid-range jumpers we have something of a statistical oddity, in that there is more opportunity to improve your defense by forcing opponents to take them than there is in controlling accuracy. This all has to do with the trade-off in expected values between different locations. Even if you were playing against the best mid-range jump-shooting team of the past 13 seasons, you’d still be better off, on average, with your opponent taking one of those shots than a three-pointer, a shot at the rim, or a trip to the free throw line.
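To make that trade-off concrete, here is a rough expected-value comparison. All of the percentages are assumed, ballpark league figures, not values from the article’s XPPS data:

```python
# Assumed expected points per scoring opportunity by location.
expected_points = {
    "at_rim":        0.60 * 2,  # ~60% on shots at the rim -> 1.20
    "mid_range":     0.40 * 2,  # ~40% on mid-range twos   -> 0.80
    "corner_3":      0.39 * 3,  # ~39% on corner threes    -> 1.17
    "above_break_3": 0.35 * 3,  # ~35% on other threes     -> 1.05
    "ft_trip":       0.75 * 2,  # ~75% FT, two shots       -> 1.50
}

# Even an elite mid-range team shooting 45% yields just 0.90 points per
# attempt, still the lowest-value option on this list.
elite_mid_range = 0.45 * 2
print(sorted(expected_points.items(), key=lambda kv: kv[1]))
print("Elite mid-range:", elite_mid_range)
```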

We find a similar effect when we look at corner-threes, although I was surprised the effect wasn’t more pronounced. When I reduced that category by a standard deviation it actually made Defensive Rating worse. This is probably because my method redistributes those shots according to established patterns. So when I reduced corner-threes, many of those scoring opportunities went to the rim or the free throw line, scoring opportunities that produce more points on average than corner-threes, which explains the increase in Defensive Rating. Out of curiosity, I added the wild-card category, which looks at what would happen if a team reduced their opponents’ corner-three attempts by one standard deviation and turned all of those attempts into mid-range jumpers. In this case there is about as much mileage to be gained by reducing attempts as there is by controlling accuracy.

Ultimately, an effective defense controls most if not all of the factors we’ve looked at. It forces tough shots, which also results in lower accuracy. It controls the number of attempts by forcing turnovers and rebounding its opponents’ misses. But the defensive metrics we typically rely on – FG% allowed, turnovers forced, rebounds corralled – are products, measures of output. To some degree they can be swayed by random occurrences and the impact of exceptional individual abilities. Just as looking at wins and losses can obscure the true level of a team’s performance, looking only at these measures hides much about the way a team defends. By peeling back one layer and looking at the shot selection a defense allows, we can remove some of the random events and the outliers of talent and see more detail about which teams understand and implement an efficient brand of defense.

  • Remi

    Great stuff! It would be great to try this method on playoff defense, where I would assume those mid-range shots are more frequent and more decisive in the outcomes.

    Also, about tough shots: I wonder how “tough shots” could be measured.

    With the crazy 3D camera stats, do you think it’s possible to evaluate whether a shot is taken in rhythm, for example? If ball movement is tracked, it should be possible to get data on the time elapsed between the catch and the release, or between when the ball stops moving and when it is released.

    Thanks anyway. Hope the Pistons hire you!

    • http://hickory-high.com/ Ian Levy

      Thanks for reading and commenting, Remi. The potential of those SportVu cameras is nearly endless. Time elapsed from catch to release would be a great thing to look at. In the past I’ve seen them share some data about shooting percentages based on the number of dribbles taken before the shot. I was also at the Sloan Conference this weekend, where a research paper was presented on how both FG% and FGA at the rim were affected by the proximity of different players. Dwight Howard and Larry Sanders came out as the most impressive defenders; David Lee was the worst.

      Unfortunately, the data from those cameras is proprietary and closely protected. Unless your (and my) unlikely fantasy about getting hired by an NBA team comes true, I probably won’t get a chance to work with any of it.

      But again, thanks for the positive feedback, it’s much appreciated.
