Effects of Shot Location, and How Defenses Might be Changing

The advent of viewing basketball probabilistically fundamentally changed how the game is managed. Basketball, to the extent that it was considered scientifically at all, used to be a function of physics: of arcs, gravity, and momentum.

The realization that basketball could be viewed in terms of probabilities (the odds that a shot goes in, the odds that a player shoots a given percentage, the odds that a team will win) is something we take for granted now, since it has driven the game for almost a decade, but it was not an insignificant realization.

Of course, hand in hand with this came the use of expected value to define shot selection. From the MoreyBall basics of "a three-pointer with a 28% chance of success is still more valuable than a league-average two-point jump shot," to Ian Levy's (of Nylon Calculus) Expected Points per Shot, we've used probability to define and quantify the efficacy of shot selection, and even the skill of teams, for a long time now.
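The arithmetic behind that MoreyBall comparison is just points-per-attempt expected value. Here's a minimal sketch; the 28% figure is from the quote above, and the 40% midrange percentage is an illustrative stand-in for a league-average two-point jumper, which varies by season:

```python
def expected_points(make_pct: float, shot_value: int) -> float:
    """Expected points per attempt = probability of a make times its value."""
    return make_pct * shot_value

# A 28% three-pointer still edges out a ~40% midrange two.
three_ev = expected_points(0.28, 3)  # 0.84 points per attempt
mid_ev = expected_points(0.40, 2)    # 0.80 points per attempt

print(f"28% three: {three_ev:.2f} PPS, 40% midrange two: {mid_ev:.2f} PPS")
```

The break-even make rate for a three against a 40% two is just 0.80 / 3 ≈ 26.7%, which is why even below-average three-point shooting can beat average midrange shooting.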

Teams have adjusted, too, and are now trying to shoot more threes and fewer midrange jumpers, as a way of conforming to basic tenets of probability. We often talk about how progressive the NBA is, but the league is still in the process of making the shift from a game of physics to a game of odds, no matter how far it feels like it has come in quantifying sport performance.

The application of those probabilistic principles has, though, so far resulted in better performance for those teams, on average.

People have been correlating the change in shot selection with success since 2004, but as an update, I analyzed how the different shot locations have affected offenses recently, over the last three seasons.

[Charts: offensive rating vs. share of points from close range, midrange, and three-point range (ORTG_Close, ORTG_Mid, ORTG_3)]

The percentage of a team’s points¹ that came from the paint didn’t seem to have an effect on a team’s offense, though I imagine that’s because a lot of different kinds of shots can come from the paint.

For example, uncontested shots at the rim are the best shots in the game, while a floater is one of the worst.

There are a lot of shot types lumped into "shots from the paint" that can muddle the correlation between "close" shots and offense, but I think it's important to look at attempts this way.

After all, players can more or less choose to take threes or midrange jump shots whenever they want (though they might be contested), but they can't simply get a shot at the rim whenever they want.

So, if we’re considering a player’s decision making, the decision to try for a shot at the rim could end in a contested, difficult floater just as easily as it can end in a layup. The result is that the decision to try a shot at the rim doesn’t necessarily correlate with team success on offense.

More telling, though, is that there's a decided negative correlation between shots from the midrange and offensive efficiency, and a decided positive correlation between threes and offense. It's further proof of what we know: in general, MoreyBall works.

Some of the trends within those correlations, particularly in the correlation between mid-range shots and offensive efficiency, are worth discussing, in large part because they’re trends that have developed over the last few seasons that we might not be fully aware of².

We know that more midrange attempts correlate negatively with offense, because there's a smaller expected value when you shoot from midrange. You may notice, though, that the correlation is not strictly linear.

The negative trend levels out a bit over the majority of the data points, then becomes steeper at both lower and higher percentages of points from midrange.
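One rough way to check for that kind of curvature is to fit both a linear and a quadratic model and compare their residual error. This is only a sketch with synthetic data standing in for the real 90-team sample; the coefficients below are invented for illustration:

```python
import numpy as np

# Synthetic stand-in for (midrange share of points, offensive rating)
# across 90 team-seasons. The quadratic shape here is assumed, not real data.
rng = np.random.default_rng(0)
mid_share = rng.uniform(0.10, 0.35, 90)
ortg = 110 - 80 * (mid_share - 0.13) ** 2 + rng.normal(0, 0.5, 90)

# Fit degree-1 and degree-2 polynomials by least squares.
lin = np.polyfit(mid_share, ortg, 1)
quad = np.polyfit(mid_share, ortg, 2)

# Sum of squared residuals for each fit.
lin_sse = np.sum((ortg - np.polyval(lin, mid_share)) ** 2)
quad_sse = np.sum((ortg - np.polyval(quad, mid_share)) ** 2)
print(f"linear SSE: {lin_sse:.1f}, quadratic SSE: {quad_sse:.1f}")
```

Because the quadratic model nests the linear one, its residual error is always at least as small; the real question, which a formal test (like the one referenced in footnote 2) addresses, is whether the improvement is larger than chance would produce.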

There’s something to be gleaned, here.

I’m going to ignore the inflection point in the data that occurs before roughly 13.1% of points scored come via midrange, because almost all of those data points are the Houston Rockets, and it’s unclear from just 5 or 6 data points whether or not there’s actually an exponential relationship between having drastically lower than average reliance on mid-range shots and a team’s offense.

There does, though, seem to be a clear, exponentially negative relationship between greater reliance on the midrange and a worse offense.

Meaning, more midrange shots don't simply mean a proportionally worse offense. For example, a team with noticeably smaller-than-average reliance on the midrange (say, last season's Mavericks) will not be that much better off than a team at roughly the league average (say, last season's Portland Trail Blazers).

However, a team with far greater-than-average reliance on the midrange (Boston, Detroit, Washington) will face a much greater drag on its offense than the boon Dallas would gain.

Which brings me back to my hypothesis from my debut piece on Analytics Game — spawned from an expected value analysis — that a team that’s more flexible offensively, especially from the midrange, would be better against elite defenses.

This could, theoretically, explain some of the exponential relationship. A team like Portland might not be as poorly off, relative to teams with below-average reliance on the midrange, as teams like the Philadelphia 76ers, in part because Portland might have the weapons to be more flexible against top defenses.

This, however, does not appear to be the case on the face of it. I ran regressions on each team’s reliance on shots from each zone against how those teams performed against the 5-best defenses of their season, going back three seasons.

There was, for every zone, no relationship between what shot a team relied on and how well they performed against elite defenses.
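The shape of that zone-by-zone check can be sketched as a simple correlation per zone. The arrays here are synthetic placeholders for the real inputs (three seasons, 90 teams, each team's share of points by zone and its rating differential against top-5 defenses); the variable names are mine, not from the original analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder: each team's net rating differential vs. top-5 defenses.
elite_d_diff = rng.normal(0.0, 3.0, 90)

correlations = {}
for zone in ["close", "mid", "three"]:
    # Placeholder: each team's share of points from this zone.
    share = rng.uniform(0.1, 0.5, 90)
    # Pearson correlation between zone reliance and elite-D performance.
    correlations[zone] = np.corrcoef(share, elite_d_diff)[0, 1]
    print(f"{zone}: r = {correlations[zone]:+.2f}")
```

With real data, a near-zero r for every zone is what "no relationship" looks like in this setup, though with samples this small the p-values matter as much as the point estimates.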

[Charts: differential vs. elite defenses by share of points from close range, midrange, and three-point range (EliteDDiff_Close, EliteDDiff_Mid, EliteDDiff_3)]

There is a bit of a rub, though: defenses are changing rapidly. The most elite defenses in 2012 didn't look exactly like the most elite defenses now (except the Chicago Bulls, probably).

What will and won’t work against those defenses, then, is changing too, so it would be hard to find one factor that does or doesn’t affect a team’s performance against those defenses when you go back multiple years.

Shot location didn’t seem to impact a team’s performance against elite defenses in any individual year either, though, except, tellingly, 2013-2014³.

[Charts: 2013-14 differential vs. elite defenses by zone (14EffDiffvsEliteD_Close, 14EliteDDiff_Mid, 14EliteDDiff_3)]

That all three of these correlations are parabolic is really, really telling regarding how modern defenses are adjusting to the changing offensive philosophies, I think.

Look, none of these correlations are strong, I get that. For one, the fact that “Elite D Differential” is gathered from, at max, 20 games, and at minimum, 10 games, is important. Who knows if that number, in and of itself, is even significant?

Still, all of the correlations follow roughly the same pattern of, “teams do exponentially better as they have more of X shot in their arsenal, until all of a sudden, they’re doing worse.”

I think that this fits what we think we know about NBA defenses. Elite defenses have been really good at stopping teams that do one or two things well. So, teams that aren’t as proficient from a certain zone, and teams that over-rely on a certain shot, do worse than teams who are average.

Or, put more simply, teams that are flexible, in terms of where they can shoot from successfully, appear to do marginally better than teams that are not. And, just as importantly, this was not actually the case even one season ago.

In another study, I found that having the ability to run a greater number of plays wasn’t correlated at all with performance against elite defenses. It might be the case that having players who can turn a fewer number of plays into a greater number of effective shots has an effect on a team’s ability to hold its own against elite D.

This will be an interesting trend to watch over the next few years, to see whether or not any of these results were actually significant.

***

¹Percentage of a Team’s Points from “X” region includes both a team’s volume of shots and their performance therein, which will dilute the results of the correlations a bit because of muddled causality, but will be a more accurate representation of a team’s “ability” from any given region.

²For the same reason that the specifics of these findings may be interesting, they may be a little suspect. The correlation is highly statistically significant over three seasons (90 teams' worth of data) per the standard hypothesis test, but there's only about 89% confidence that the difference between a standard linear correlation and the correlation actually found between midrange attempts and offense is significant.

³Reducing the data to one year is just as problematic as using three seasons' worth, since data from one season is not statistically significant with an R^2 of about .12 or less, but it felt worthy of inclusion.
