
Playing With #NBARank

US Presswire

On Wednesday ESPN finished releasing the results of the 2013 iteration of their NBA Rank project. This was the third annual edition with the same basic framework intact – hundreds of ESPN-affiliated writers and analysts (including myself and the rest of the TrueHoop Network) are asked to rate 500 NBA players on a scale of 1-10. Those ratings are collected and averaged, providing a system for ranking every player likely to play in the NBA this season. The one twist this year was that voters were asked to rate players based on “the overall level of play for each player for the upcoming NBA season.” This would theoretically lead to voters adjusting their opinions to include injury concerns and similar things which might limit a player’s minutes.

LeBron James, in the surprise of the century, was revealed as the top-ranked player. There were plenty of surprises along the way, but I always feel that the most interesting part of this project is not what it says about the players, but what it says about us, the voting bloc, and how we view the league and its players. For the last two years I’ve done some analysis after the full results were revealed. Two years ago I looked at how the project handled the separation in talent among the league’s best players. Last season I looked at how we handled (and undervalued) the incoming rookie class and how NBA Rank compared to rankings produced by other statistical metrics. I have a few different things to explore this year.


One of the really nice things ESPN did this year was track not just the average rating given to each player by the voters, but also the standard deviation of those ratings. This is a measure of how wide the range of votes was on each player, highlighting those players about whom we, as a group, disagreed most. I’ve put together a Tableau visualization that includes each player’s rank, rating and the standard deviation of their rating. The standard deviation is represented by the color of each player’s bar: the darker the shade of green, the more disagreement there was about that player. You can use the filters at the bottom to narrow the focus by rank, rating or standard deviation. You can also use the player or team filters on the side to focus on specific comparisons.
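For anyone curious about the mechanics, the two summary numbers behind the visualization (a player’s average rating and the standard deviation of the votes) are simple to compute. This is just a sketch with invented vote values, not actual ballot data:

```python
# Sketch: the two summary numbers per player in the NBA Rank data are the
# mean of all votes and the standard deviation of those votes.
# The vote values below are invented for illustration.
from statistics import mean, pstdev

votes = {
    "Player A": [10, 10, 10, 10],  # total agreement among voters
    "Player B": [9, 7, 8, 5],      # wide disagreement among voters
}

for player, ratings in votes.items():
    avg = mean(ratings)            # the published rating
    spread = pstdev(ratings)       # population std dev: how much voters disagreed
    print(f"{player}: rating {avg:.2f}, std dev {spread:.2f}")
```

A standard deviation of 0, as in the first case, can only happen when every ballot carries the identical number.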

Unsurprisingly, there was very little disagreement at the very top. I was a little surprised, however, to see so much fluctuation at the very bottom, though I imagine some of that variation comes from voters spending less time analyzing the potential production of Donald Sloan or Dexter Pittman. While most of the variation was clustered toward the bottom, there were a few players at the top who drew out a wide range of opinions. Among the top 100 players ranked, here are the 10 who had the largest variations in their votes.

[Table: the ten top-100 players with the largest standard deviations in their NBA Rank votes]

It’s an interesting mix of players. Some are veterans winding down their careers, and the disagreement is likely over how quickly that process is happening. Some are young players creating dissonance between current production and possible potential. There are also players who will find themselves in new situations this season, with the same team or a new one, which creates different perceptions of their value. If you’re curious, Kobe Bryant, who finished 25th in NBA Rank this year with a rating of 7.78, had a standard deviation of 1.223 in his votes, well below the players in this group.

If you’re wondering which player the group agreed upon the most, that would be LeBron James. His first-place 10.0 rating came with a standard deviation of 0, which means that every single voter gave him a perfect 10. He may not have been a unanimous choice for MVP, but he was unanimous here.

Offense and Defense

One of the complaints I heard a lot about this year’s NBA Rank results was that, by putting James Harden, Stephen Curry and Kyrie Irving in the top 10, the voters had largely ignored defensive deficiencies and focused on offense. It’s a legitimate issue given that the impact of each of those players is decidedly one-sided. However, the numbers don’t necessarily bear it out when you look at the results in their entirety.

The results that we voters get to see also include the outputs of several correlations run between the ratings and different statistics. Oddly enough, the strongest of those correlations was with points per game, coming in at 0.770. However, if we eliminate the ratings of players who didn’t actually play last season (rookies, Derrick Rose, Andrew Bynum) and drop four highly rated players who missed significant time with injury (Danny Granger, Rajon Rondo, Kevin Love, Eric Gordon), the correlation between the NBA Rank ratings and Win Shares jumps to the top, at 0.860.

Win Shares is actually the combination of separate calculations for offensive and defensive contributions. If we run separate correlations for the NBA Rank ratings against Offensive Win Shares and Defensive Win Shares, we actually see that the defensive side of the ball has a slightly stronger connection, 0.775 to 0.769.
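For a rough sense of what those correlation numbers mean, here is a from-scratch Pearson correlation run over a handful of invented rating/Win Shares pairs; the actual exercise used the full, filtered player pool:

```python
# Pearson correlation coefficient, computed from scratch.
# The sample ratings and Win Shares values below are invented.

def pearson_r(xs, ys):
    """Pearson r between two equal-length sequences of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ratings = [9.8, 9.1, 8.5, 7.9, 6.2]         # invented NBA Rank ratings
win_shares = [19.3, 15.9, 13.1, 11.2, 5.4]  # invented Win Shares totals
print(round(pearson_r(ratings, win_shares), 3))
```

A value near 1 means the two lists rise and fall together almost in lockstep; 0.770 and 0.860, the numbers reported above, indicate strong but imperfect agreement.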

Alphabetical Order

One of the things I really noticed as a voter this year was how mentally exhausting it is to try to rank 500 players. The survey is sent out to us in two alphabetically ordered 250-player sections, but they can’t be saved and each needs to be completed in a single sitting. After rating 100 players, many of whom have played only a handful of actual NBA minutes, it can be difficult to summon the intellectual endurance to dig into the available evidence and arrive at a fair assessment of the potential of the next 150. This year I actually found myself feeling frustrated and angry with some of the players appearing at the end of the survey.

“I’ve just rated 249 players and now I have to offer an insightful opinion on Solomon Jones? Really?”

Reflecting back on my answers, I had the sense that I may have actually been punishing some of those fringe players at the end of each section, taking out my frustration on them for being bad or unknown and for chewing up time I would rather have spent on players I actually liked and enjoyed watching. I was curious whether that effect permeated the entire survey and whether it was something I could actually measure.

To measure whether I, and the rest of the voting panel, was being unduly cruel to those players at the end of each alphabetical set, I needed some way of measuring the accuracy of our votes. To do that I used a projection based on Win Shares, which, if you remember, explained about 70.8% of the variation in NBA Rank. This is not perfect, but it’s enough to give us a rough idea. I projected each player’s NBA Rank based on Win Shares and then calculated the difference between that projection and each player’s actual rank. I then ordered the players alphabetically and ran the correlation between their alphabetical order and that difference.
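The procedure can be sketched in a few lines. Note that this toy version projects rank by simply ordering players on Win Shares, rather than the regression-based projection used above, and every name and number in it is invented:

```python
# Toy version of the alphabetical-fatigue check.
# Names, ranks and Win Shares are invented; the real analysis used ~500 players
# and a regression-based projection rather than a simple Win Shares ordering.
players = [
    ("Adams", 120, 4.1),   # (name, actual NBA Rank, last season's Win Shares)
    ("Baker", 80, 6.0),
    ("Young", 150, 5.0),
    ("Zeller", 200, 2.0),
]

# Project a rank from Win Shares: more Win Shares -> better (lower) rank.
by_ws = sorted(players, key=lambda p: -p[2])
projected = {name: i + 1 for i, (name, _, _) in enumerate(by_ws)}

# Error = actual rank minus projected rank; alphabetical position is 1..n.
alpha_pos, errors = [], []
for i, (name, actual, _) in enumerate(sorted(players)):
    alpha_pos.append(i + 1)
    errors.append(actual - projected[name])

# Pearson correlation between alphabetical position and ranking error.
n = len(alpha_pos)
mx, my = sum(alpha_pos) / n, sum(errors) / n
cov = sum((x - mx) * (y - my) for x, y in zip(alpha_pos, errors))
var = (sum((x - mx) ** 2 for x in alpha_pos)
       * sum((y - my) ** 2 for y in errors)) ** 0.5
print(round(cov / var, 3))
```

A positive correlation here would mean players later in the alphabet tend to land further from their projected rank, which is exactly the fatigue effect I was worried about.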

[Table: correlations between alphabetical order and ranking error, by survey section]

None of these results are statistically significant, but there was a small relationship, both in the first group of players and across the entire block of players overall, between the order in which voters were asked to rank players and the “inaccuracy” of their rankings. These relationships may be small because they represent a connection that doesn’t really exist, or it may be that much of the panel is smarter than me. I talked with a few other voters who said they anticipated this frustration, having felt it in previous years, and cast their votes in reverse alphabetical order, starting from the bottom of the survey.


Last year Steve McPherson did some NBA Rank analysis for Hardwood Paroxysm, comparing the rankings to the initial player ratings from the video game NBA2K13. We were able to get ahold of the initial player ratings (they change throughout the year) for this year’s version of the game as well and run many of the same comparisons. We found many similar patterns, including generally higher ratings from the game. (We also adjusted the NBA2K14 ratings using the same system Steve did to put them on the same scale.)
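Steve’s exact adjustment isn’t reproduced here, but the general idea, mapping one rating scale onto another with a linear rescale, looks something like this (the rating range and values are assumptions for illustration):

```python
# Generic min-max rescale: map video-game-style ratings (assumed here to run
# roughly 25-99) onto NBA Rank's 0-10 scale so the two systems line up.
# This is an illustrative stand-in, not Steve's actual adjustment.

def rescale(value, old_lo, old_hi, new_lo=0.0, new_hi=10.0):
    """Linearly map value from [old_lo, old_hi] to [new_lo, new_hi]."""
    frac = (value - old_lo) / (old_hi - old_lo)
    return new_lo + frac * (new_hi - new_lo)

nba2k = [99, 90, 75, 50]  # invented 2K-style ratings
adjusted = [round(rescale(r, 25, 99), 2) for r in nba2k]
print(adjusted)
```

Any linear rescale like this preserves the ordering and relative gaps between players, which is why the comparisons below remain meaningful after the adjustment.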


It makes sense to inflate the values of players in a video game; no one wants to play with a digital avatar that can’t meet whatever your wild expectations happen to be. However, I find it extremely interesting that the separation among the very top tier of players is very similar in both systems. It isn’t until about the 15th-best player that the two systems really begin to separate.

Here are the ten players with the biggest differences in rating between the two systems, at either end of the spectrum.

Biggest Differences

Here are the top fifteen from each system. Players who appeared on only one list have their ranking from the other system listed in parentheses.

[Table: top fifteen players from each system]

If you take into account the idea of game play and the different purposes of each system, it’s hard to argue too vociferously with either list. Unless you’re a Kobe diehard; then by all means argue as loudly as you’d like.

Hat tip to David Vertsberger (@_Verts) for his help in obtaining the NBA2K14 Rankings and for his ideas in how to break them down.

