
Draft Projections


This is the third year I’ve published my Draft Similarity Scores, and the feedback I’ve received from this project has consistently requested ways to make it more useful in predicting future performance. That’s not the purpose of the Similarity Scores per se; they’re historical, measuring and comparing past performance. Still, I’ve taken a stab at turning them into statistical projections.

Below is an explanation of how I built these projections. If the process doesn’t hold any interest for you, skip to the projections and analysis at the bottom. For now I’ve only covered the players projected by DraftExpress to be selected in the First Round.


These projections are built from my similarity scores, using the closest comparables as a guide for each draft prospect. To begin with, I weeded out comparison players in my similarity scores who didn’t stick in the NBA long enough to create a reasonably stable sample of production. I only kept players who played at least three seasons in the NBA, and who played more than 400 minutes in at least two of those three seasons. This narrowed my comparison pool down to 135 players.
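The filtering step could be sketched like this; the player names and season minute totals below are invented for illustration, not taken from my database:

```python
# Hypothetical sketch of the comparison-pool filter: keep only players with
# at least three NBA seasons, of which at least two had more than 400 minutes.
players = {
    "Player A": [1200, 350, 900],   # minutes in each of first three seasons
    "Player B": [380, 390, 410],    # only one season over 400 minutes
    "Player C": [500, 200],         # fewer than three seasons
}

def qualifies(season_minutes):
    """Three or more seasons, with 400+ minutes in at least two of the first three."""
    if len(season_minutes) < 3:
        return False
    return sum(m > 400 for m in season_minutes[:3]) >= 2

pool = [name for name, mins in players.items() if qualifies(mins)]  # ["Player A"]
```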

However, instead of looking at each comparison player’s NBA production to build my projections, I looked at how their statistical production in the NBA compared to what they did in college. To get an idea of NBA production beyond just a rookie season, I looked at players’ career numbers after three seasons. I expressed this as a ratio of production, NBA Production/NCAA Production, in each statistical category. For example, John Salmons averaged 33.7 minutes per game in college and 15.6 minutes per game during his first three NBA seasons, so his production ratio was .463.
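The ratio itself is simple division; here it is applied to the John Salmons minutes-per-game figures cited above:

```python
# Production ratio: NBA production divided by NCAA production in one category.
def production_ratio(nba_value, ncaa_value):
    return nba_value / ncaa_value

# John Salmons: 33.7 MPG in college, 15.6 MPG over his first three NBA seasons.
salmons_mpg_ratio = round(production_ratio(15.6, 33.7), 3)  # 0.463
```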

For each prospect I was building projections for, I then averaged the production ratios of their closest comparables in each category to project how their production should change in the NBA. It’s important to remember that these projections are not of career numbers, or of rookie-season numbers. They are projections of what a player’s career production should look like after three seasons, playing a reasonable number of minutes.
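The projection step, then, is an average of ratios multiplied by the prospect’s college numbers. In this sketch the ratios and college stats are invented; only the mechanics reflect the method described above:

```python
# Hypothetical NBA/NCAA production ratios for a prospect's three closest
# comparables, in two categories, plus the prospect's own college numbers.
comparable_ratios = {
    "mpg": [0.80, 0.70, 0.90],
    "ppg": [0.60, 0.75, 0.65],
}
college_stats = {"mpg": 32.0, "ppg": 18.0}

# Average the comparables' ratios in each category, then scale the
# prospect's college production by that average.
projection = {}
for category, ratios in comparable_ratios.items():
    avg_ratio = sum(ratios) / len(ratios)
    projection[category] = round(college_stats[category] * avg_ratio, 1)
# projection -> {"mpg": 25.6, "ppg": 12.0}
```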

Obviously, I wanted to test how accurate these projections were, so I went back and made retroactive projections for 25 randomly selected comparison players from my database. I tested projections based on the ten closest comparables, the five closest, and the three closest. I also tested projections that accounted for the percentage of a player’s comparables that actually met my parameters (three NBA seasons, with more than 400 minutes in at least two of them). The most accurate projections were those based on the three closest comparables, with no accounting for the percentage of players that met the parameters. That group had a correlation of -0.403 between the average similarity score of their three closest comparables and the accuracy of their projection, expressed as the average ratio difference between projected and actual production across statistical categories (the higher a player’s average similarity score, the lower the difference between their projection and their actual production).
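That check amounts to a Pearson correlation between each test player’s average similarity score and their average projection error. The score/error pairs below are invented stand-ins for the 25-player test sample; a negative coefficient, as in the -0.403 result, means closer comparables tend to produce smaller errors:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented example: average similarity score of each player's three closest
# comparables, and that player's average projection error.
avg_similarity = [920, 880, 850, 800, 760]
avg_error = [0.10, 0.14, 0.13, 0.20, 0.22]

r = pearson(avg_similarity, avg_error)  # negative: higher similarity, lower error
```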

I then also tested the reliability of the projections in each individual statistical category, taking an average of the difference between projection and actual production. Those averages are listed across the top of the table, just under each category heading. In total, 59.3% of the individual projections (player/category) fell within the average range of differences, and 38.2% of the individual projections were 90% accurate or more, with accuracy defined by the absolute value of the difference between actual production and the projection. Those numbers are not as strong as I would like, but strike me as stronger than random guesses. Now that the structure of the system is established, I can work on tweaking it for accuracy before next year’s draft.


It’s important to remember that the average variation for each category really reveals a range around the projection. For example, Jeremy Lamb projects to play 25.8 minutes per game. In my test sample, the average variation in minutes per game was 5.474 minutes, which means Lamb’s projection is really 20.3 MPG – 31.3 MPG. Obviously, I can’t point to any single number or group of numbers in this table and say it’s 100% accurate. But, assuming these numbers are reasonably close to what will actually occur, there are some interesting findings.
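Turning a point projection into that range is just the projection plus or minus the category’s average variation, as with the Lamb minutes figures above:

```python
# Convert a point projection into a (low, high) range using the category's
# average variation from the test sample.
def projection_range(projected, avg_variation):
    return (round(projected - avg_variation, 1), round(projected + avg_variation, 1))

# Jeremy Lamb: 25.8 projected MPG, 5.474 average variation in MPG.
lamb_mpg_range = projection_range(25.8, 5.474)  # (20.3, 31.3)
```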

One of the players who projects very strongly is Tyler Zeller. My system projects per 40 minutes averages of 18.7 points, 10.8 rebounds, and 46.9% shooting on two-pointers. Of this group, Zeller projects as the highest scorer, best offensive rebounder, and best free-throw shooter. These projections are built on the comparables Channing Frye, Al Horford and LaMarcus Aldridge. While Zeller may not become an offensive anchor in the starting lineup, like Horford or Aldridge, he looks like someone who could have a lot of value as a second or third offensive option, especially leading a second unit. For someone who is dropping into the middle of the lottery, that seems like a really nice value.

Moe Harkless is another player who looks like a really strong pick. His three closest comparables, Luol Deng, Rudy Gay and DeMar DeRozan, all played more minutes and took on a larger offensive role during their first three years in the NBA. For Harkless, my system projects per 40 minutes averages of 17.2 points, 7.3 rebounds, 1.6 assists, 1.3 steals and 1.0 block, shooting 46.7% inside the arc. With an average similarity score of 908, this is one of the statistical profiles that is more likely to be reliable.

Marquis Teague is another player my system projects as a good value. Most other draft projections built on statistics don’t think very highly of Teague and his underwhelming college statistical profile. However, since my projections are framed by the development of his comparables, he comes out looking rosy. Teague’s three closest comparables were Jordan Farmar, Russell Westbrook and Deron Williams. The comparisons to Westbrook and Williams are particularly intriguing because they also fit with Teague’s story: a player overshadowed by very talented college teammates, and a player with terrific physical tools that may translate better with experience in the NBA. Teague projects per 40 minutes averages of 14.1 points, 7.0 assists, 3.5 rebounds and 1.1 steals. Obviously he has some work to do as a shooter, projecting a 2PT% of 42.7% and a 3PT% of 31.4%. Still, for a player that can be had by a solid team at the end of the first round, there is potential to develop into a solid contributor.

Again these numbers are all projections, not certainties. I’m sure time will reveal me to be wrong on plenty of counts. Feel free to let me know what you think my system has missed.

  • EvanZ

    I’m skeptical of Zeller’s defense at the next level due to his short reach (8’8.5″). Even discounting that, Zeller benefits from playing all 4 years. He barely got minutes his freshman year. If he had come out after last year (he wisely did not), his projection would look much worse. His rebounding increased by a full 4 boards per pace-adjusted 40 minutes. Similarly, a player like Teague in his freshman season is difficult to judge. How much would a player like that improve if he played all 4 years?

    I guess what this all comes down to is that age+production+measurements should all be accounted for somehow.

    • Ian Levy

      Thanks for the comments Evan. Leaping into projections is a little uncomfortable for me, because there are so many variables to deal with. Age+Production+Measurements are all accounted for to some degree because they are components in the Similarity Scores. Are you saying that equation needs more balancing or presence in the projections?
