Around the Hall: NCAA unveils “NET” ranking system to replace RPI
Around the Hall is recommended reading from the Inside the Hall staff.
In a release on its website, the NCAA announced a new ranking system called “NET” that will replace the RPI:
The NCAA has developed a new ranking system to replace the RPI as the primary sorting tool for evaluating teams during the Division I men’s basketball season. The new ranking system was approved in late July after months of consultation with the Division I Men’s Basketball Committee, the National Association of Basketball Coaches, top basketball analytics experts and Google Cloud Professional Services.
The NCAA Evaluation Tool, which will be known as the NET, relies on game results, strength of schedule, game location, scoring margin, net offensive and defensive efficiency, and the quality of wins and losses. To make sense of team performance data, late-season games (including from the NCAA tournament) were used as test sets to develop a ranking model leveraging machine learning techniques. The model, which used team performance data to predict the outcome of games in test sets, was optimized until it was as accurate as possible. The resulting model is the one that will be used as the NET going forward.
Neil Greenberg of The Washington Post writes that the NCAA didn’t offer much transparency in how the rankings will be calculated:
Unfortunately, no specifics were given Wednesday as to how the new elements of the ranking are determined, or who would be calculating new statistical categories in the system, such as net offensive and defensive efficiency. RPI wasn’t perfect, but it was transparent, with several websites offering an RPI ranking list based on public-facing data.
Over at his personal site, John Gasaway of ESPN eulogized the RPI:
The Ratings Percentage Index, a misbegotten multi-sport statistic that mistakenly became an object of misplaced obsession for everyone connected with men’s college basketball, died Thursday at the age of 38. The death was announced by the metric’s lone sponsor and last surviving adherent, the NCAA.
No official cause of death had been announced by Thursday afternoon, though the RPI had long suffered from complications associated with chronic analytic confusion.
Dylan Burkhardt of UMHoops asks a fair question: Will NET be an improvement over the RPI? ($)
If you are building a predictive model, margin of victory needs to be included. Capping the margin of victory at 10 is a decision that doesn’t really make any sense. There’s a significant difference between a 10-point win and a 30-point win, even if feelings get hurt. Ken Pomeroy outlined this phenomenon quite eloquently in a well-researched blog post back in 2013.
If the plan is to build a predictive model, build the best predictive model. Don’t limit the model due to “sportsmanship”.
I do wonder whether the inclusion of net efficiency could be a backdoor to still account for these wider-margin games. There’s no mention of a cap on tempo-free efficiency, which could lead to the same result.
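Burkhardt’s point is easy to see with numbers. The following is a minimal, hypothetical Python sketch — the function names, the possession count, and the cap value are all illustrative; the NCAA has not published the actual NET formula:

```python
# Hypothetical illustration of the capping issue. Nothing here is the
# NCAA's actual (undisclosed) NET calculation.

def capped_margin(points_for, points_against, cap=10):
    """Margin of victory, clipped to +/- cap (the 'sportsmanship' cap)."""
    margin = points_for - points_against
    return max(-cap, min(cap, margin))

def net_efficiency(points_for, points_against, possessions):
    """Tempo-free net efficiency: point differential per 100 possessions,
    with no cap applied."""
    return 100.0 * (points_for - points_against) / possessions

# A 10-point win and a 30-point win look identical under the cap...
print(capped_margin(80, 70))   # 10
print(capped_margin(100, 70))  # 10

# ...but stay clearly distinguishable on a per-possession basis
# (assuming a 70-possession game for illustration).
print(round(net_efficiency(80, 70, 70), 1))   # 14.3
print(round(net_efficiency(100, 70, 70), 1))  # 42.9
```

If uncapped efficiency feeds into the model, the information the cap throws away re-enters through the back door, which is exactly the possibility Burkhardt raises.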
Matt Norlander of CBS Sports writes that NET is a major step forward:
And, as betting markets and empirical data have shown over the years, predictive models tend to carry with them big-picture accuracy. In order for the NCAA to definitively declare that it was making an evolutionary step forward in how it ranks and sorts its teams for NCAA Tournament inclusion, going the predictive-model route was a must.
So the NCAA should be commended for making its leap, even if it’s about a decade overdue. In what comes as a mild disappointment, though, the NCAA opted out of using a composite of multiple predictive metrics. That would have made for an even more accurate echelon — but potentially complicated matters for the NCAA because it did not have ownership or control of each individual metric it would have been outsourcing for such a master ranking.
ESPN’s Joe Lunardi is optimistic about the NET rankings:
Hopefully NET turns out to be as advertised: A model that optimizes both performance results (e.g., “most deserving” teams) and predictive data (an objective version of the so-called “eye test”) to rank a widely disparate Division I more accurately. Presumably, if not already, the formula should be detailed to the 32 Div. I conferences — for scheduling and other evaluation purposes — as well as its rankings made public to fans throughout the course of the season.
In a perfect world, such a significant change would have been announced long before schools completed their scheduling for a new season, but NCAA and “perfect world” are rarely used in the same sentence. What matters more is some kind of commitment from the committee that it is going to consistently apply its new tool for a foreseeable period of time.
Rob Dauster of NBC Sports writes that a new ranking system was a long time coming:
There is a lot to take in here, and to avoid getting too deep into the weeds when it comes to the nerdy part of the analytics, this is what you need to know: This metric will be impacted both by predictive metrics like KenPom and by purely results-based metrics like the RPI. The difference is subtle but important. Predictive metrics are generally based on things like efficiency and are not as impacted by something like a buzzer-beater going in and changing the outcome. Results-based metrics are, obviously, as they change the result of the game even if it shouldn’t impact how good you think either team is.
Why is it important to include both?
Because we want those buzzer-beaters to matter, right? That’s why it’s worth getting so excited when they go in. Winning needs to matter, otherwise there’s no point in playing the game. But losing a nail-biter is not the same as getting whipped by 25. That should matter, too. I’m glad both will be factored in.
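The hybrid Dauster describes can be sketched as a simple weighted blend. To be clear, the weights and scores below are invented for illustration; the NCAA has not said how (or whether) NET weights its components:

```python
# Hypothetical sketch of blending a results-based score with a predictive,
# efficiency-based score. The 50/50 weighting is an assumption, not NET's.

def blended_rating(results_score, predictive_score, w_results=0.5):
    """Weighted combination: wins and losses count via results_score,
    margins and efficiency count via predictive_score."""
    return w_results * results_score + (1 - w_results) * predictive_score

# Two teams with identical records (same results-based score):
# Team A wins its games at the buzzer, Team B wins comfortably.
team_a = blended_rating(results_score=0.75, predictive_score=0.55)
team_b = blended_rating(results_score=0.75, predictive_score=0.90)

print(team_a, team_b)  # 0.65 0.825
```

The blend separates the two teams (Team B rates higher on the strength of its margins) while the buzzer-beater wins that produced those identical records still count fully in the results half — which is the balance Dauster is welcoming.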
(Photo credit: Drew Angerer/Getty Images North America)