Say goodbye to RPI. The NCAA just announced that it is getting replaced with a metric that actually tries to tell you how good teams are. Imagine that.
The new metric will be called NET, which I’m a huge fan of because a net is also the thing they put on basketball hoops. It makes sense that way. I think every acronym in sports should be pigeonholed into spelling out a thing that applies to its sport. Instead of WAR, make it BAT for Best At Teamwork. Instead of YAC, make it HIKE for Huffingandpuffing Inthewakeof Katching Eggshapedball. Things like that.
You could, in theory, read the announcement article yourself and draw your own conclusions. Or you could save yourself the risk of having a dumb opinion that gets you laughed away from the water cooler and let me shovel my takeaways into your gullet. Basically, keep reading if you want to be cool.
To start, let’s introduce the intricate system that was RPI. It involves a series of complex iterations and a reinvention of statistics as we know it. Brace yourself; there is a lot of math to follow.
RPI = 0.25 × (Winning %) + 0.50 × (Opponents’ Winning %) + 0.25 × (Opponents’ Opponents’ Winning %)
Wait, did I say complicated? Sorry, I meant to say that if the NCAA asked Jay from Clerks to rant about college basketball, he could come up with a better formula than this.
Look at the multipliers there. Look at them again. Again. The winning percentage of your opponents’ opponents has the exact same weight as the winning percentage of the actual team you’re evaluating. The system was in desperate need of change.
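For the morbidly curious, the entire “intricate system” fits in one line of code. The record and schedule numbers below are made up purely for illustration:

```python
def rpi(wp, owp, oowp):
    """RPI from its three published components: a team's winning %
    (weighted 25%), its opponents' winning % (50%), and its
    opponents' opponents' winning % (25%)."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical 20-10 team with a soft schedule (numbers invented).
print(round(rpi(wp=20 / 30, owp=0.480, oowp=0.510), 4))  # → 0.5342
```

Note how the hypothetical team’s own 20-10 record barely moves the needle next to the schedule terms, which is exactly the complaint about those multipliers.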
So What Does This One Do?
The NCAA Evaluation Tool, which will be known as the NET, relies on game results, strength of schedule, game location, scoring margin, net offensive and defensive efficiency, and the quality of wins and losses. To make sense of team performance data, late-season games (including from the NCAA tournament) were used as test sets to develop a ranking model leveraging machine learning techniques. The model, which used team performance data to predict the outcome of games in test sets, was optimized until it was as accurate as possible. The resulting model is the one that will be used as the NET going forward.
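The NCAA hasn’t published the actual model, so take this as a heavily hedged toy sketch of the approach they describe: generate some games, fit prediction weights on the early ones, and measure accuracy on a late-season hold-out. Every feature, weight, and number below is an illustrative guess, not the NCAA’s recipe:

```python
import math
import random

random.seed(0)

def make_game():
    # Two made-up inputs per game: the efficiency gap between the
    # teams and a home-court indicator. The "better" team usually wins.
    eff_diff = random.uniform(-20, 20)   # net efficiency gap, pts/100 poss
    home = random.choice([-1, 1])        # game location
    logit = 0.15 * eff_diff + 0.6 * home
    won = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return (eff_diff, home), won

games = [make_game() for _ in range(2000)]
train, test = games[:1600], games[1600:]   # test = "late-season" games

# Fit logistic-regression weights by gradient descent on the early games.
w = [0.0, 0.0]
for _ in range(300):
    for (x1, x2), y in train:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2)))
        w[0] += 0.01 * (y - p) * x1
        w[1] += 0.01 * (y - p) * x2

# Score the fitted model on the held-out late-season games.
accuracy = sum(
    ((w[0] * x1 + w[1] * x2 > 0) == (y == 1)) for (x1, x2), y in test
) / len(test)
print(f"hold-out accuracy: {accuracy:.2f}")
```

The point isn’t the specific model; it’s the workflow the NCAA described — tune on known results, validate on held-out late-season games, keep the version that predicts best.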
The things that stand out to me are:
-The inclusion of “net offensive and defensive efficiency.”
-The fact that they tested the model on last season’s results to gauge how accurately it predicts future ones. In other words, they didn’t conjure a formula out of thin air, and they used the advanced concepts that Ken Pomeroy and Co. have been working with for years. More on this later, though.
Also, remember the quadrant system from last year? That’s not going anywhere, so hope you didn’t forget it.
One negative I see is that they felt the need to distinguish margin of victory from efficiency margin. Those... are basically the same thing. One just has more math, so there’s a risk of double-counting score differentials.
The other is that NET won’t take the specific margin of victory into account beyond 10 points. Basically, it’ll see the difference between a 5- and a 9-point victory, but 15- and 19-point margins will be treated the same.
I get the concept. The NCAA doesn’t want John Calipari keeping his foot on UMKC’s throat (#RooUp) with a minute left because he wants to win by 45 instead of 40, but 10 seems a little low to me. I’d cap it at 15. That’s, realistically, at least a 6-possession game instead of 4.
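A sketch of what that cap looks like in code — the `cap` parameter is my addition so you can swap in my 15-point version:

```python
def capped_margin(margin, cap=10):
    """Clamp a game's scoring margin to +/- cap before the model
    sees it. 10 is the NCAA's figure; try cap=15 for my version."""
    return max(-cap, min(cap, margin))

print(capped_margin(9))    # → 9: a 5- vs 9-point win still registers
print(capped_margin(19))   # → 10: but 15- and 19-point wins look identical
print(capped_margin(15))   # → 10
print(capped_margin(-23))  # → -10: blowout losses get capped too
```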
This also could convince coaches to skew toward a faster style of play. If there are more possessions available in a game, there are more opportunities to cash in any per-possession edge. If every other stat is equal and you post a 55% eFG% while giving up a 50% eFG%, that edge is worth roughly a tenth of a point per possession; at Savannah State’s breakneck pace (around 80 possessions) that compounds into roughly an 8-point win, while at Virginia’s crawl (around 60) it’s closer to 6. It will behoove coaches to pick up the pace to inflate that final margin.
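Here’s the back-of-the-envelope math, treating 2 × eFG% as points per possession — a rough assumption that ignores turnovers, free throws, and rebounds, with ballpark pace numbers:

```python
def expected_margin(possessions, efg_for, efg_against):
    # Rough model: assume each possession ends in one shot, and
    # 2 * eFG% approximates points per shot, so the per-possession
    # edge is 2 * (eFG% for - eFG% against). This is a sketch of
    # how margin scales with pace, not a full scoring model.
    return possessions * 2 * (efg_for - efg_against)

fast = expected_margin(80, 0.55, 0.50)   # up-tempo, Savannah State-ish pace
slow = expected_margin(60, 0.55, 0.50)   # a Virginia-style crawl
print(round(fast, 1), round(slow, 1))    # same edge, bigger margin when fast
```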
Wait, So This Idiot Already Found A Way To Game The System? Does That Mean We Can’t Trust It?
So here’s the thing.
The NCAA isn’t releasing the “formula” to us. There’s been a lot of hullabaloo about that among the college basketball intellectuals who have the interest, time, and ability to piece the programming together, because they can’t be totally sure how accurate the metric is unless they know exactly what’s in it.
I really do see the point they’re making, and I absolutely understand the hesitancy to trust the NCAA with anything beyond breaking a 20 (and that’s pushing it), but simplicity should matter too. As long as they’re transparent about the aspects of scheduling and gameplay that matter most, I’m fine with not knowing everything. Either we trust the NCAA with the information, or we trust the select handful of people who could interpret it if it were made public. I don’t love either option, but we shouldn’t be hanging our hats on one metric anyway.
What does give me pause is that we won’t get to see what every team’s ranking would have been last year. That sets a bad precedent for how the NCAA treats its proprietary information, and it keeps us from gauging how accurately NET measures team quality. Teams are going to need that feedback to schedule and coach more effectively. This especially affects mid-majors like Loyola-Chicago, who need to know how hard to push their squads in the non-conference schedule in order to secure an at-large bid, should they need one.
To keep things in perspective a little bit: RPI was such a mess, and it seemed like we were stuck with it forever. The mere fact that the selection committee is even allowing the word “efficiency” to be uttered without a cacophony of hissing and shrieking is a minor miracle. There will still be subjective calls that make you scratch your head, but the foundation of those decisions will at least rest on a more advanced way of thinking. I’m optimistic about this move.