When I started doing this whole website thing, preseason ratings seemed unattainable. I first did them for the 2010-11 season, which seemed like an incredible breakthrough, and now they seem even more unattainable with all of the roster movement. Throw in the uncertainty of players using their free covid year and a myriad of pending transfer waivers, and this season’s ratings were the biggest challenge yet.
Like the H.U.M.A.N. poll, the computer has identified a top tier. It’s a clear top five, which matches the humans’ top seven minus Duke and Michigan State. The computer is not impressed with your brand name.
Just how clear is this break? Well, #6 Arizona’s rating is closer to #14’s than to #5’s. If any of the top five aren’t safe NCAA tournament teams, it would be a catastrophic algorithmic failure. Beyond that, anything is possible.
What do I mean? Well, when using these ratings it’s important to understand their predictive limits. I’ve always loved this piece from Phil Birnbaum on the limits of baseball predictions. He calls it the “speed of light” because just like you cannot go faster than the speed of light, you can’t do better than the theoretical limit of prediction.1
I don’t know what the speed of light is for preseason ratings, but the out-of-sample RMSE for my algorithm is 5.4 points of adjusted efficiency margin.2 Assuming roughly normal errors, that means a team’s final AdjEM will land within 5.4 points of its preseason AdjEM about 2/3 of the time.
So, for instance, there’s a 2-in-3 chance that Baylor’s final rating lands within 23.13 +/- 5.4, or between 17.73 and 28.53. Based on last year’s final ratings, that’s anywhere from 2nd to 26th. And there’s a 1-in-3 chance they’ll fall outside that range!
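If you want to play along at home, here’s a minimal sketch of that arithmetic in Python. It assumes (as above) roughly normal forecast errors; the function name is mine, and the only real inputs are the 5.4 RMSE and Baylor’s 23.13 preseason rating.

```python
import math

def prediction_interval(preseason_adjem, rmse=5.4, z=1.0):
    """One-sigma (z=1) prediction interval around a preseason AdjEM.

    Assumes forecast errors are roughly normal with mean zero, so
    z=1 covers about 68% -- the "2-in-3" chance mentioned above.
    """
    lo, hi = preseason_adjem - z * rmse, preseason_adjem + z * rmse
    coverage = math.erf(z / math.sqrt(2))  # P(|error| <= z * RMSE)
    return lo, hi, coverage

lo, hi, p = prediction_interval(23.13)  # Baylor's preseason AdjEM
print(f"{p:.0%} chance the final AdjEM lands in [{lo:.2f}, {hi:.2f}]")
# -> 68% chance the final AdjEM lands in [17.73, 28.53]
```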
Which (a) makes some sense and (b) highlights how silly it is to discuss a team’s future with so much certainty this time of year. Like, I get why people have to be overly certain in their takes. It’s how society functions. If you don’t play by those rules you end up with a life of coding websites at home in your jammies. But it’s a misrepresentation of what we don’t know before the games are played.
As far as the guts of my system go, I include the last five seasons of team data and two seasons of conference data (using the current season’s membership), plus returning production, transfers, and notable freshmen, along with coaching changes. Independent forecasts are made for offense and defense.
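For a sense of how those pieces could fit together, here’s a toy sketch. To be clear: the feature set mirrors the description above, but every weight, coefficient, and function name here is invented for illustration and is not my actual model. The one structural detail it does reflect is that offense and defense get separate forecasts.

```python
from dataclasses import dataclass

@dataclass
class TeamInputs:
    past_adjo: list[float]   # last five seasons of adjusted offense
    conf_adjem: list[float]  # two seasons of conference AdjEM (current membership)
    returning_pct: float     # share of last season's production returning
    transfer_value: float    # aggregate value of incoming transfers
    freshman_value: float    # aggregate value of notable freshmen
    new_coach: bool          # coaching change this offseason

def forecast_offense(t: TeamInputs) -> float:
    # Recency-weighted team history; these weights are made up.
    w = [0.40, 0.25, 0.15, 0.12, 0.08]
    adjo = sum(wi * x for wi, x in zip(w, t.past_adjo))
    # Nudge toward conference strength, then credit roster continuity
    # and incoming talent. All coefficients are hypothetical.
    adjo += 0.10 * sum(t.conf_adjem) / len(t.conf_adjem)
    adjo += 4.0 * (t.returning_pct - 0.5)
    adjo += t.transfer_value + t.freshman_value
    if t.new_coach:
        adjo -= 1.0  # hypothetical penalty for a coaching change
    return adjo

# A forecast_defense() would be fit separately with its own weights;
# the published AdjEM is forecast_offense minus forecast_defense.
```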
Transfers are a bit of an issue this season. While other systems seem to assume all waivers are going to be granted, the ratings here assume pending waivers will not be granted.3 In some cases, this makes little difference in the ratings (Efton Reid to Wake Forest, for example), but in cases like Cam Hayes to East Carolina, Jaylon Tyson to Cal, and Aziz Bandaogo to Cincinnati, the difference is significant.
There seems to be a veneer of seriousness from the NCAA regarding waivers for two- (or three-) time transfers these days, so a denial appears more likely than not.
Plus the snail’s pace at which the NCAA is ruling on these waivers suggests that at least a few of these decisions may linger beyond the start of the season. Undoubtedly, I’ve missed a few waiver cases and we’ll have a ratings update or two before the season to correct errors and roster changes.
For injury situations, I remove players who aren’t expected to be ready until conference play. This can be a guessing game given increased secrecy by coaches. Notably, Tolu Smith and Zach Freemantle are excluded from the projections.
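Taken together, the two roster rules reduce to a simple filter, sketched below. The Player fields and the sample roster entries are invented for illustration; only the rules themselves come from the discussion above.

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    times_transferred: int = 0
    waiver_status: str = "n/a"       # "granted", "pending", "denied", "n/a"
    expected_return: str = "opener"  # "opener" or "conference_play"

def eligible_for_projection(p: Player) -> bool:
    # Multi-time transfers with pending waivers are assumed denied,
    # per the discussion above.
    if p.times_transferred >= 2 and p.waiver_status != "granted":
        return False
    # Injured players not expected back until conference play are
    # excluded entirely (e.g., Tolu Smith, Zach Freemantle).
    if p.expected_return == "conference_play":
        return False
    return True

roster = [
    Player("Aziz Bandaogo", times_transferred=2, waiver_status="pending"),
    Player("Tolu Smith", expected_return="conference_play"),
    Player("Healthy Returner"),
]
print([p.name for p in roster if eligible_for_projection(p)])
# -> ['Healthy Returner']
```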
Other fun notes:
The most bizarre ranking from the computer is McNeese State at #187. It’s not crazy to make McNeese the favorite in the Southland. The bar is pretty low there and Will Wade has brought in enough talent to clear that. But the Cowboys haven’t cracked the top 200 since 2002, so this is a bananas call. On a related note, one of the more intriguing games on opening night is McNeese State at VCU. Two new coaches (and one not actually on the sidelines) with new rosters. You can hardly know what to expect.
The cringiest ranking is Saint Mary’s at #38. The system values transfers these days, and “returning everybody” isn’t what it used to be. Saint Mary’s doesn’t add any transfers of note, and they don’t actually return everybody. There’s a conference effect in the algorithm as well,4 and losing BYU from the WCC just makes that effect worse for the Gaels. But Saint Mary’s hasn’t finished lower than 38th since 2015 (save for the covid season), and the H.U.M.A.N. poll (19th) figures to be closer to the truth.
Coaching changes matter in the ratings and I’m assuming Jared Grasso is out at Bryant. It seems like a safe assumption. This costs them a few spots in the ratings and makes UMass Lowell the clear choice for the second-best team in America East. There’s your America East report.
BYU was picked next to last in the Big 12 by its coaches but is rated 8th in the league here. One thing to keep in mind is that last season’s team posted the best point differential in league play by a team with a losing conference record since at least 1997.5 All of their conference wins were by double digits, while all but one of their losses were by single digits. Yes, it was the WCC, but the takeaway is that the Big 12 is not getting a complete doormat this season. It’s also worth noting that UCF had the 7th-best point differential since 1997 for a team with a losing conference record. They were picked last in the league, but they should win a few games.
1. Well, you can in the short run, but it would be the result of luck and not skill.
2. And that includes seasons back to 2006, when player movement was much more restricted than today. We should expect that error to increase going forward. I’m a supporter of fewer restrictions on player movement for humanitarian reasons, but the fact that it makes the sport less predictable is a bonus.
3. Which seems like the way to go since these players aren’t eligible.
4. The conference effect is incredibly useful for the vast majority of teams.
5. Which is as far back as my data goes.