The AP Poll is an unavoidable part of college football fandom. For most of the season, it generates little numbers that go next to team names on scoreboards and schedules, and people spend a lot of time each Sunday afternoon arguing about which numbers are too big or small.
That arguing often includes calling the poll inherently harmful, biased, irrelevant, and so forth. None of those charges quite fits, except perhaps in some edge cases. I don't think the poll actually damages anything (more on that below), the biases of individual voters usually balance out, and it's clearly still relevant, just in different ways than before.
That’s not to say the AP Poll is beyond reproach. A more accurate poll would come out much later in the week, since voters have little time before Sunday morning to watch extra games, review dozens of box scores, or compare computer ratings (most computers don’t even publish rankings before AP ballots are due). Adding voters not of the newspaper persuasion would also be ideal, though college football has tried all sorts of other compositions that haven’t really lasted. And we might like to see advanced stats explicitly mixed in, though everyone hated the last time we tried that.
The AP Poll hasn't been part of the official title structure since 2004, and that's for the best, but it still offers plenty of value. Let's take a roundabout approach to explaining why.
College football has so few consistent historical standards. We don’t even agree from era to era on how to decide a champ, let alone who the champs were.
For one illustration, look at how much of the Wikipedia article on sports rating systems is about people trying to figure out just one sport: college football.
- Since 1933, NFL teams have had one clear chain of goals: win your division, keep winning in the postseason, and win the title game.
- Meanwhile, for the majority of college football’s history, going undefeated might not even guarantee you a bowl game, and even winning one of those might not count for anything.
While we can see the 1933 Chicago Bears were entirely deserving champs, based on dominating their half and then beating the other half’s best team, major CFB has lacked that clarity since the 1870s ended.
This is ultimately because we have such a sprawling array of teams, ranging over the years from two in the entire sport to 130 just at the highest level. In most sporting leagues, everybody plays almost everybody, while no FBS team can ever play significantly more than 10% of its alleged peers. But the many attempts to make some sense of that sprawl have only added to its historical complexity.
Outside of conferences, we can’t rank teams by standings (and sometimes, even conference standings make no sense). We have to rank by guessing.
For well over a century, people have been guessing about how to compare college football teams that haven’t played each other.
We’ve had various media polls since the turn of the last century, when we had about 40 teams, but plenty of sportswriters and historians debated national champs throughout the 1800s.
Computer ratings are also nothing new. In 1926, a Chicago clothing company paid an Illinois professor to produce crude mathematical rankings; Knute Rockne then asked for those rankings to be extended back to 1924, and that is how Notre Dame claimed its first-ever national title.
These things come and go. For just a taste, look at the NCAA's official list of "selectors," aka the FBS team-rankers it recognizes as existing, even though it doesn't endorse any. Note the randomly overlapping ranges and short life spans of many of these selectors.
So for most of CFB history, we only have one historical gold standard. Yes, it’s also based on guessing, but that is unavoidable.
The AP Poll has barely changed since 1936. It added a preseason poll in 1950 and a post-bowl final poll in 1968, and it grew to a top 25 along the way. Otherwise, it's been the same idea since day one.
That means we can go back and compare teams from the same year or from different years, based not only on what happened in limited head-to-head samples, but also on where teams stood in the public hive mind at the time. If we'd been alive in 1938, most of us very likely would've ranked that year's teams similarly to how the AP did.
(Note: the Dunkel Index is the NCAA's other selector that's been with us for more than half of the ride. Any good computer rating can go way back into the vaults; I like Sports Reference's SRS for this, all the way back to 1869. But don't forget the value in seeing what actual humans thought.)
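For anyone curious what a rating like SRS is actually doing under the hood, here's a minimal sketch, not Sports Reference's actual code: each team's rating is its average point margin plus the average rating of its opponents, solved here by simple iteration. The three-team schedule is entirely invented for illustration.

```python
# Simple Rating System (SRS)-style sketch: a team's rating equals its
# average scoring margin plus the average rating of its opponents.
# Hypothetical schedule: (team_a, team_b, team_a's margin of victory).
games = [
    ("A", "B", 7),    # A beat B by 7
    ("B", "C", 3),    # B beat C by 3
    ("C", "A", -10),  # A beat C by 10
]

teams = {t for g in games for t in g[:2]}
ratings = {t: 0.0 for t in teams}

for _ in range(1000):  # fixed-point iteration until ratings settle
    new = {}
    for t in teams:
        margins, opps = [], []
        for a, b, m in games:
            if a == t:
                margins.append(m); opps.append(b)
            elif b == t:
                margins.append(-m); opps.append(a)
        new[t] = sum(margins) / len(margins) + sum(ratings[o] for o in opps) / len(opps)
    # recenter so the league average rating is zero
    mean = sum(new.values()) / len(new)
    ratings = {t: r - mean for t, r in new.items()}

print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```

Real SRS implementations solve the same equations as one linear system rather than iterating, but the idea is identical: margin of victory, adjusted for who you played.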
The AP is flawed and will always have some degree of groupthink, but having a single outlet that provides steady pictures of what the group thought at each point is valuable.
When the AP crowned 1983 Miami over a superior Auburn that'd handled a far harder schedule and held a lopsided transitive win over the Canes, it left a time capsule for us to open years later. Now, at a glance, we can see the narrative power of the final play against Nebraska, the Canes' season spent quietly stalking the Huskers, and Miami's rags-to-riches arc.
When you combine that time capsule with what smart math and clinical historians later added (I tend to side with those who favor Auburn here), you have a complete story. It’s not like Miami was a bad human choice! Miami was a revealing human choice that communicated a lot about how people viewed a season in real time.
Now here’s the real fun. We can use the top 25’s archives not just to find the team the media liked most.
We can use it to help point us toward the biggest game in Tulsa history (the 1942 Golden Hurricane fell one late touchdown against Tennessee short of a legit natty claim), the most overrated team of 1979 (Michigan State), the most difficult-to-rate school in the country (without question, it is Auburn), the most consistent team of the 1990s (you've heard FSU's AP Poll stats many times), and so on forever.
Best of all: it doesn’t even matter! Consequence-free history!
Despite how much we yell about preseason rankings, they have roughly zero demonstrable bearing on the College Football Playoff rankings that debut months later. I compared several years of preseason AP rankings with the CFP committee's subsequent rankings, finding several teams that would've been ranked very differently if the committee had actually been under the AP's lingering spell.
In fact, every year, you can see the AP’s rankings reacting to the CFP’s, rather than the other way around.
(Of course, we still need things like the AP to produce final season rankings, since the committee disappears once bowls are set.)
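If you want to eyeball that kind of preseason-to-December drift yourself, a sketch like this works. The team names and rankings below are invented for illustration, not real poll data.

```python
# Hypothetical comparison of a preseason AP-style ranking with a later
# CFP-style ranking, measuring how far each team moved between them.
preseason_ap = ["Alabama", "Clemson", "Georgia", "Ohio State", "Oklahoma"]
december_cfp = ["Georgia", "Michigan", "Alabama", "Cincinnati", "Ohio State"]

def rank_moves(before, after):
    """Map each team to its movement between two ordered rankings.

    Positive = rose, negative = fell, None = appears in only one list
    (dropped out entirely, or unranked in the preseason).
    """
    moves = {}
    for team in set(before) | set(after):
        if team in before and team in after:
            moves[team] = before.index(team) - after.index(team)
        else:
            moves[team] = None
    return moves

print(rank_moves(preseason_ap, december_cfp))
```

Run this on enough real seasons and the pattern the article describes shows up: lots of large moves and `None`s, i.e., the preseason list tells you little about the December one.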
So the AP's best use is not as a thing to argue about every seven days. The small week-to-week fluctuations do not matter.
An AP spot is not a unit of defined objective value, like length or volume or temperature. For that, we have things like advanced analytics and, you know, wins. Instead, it's an attempt to quantify subjective value in exactly the same way as in 1961 or 2006 or any other year, giving a chaotic sport one of its very few constants.
Now that the Playoff committee’s December rankings are the only ones that matter during the season, the AP is a log of national narratives along the way. If Alabama is overrated this week according to the computers and Playoff committee and your eyeballs, then that is information for us to parse.
I think this is one of many examples of college football things that can make you mad if you look at them too closely, but that can make you smart if you zoom out a little.