The original BCS formula was revealed on June 9, 1998, ending 129 years of college football arguments with cold, hard math. Half of that sentence is a lie.
The original formula combined four factors into one ranking. Let’s review how each changed during the life of the BCS and how each would help shape the eventual College Football Playoff committee’s methods.
1. The polls
*The average ranking of the AP and USA Today/ESPN polls.
Polls had been ranking college football teams for 62 years and deciding champions for decades, and those were the sport’s two biggest polls. This made sense.
How the BCS constantly fiddled around with this part: The AP would ask out after 2004, when there’d been five unbeaten teams and only two title game spots, plus Texas coach Mack Brown calling for voters to give his team a Rose Bowl bid over Cal (voters did, arguably fairly). Reporters didn’t feel they should be responsible for influencing the sport they covered.
The AP would be replaced by the Harris Poll, an invention by some marketing company. No one has ever intentionally paid attention to it.
The Coaches Poll remained throughout, which is the funniest part, since it meant athletic departments voting on their own postseasons. Its title trophy — the crystal football — was guaranteed to the BCS’ winner. The AP’s wasn’t, so in 2003, LSU and USC split the two, something the BCS was supposed to have eradicated five years earlier.
Also after 2004, ESPN would drop from the Coaches Poll’s name, leaving USA Today as the only major media outfit putting its name into the rankings’ construction.
What the Playoff committee learned here: Humans wanted humans in charge, but preferably humans who don’t have skin in the game. The committee publishes an annual list of its members who can’t weigh in on certain teams due to financial/familial ties (which helps spotlight the committee’s lack of mid-major representation).
2. The computers [laser gun noises]
This was the big revolution. Unfeeling machines, combined with wise human eyeballs, giving us a perfect cyborg of rankings.
*A formula based on the average of the computer-generated rankings produced by The Seattle Times, Jeff Sagarin for USA Today and The New York Times.
How the BCS constantly fiddled around with this part: Sagarin remained all the way through 2013, but the two papers’ computers would be replaced by formulas similar to Sagarin’s: the Anderson & Hester, Billingsley, Colley, Massey, and Wolfe. Others would cycle in and out.
The NYT, for one example, was dropped because coaches and admins didn’t like that it relied on margin of victory, one of the best indicators of team quality. The BCS didn’t want to encourage teams to beat up on weak opponents. The BCS also asked longtime contributors like Billingsley and Massey to neuter their numbers by taking out margin of victory.
“We didn’t think it was right and the coaches didn’t want a system where the more you score, the higher you rank,” former BCS exec Grant Teaff said in 2001.
Fun fact: those who ran the computer components could’ve toyed with the numbers however they wanted. Each ranking was pretty much just some dude making a spreadsheet, then telling the BCS what the spreadsheet said. One time, the Colley rankings neglected to include an FCS game, resulting in four FBS teams being ranked incorrectly in the actual BCS standings. Imagine if one had ranked #2 instead of #3!
What the Playoff committee learned here: No stats that can’t be explained by normal humans. All numbers must be provided to committee members in-house, not via some fella’s calculator in Delaware. And all numbers are for the edification of humans, who make the final calls.
That’s the messaging, at least.
Here’s the most college football sentence of the day: the new numbers, which the public can’t see, are transparent and easy to understand, according to the people who see them.
We barely know which stats the committee uses, and the ones we hear about — like the famous “game control,” which former chairman Jeff Long basically made up on air, because it wasn’t the same thing as ESPN’s Game Control stat — aren’t explained. The suits are still delicate about margin of victory, too.
The computers were often the thing people hated the most about the BCS. They’re now a thing people miss about the BCS. At least we knew the names of the mysteries that were going into the rankings, and at least we could see whether team #8 was just barely or hopelessly behind #7.
3. Basketball arithmetic
*A calculation derived from the cumulative won-lost records of the team’s opponents and those of the team’s opponents’ opponents.
That’s RPI, the strength-of-schedule metric also used for years by the NCAA’s basketball committee.
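That basketball math is simple enough to sketch at home. Here’s a hedged illustration, not the BCS’s actual code; the two-thirds/one-third weighting matches the commonly reported BCS schedule-strength formula, but treat the exact weights (and function names) as assumptions.

```python
# Hedged sketch of a BCS-style strength-of-schedule number:
# opponents' winning percentage blended with opponents'-opponents'
# winning percentage. Weights are the commonly reported 2/3 and 1/3,
# but they're an assumption here, not a quote from the BCS manual.

def win_pct(record):
    """record is a (wins, losses) tuple."""
    wins, losses = record
    games = wins + losses
    return wins / games if games else 0.0

def strength_of_schedule(team, schedules, records,
                         opp_weight=2 / 3, opp_opp_weight=1 / 3):
    """schedules maps team -> list of opponents; records maps team -> (W, L)."""
    opponents = schedules[team]
    # Opponents' combined winning percentage
    opp_pct = sum(win_pct(records[o]) for o in opponents) / len(opponents)
    # Opponents' opponents' combined winning percentage
    opp_opps = [o2 for o in opponents for o2 in schedules[o]]
    opp_opp_pct = sum(win_pct(records[o2]) for o2 in opp_opps) / len(opp_opps)
    return opp_weight * opp_pct + opp_opp_weight * opp_opp_pct
```

The point: no secret algorithm required, just everyone’s win-loss records, which is exactly why anyone could check this part of the formula.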
How the BCS constantly fiddled around with this part: It would drop out of the formula after the 2004 controversy.
What the Playoff committee learned here: I don’t know, though they do use raw-math stats like this, the kind someone could calculate at home without knowing a secret algorithm. You’ll hear “record vs. currently ranked teams” and “record vs. current top-10 teams” every year.
They then rank mid-majors about 10 spots lower than what these stats call for, if you compare big-picture average rankings in the CFP to where those teams usually land in a simple résumé assessor like CPI (or something more advanced, like the Massey computer composite). That’s been the one obvious flaw with the CFP rankings since 2014: non-powers rank worse in the official top 25 than they do in almost literally any other ranking.
4. Brute-force math, added to make sure some five-loss team didn’t goof its way into the title game
*The team’s record. Each loss adds 1 point to the total.
You might not remember this BCS component. I’d forgotten it! Kinda encouraged teams to schedule lightly, right?
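With all four original components on the table, the 1998 formula can be sketched in a few lines. This is an illustration under assumptions: the widely reported version summed the poll average, the computer average, the strength-of-schedule rank divided by 25, and the loss count, with the lowest total ranked #1. Treat the /25 divisor and the function name as hypothetical details, not gospel.

```python
# Hedged sketch of the reported original 1998 BCS composite:
# lower score = better. The /25 divisor on the schedule-strength
# rank is an assumption based on common descriptions of the formula.

def bcs_score(poll_ranks, computer_ranks, sos_rank, losses):
    poll_avg = sum(poll_ranks) / len(poll_ranks)          # AP + Coaches
    computer_avg = sum(computer_ranks) / len(computer_ranks)
    return poll_avg + computer_avg + sos_rank / 25 + losses

# A team ranked #1 in both polls, averaging #2 in the computers,
# with the #5-toughest schedule and one loss:
# bcs_score([1, 1], [2, 3, 1], 5, 1) -> 1 + 2 + 0.2 + 1 = 4.2
```

Notice how small the schedule term is next to a whole point per loss, which is the scheduling incentive the section above is joking about.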
How the BCS constantly fiddled around with this part: It too would leave after 2004.
What the Playoff committee learned here: When you boil it down, number of losses is still — years after the BCS ended — the most important factor for power-conference teams, as 2016 Penn State and 2017 Ohio State would attest.
(Still, the committee’s showed some willingness to stray beyond strict win-loss clustering, like when it dropped a deceptively unbeaten Florida State to #3.)
One corollary is the list of stuff the committee says it uses as tiebreakers between teams it likes equally: conference titles, head-to-head results, performance vs. common opponents, and strength of schedule. Knowing college football fans care about those things, the committee talks them up and applies them in unknown ways, even though they rarely (but sometimes) outweigh losses.
The early-aughts controversies led to a simpler formula, which would actually produce the 2011 controversy that’d help kill the BCS.
Giving more weight to the polls and less to the computers was meant to avoid situations like the human media declaring one team the champ while machines arranged for two different teams to meet in the title game. (Again, that happened.) This was back when people didn’t want computers running their lives, before we started making AI cars, sex robots, and self-driving toasters.
But making the polls twice as powerful as the computers helped give Bama a rematch against LSU in 2011. The Tide made it by topping Oklahoma State, which ranked #2 in the computers.
Everyone hated the all-SEC rematch, TV ratings tanked, and like five months later, the Playoff was officially announced. The computers tried to give us Oklahoma State! Bleep blorp!
BCS chaos probably helps explain the Playoff committee’s hesitance to meddle with its process.
In 2019, the Playoff works exactly the same as it did in 2014: a dozen or so people go into a room, spit out rankings to ESPN, and send one member forth to briefly not explain anything to Rece Davis.
We can then sort of figure out why they did what they did, but it’s up to the public to reverse engineer the whole thing.
After a decade-plus of jokes about the BCS’ annual evolutions (we haven’t mentioned the 1999 Kansas State rule that assured top-ranking teams would make BCS non-title bowls, the brief “quality win” rule that added bonuses for beating top-10 teams, non-power teams finally earning entry if they met certain criteria, and so on), all following the proto-BCS iterations of the mid-‘90s?
Decision-makers just wanted to pick something and stick with it for a while.