This season, ESPN released a new tool in its analytic arsenal: its College Football Playoff Predictor. The tool, according to ESPN, is a model that “is derived from the committee’s past behavior in its rankings (both in-season and on selection day) throughout the first four years of the playoff system.”

ESPN has used this Playoff Predictor throughout the season. It posts consistent updates on the college football section of its website. The Predictor is quoted in most articles by Senior Writer (and main CFP analyst) Heather Dinich. It is referenced in on-air broadcasts. In short, it’s a tool that ESPN is relying on heavily to monitor the College Football Playoff and predict how the committee will decide.

Unfortunately for ESPN, the Playoff Predictor is also pure nonsense.

Logically, we should realize that any predictive model of what the selection committee will do is rubbish. The committee is not a monolithic body; members rotate out every year. In addition, members have dropped out without being replaced, so for at least half of the playoff system’s existence, the full 13-person committee hasn’t even been the group making the decisions.

We can certainly note trends–I myself do that with the committee rankings every week–but that doesn’t mean we can predict what the committee will do before the season starts. I try to learn from the committee each week so that its weekly rankings can give us hints about what the final rankings will be. I don’t pretend to be able to read minds; I just work from what the committee has shown us so far.

That logical issue is far, far from the only problem with the Playoff Predictor, though. Let’s look in-depth at what ESPN claims the Predictor does, and why it’s flawed.

ESPN’s factors

ESPN leads its introduction to the Playoff Predictor with this explanation: “And through study of the committee, ESPN Analytics identified five key factors that determine each team’s chance to reach the playoff.” Again, leaving aside the issue that the committee is different every year, let’s evaluate these claims on their own.

1. Strength of Record (how much teams have accomplished)

2. FPI (how good teams are)

I’ll deal more with strength of record and FPI later, because that is the biggest issue. I will point out, however, that according to the official CFP protocols, the selection committee is not allowed to consider ESPN’s FPI, for two reasons. Here are the relevant protocols:

“While it is understood that committee members will take into consideration all kinds of data including polls, committee members will be required to discredit polls wherein initial rankings are established before competition has occurred;”

“Any polls that are taken into consideration by the selection committee must be completely open and transparent to the public;”

Yes, FPI is not technically a poll. Maybe this technicality matters, but I doubt it. Additionally, FPI is far from “completely open and transparent to the public.” FPI is proprietary, and no one (other than those at ESPN who program it) knows what is in it. The only description we have comes from ESPN:

“The Football Power Index (FPI) is a measure of team strength that is meant to be the best predictor of a team’s performance going forward for the rest of the season. FPI represents how many points above or below average a team is. Projected results are based on 10,000 simulations of the rest of the season using FPI, results to date, and the remaining schedule. Ratings and projections update daily.”

This is not a very descriptive or meaningful explanation of the ratings. We don’t know how FPI simulates the games or what goes into those simulations. How are incoming freshmen treated? How are injuries accounted for? How much do things like turnover luck and individual matchups affect the ratings? No one (that I am aware of) has ever done a full study of FPI’s track record as a predictive model. How often does it correctly pick games, both straight up and against the spread? These are very important questions, and ESPN doesn’t give us the answers.
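To be clear about what ESPN’s description does tell us: “10,000 simulations of the rest of the season” is a standard Monte Carlo setup. Here is a minimal sketch of how a power-rating simulation of that general shape could work. This is purely illustrative–FPI’s actual mechanics are not public, and the team names, ratings, logistic win-probability formula, and scale parameter here are all my assumptions, not ESPN’s.

```python
import math
import random

def win_prob(rating_a, rating_b, scale=7.0):
    # Hypothetical: convert a point-spread-style rating gap into a win
    # probability via a logistic curve. FPI's real formula is not public.
    return 1.0 / (1.0 + math.exp(-(rating_a - rating_b) / scale))

def simulate_season(ratings, remaining_games, n_sims=10_000, seed=42):
    """Estimate how often each team runs the table over the remaining
    schedule, across n_sims simulated seasons."""
    rng = random.Random(seed)
    unbeaten_counts = {team: 0 for team in ratings}
    for _ in range(n_sims):
        losses = {team: 0 for team in ratings}
        for team_a, team_b in remaining_games:
            # Flip a weighted coin for each remaining game.
            if rng.random() < win_prob(ratings[team_a], ratings[team_b]):
                losses[team_b] += 1
            else:
                losses[team_a] += 1
        for team, n in losses.items():
            if n == 0:
                unbeaten_counts[team] += 1
    return {t: c / n_sims for t, c in unbeaten_counts.items()}

# Hypothetical ratings ("points above an average team") and schedule.
ratings = {"Team A": 25.0, "Team B": 18.0, "Team C": 10.0}
remaining = [("Team A", "Team B"), ("Team B", "Team C"), ("Team A", "Team C")]
print(simulate_season(ratings, remaining))
```

Even this toy version makes the transparency problem concrete: the output probabilities depend entirely on choices (the rating inputs, the shape of the win-probability curve) that ESPN never discloses for FPI.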

SB Nation’s Bill Connelly also has a metrics-based rating of teams. His S&P+ ratings are far more open. He explains, weekly, what factors go into the ratings. He releases full box scores to show how a team really did and why the score alone might not be truly indicative of a team’s efficiency. Connelly tells the fans as much as he can while still keeping the rankings proprietary. In short, S&P+ is everything that FPI isn’t.

3. Number of losses (incorporated into SOR but the committee places even more emphasis on losses)

This is one thing that ESPN is clearly correct on. I have noted that the number of losses still generally determines the rankings, especially at the top. Sure, undefeated Florida State was behind two one-loss teams in the final 2014 rankings. That didn’t keep the Seminoles out of the Playoff, though.

In fact, the only time we’ve ever seen a two-loss team ahead of a one-loss team towards the top of the final rankings was 2015. Stanford finished the year at No. 6, ahead of No. 7 Ohio State. The Cardinal had a clearly superior resume, including one of the best strengths of schedule in the country. Ohio State looked impressive and had a ton of NFL talent (FPI really liked Ohio State that year), but could only pick up one ranked win. The Cardinal still finished below Iowa that year, though.

4. Conference championships

This is an easy claim to make, but a much harder one to substantiate. Everyone assumes that what got Ohio State into the Playoff in 2014 was its conference championship game. Jeff Long did say back then that the way Ohio State won that game pushed them over the edge. It’s also worth noting that Ohio State had a better strength of schedule and more quality wins than Baylor. TCU’s resume was close to Ohio State’s (each had three wins over committee-ranked teams), but the Horned Frogs lost to Baylor head-to-head. Also, Baylor and TCU were each conference champions in 2014. We can point out the Big 12’s cynicism in claiming them as such, but the committee has never given any indication that they pretended they weren’t conference co-champions.

Leaving Ohio State in 2014 aside, there are plenty of other examples of the committee not quite respecting a conference champion. Iowa was ahead of Stanford in 2015, even though the Cardinal had a better SOS and resume and were a conference champion. 2016 saw Ohio State–with a clearly superior resume–ahead of Pac-12 champion Washington (and Big Ten champion Penn State). The committee clearly gives some weight to conference championships, but it’s very clear that they matter far less than the resume.

5. Independent status (Notre Dame can’t be a conference champion, but all else being equal it might get more credit than a team that didn’t win its conference championship)

It’s really hard to know if this is meaningful at all. Does Notre Dame get some special treatment from the selection committee? Logic dictates they would–everyone involved in college football has some bias (towards or against) Notre Dame. That’s just a fact about the Irish. But Notre Dame has only really been ranked when it has a strong resume, and in that case the rankings have seemed fair.

Also, as a total aside, it’s pretty cynical of ESPN to refer to this category as “independent status.” BYU isn’t getting any special treatment for being an independent. Army and UMass certainly never have. This is a nice way of ESPN claiming that the committee is biased towards Notre Dame. Notre Dame has consistently received the benefit of the doubt from the committee (more than any other team not named Alabama), but it’s tough to claim that has been undeserved. The Irish consistently have had quality wins and a strong SOS, as well.

How can you predict the committee?

This passage shows the biggest problem with the Playoff Predictor:

“Strength of Record is the most important factor. Fifteen of the 16 playoff teams in the past four years have ranked in the top four of Strength of Record on selection day.”

How can ESPN claim that its Strength of Record metric is an accurate predictor of CFP selection? The committee has changed its talking points about what matters over the years. For most of 2014, it was “game control,” an entirely subjective factor that was just a euphemism for the eye test. That factor seems to have gone away; at least, the words “game control” are no longer used.

The committee has also shifted from talking about wins over ranked teams to “wins over teams with .500 or better records.” Of course, the committee has ignored this at times, like with LSU in 2016, but the committee has never been consistent in doing what it says it does. That’s part of the absurdity of ESPN’s Playoff Predictor.

The committee also doesn’t seem to use any SOS metric. Back in 2014, Jerry Palm explained that the committee seems to just address SOS by eyeballing it. They look at who teams have played and those teams’ records, and that’s enough. How can a fancy “Strength of Record” metric account for that? It just can’t.

The problem with FPI

For the conspiracy-minded, the above quote might indicate that ESPN is giving numbers to the selection committee, which the committee uses. As a fun fact for those anti-ESPN conspiracists out there: The Wikipedia page about the CFP used to say that “advanced statistics and metrics from ESPN are expected to be submitted to the committee,” but that line no longer appears.

Of course, the selection committee is made up of football experts and analysts who are well aware of ESPN’s conflict of interest. Taking proprietary metrics from the one entity most invested in how many people actually watch the games would be foolish in the extreme. ESPN doesn’t get to sit in (or have a reporter sit in) on the committee’s meetings. There is no reason to believe that the committee uses FPI at all, and given that, as stated above, doing so is explicitly against the protocols, I highly doubt it ever will.

Of course, what the committee does do is watch television. Members probably read articles on ESPN.com as well. ESPN is by far the biggest carrier of college football and airs the CFP. ESPN also has the most vested interest in maximizing ratings for the College Football Playoff. Back in 2015, when there was concern over CFP viewership on New Year’s Eve, Disney (the parent company of both ESPN and ABC) threw lines about watching the CFP into the ABC daytime soap General Hospital. Committee members absolutely must recognize these conflicts of interest before putting stock in any proprietary FPI metrics they see on ESPN.

I am not particularly conspiracy-minded. The committee has a lot of members, and they rotate; it’s all but impossible to keep a conspiracy that so many people know about under wraps. What ESPN is doing, though, is shaping and influencing public opinion.

Final thoughts

As long as what goes into FPI remains private, we can never know if ESPN is influencing public opinion to suit ESPN. The conflicts of interest are obvious and I doubt anyone at ESPN would deny them. When CFP viewership is higher, ESPN does better. And when certain teams are in the CFP, viewership is higher.

At its absolute best, ESPN is using this Playoff Predictor to claim an expertise that doesn’t exist. It can’t exist. We can only do our best to understand the committee each year–I certainly try to.

Of course, most of the selection committee’s decisions have been obvious anyway. There should be consensus on at least three top teams every year. Of the CFP’s 16 selections, 13 should have been essentially unanimous. Ohio State in 2016 was obvious to anyone who doesn’t believe conference championships should trump all. The only selections genuinely up for debate were Ohio State in 2014 and Alabama last year. It’s not hard to write a program that, looking back, would accurately predict at least 14 of the committee’s 16 selections.
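To show how little “code-cracking” that retrodiction actually requires, here is a toy selector built on the one factor I noted earlier that clearly does drive the rankings: loss count, with conference championships and a resume score as tiebreakers. The season data below is hypothetical, not real CFP history, and the “resume” numbers are an invented stand-in for whatever the committee eyeballs.

```python
def naive_playoff_pick(teams):
    """Pick four teams: fewest losses first, conference champions break
    ties, then a (hypothetical) resume score breaks what's left."""
    ranked = sorted(
        teams,
        key=lambda t: (t["losses"], not t["champ"], -t["resume"]),
    )
    return [t["name"] for t in ranked[:4]]

# Hypothetical season, for illustration only.
season = [
    {"name": "A", "losses": 0, "champ": True,  "resume": 9.1},
    {"name": "B", "losses": 1, "champ": True,  "resume": 8.4},
    {"name": "C", "losses": 1, "champ": True,  "resume": 7.9},
    {"name": "D", "losses": 1, "champ": False, "resume": 8.8},
    {"name": "E", "losses": 2, "champ": True,  "resume": 8.6},
]
print(naive_playoff_pick(season))  # → ['A', 'B', 'C', 'D']
```

A rule this crude would get most historical fields right, which is exactly why matching past selections proves nothing about predicting a different group of human beings next year.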

The problem is what ESPN is using this for. They’re pretending they’ve cracked the code to the selection committee. They haven’t, because there is no such code. The committee is a group of human beings. The group changes, and people’s minds can change. Members also discuss the rankings with each other, so different conversations can yield different results. One year, offensive prowess might be deemed more significant; in another, maybe the committee will favor defense. And with SOS being eyeballed, there is no consistent measure linking SOS evaluations from one year to the next.

ESPN wants us to believe that it’s the end-all, be-all source for college football information. That includes both watching games and expert analysis. ESPN is trying to convince us that it also includes looking ahead to who will make the Playoff. That final part, at least, is utter nonsense. It’s a marketing tool, not an actual Playoff Predictor.

About Yesh Ginsburg

Yesh has been a fan and student of college football since before he can remember. He spent years mastering the intricacies of the BCS and now keeps an eye on the national picture as teams jockey for College Football Playoff positioning.
