
Five Questions with Brian Fremeau of Football Outsiders

As the popularity of advanced statistical analysis of sports has grown in recent years, Football Outsiders has been riding the crest of that wave. FO's site and publications mostly cover the NFL, but they've started to delve more into the college game of late with the additions of Brian Fremeau and Bill Connelly. Each has developed his own ranking system - the Fremeau Efficiency Index and Connelly's S&P+ - with the methodologies behind both available on their respective pages. S&P+ has not been very kind to Rutgers all year because of the team's poor strength of schedule, but I wanted to talk to Fremeau because of a very curious result that came out of his findings (explained below). I recently fired off a few questions to Brian, who kindly agreed to respond here.

I'm not surprised that your Efficiency Index (FEI) ranked Rutgers 99th in offense this season, but I didn't expect the Scarlet Knight defense to rank 71st. Rutgers played an awful schedule, and those numbers take into account bad performances against Cincinnati and Syracuse. From my observations, though, it appeared to be a good defense, albeit with specific strengths and weaknesses. Is this low ranking related in any way to the team's turnover margin or field position? (note: FEI ranks Rutgers second in FBS in field position, and the NCAA ranked RU second this year in turnover margin)

The short answer is yes, field position and turnover margin can make a defense (or offense) appear to play better than it really did in a game. Since the offensive and defensive efficiency metrics I use neutralize opportunities and success for expectations based on field position and opponent strength, a team can limit an opponent to only a handful of points scored but still not have played a very strong defensive game. There is also a "relevance" number I use for the overall team OFEI and DFEI scores, and as referenced in the FEI definitions, poor performances against poor teams receive extra weight, so the Syracuse game (which ranks among the worst single-game defensive efforts of any team in 2009) drags down Rutgers' defensive rating pretty significantly.

Let's talk turnover margin, which is typically used as an indicator of teams experiencing abnormal luck swings. Rutgers was 10th in 2006, then fell off in both record and turnover margin the following season, even though they arguably had a more talented roster. Now they're #1 (note: #2 following the bowl season), and past top-ranked teams like Minnesota (2005), TCU (2006), Kansas (2007), and Oklahoma (2008) all won fewer games the following year. How good is this statistic as a general indicator of teams that will regress? Is this number misleading because so many of RU's recovered turnovers came against overmatched non-conference teams?

One of the first principles of Football Outsiders as pertains to the NFL is that interceptions and forcing fumbles are indicative of team skill, but recovering fumbles has more to do with pure luck. When evaluating turnover margins on the NCAA level, is it more informative to just look at fumble recoveries? With shorter seasons, far more teams, and a much wider disparity in talent, how applicable are FO's NFL findings to the college game?

I think it is generally true that teams with abnormal luck one season can be expected to regress the following season. We haven't done any sweeping studies on the phenomenon for college football, but I've observed some of the same anecdotal data you have. I think you might be right that mismatched opposition might cloud the observations, so that would definitely be something worth paying attention to. But, as you say, the frequency of turnovers in general and the relative infrequency of games in college football will make a conclusive study difficult.

One of the reasons that I like DVOA, FEI, S&P+, and similar offerings is that they go beyond wins and losses. Sometimes the better team falls victim to a few unlucky bounces. One reason that I was so down on Rutgers this year was that the offense struggled to maintain drives, which put an awful burden on the defense. The team kept winning with turnovers, trick plays, and special teams, which didn't seem sustainable, and ultimately wasn't. Is this a good mentality (e.g. forget the smoke and mirrors, good teams need to be consistently effective) for evaluating teams, and predicting future performance?

Yes, I think so. Boise State is a somewhat famous example of a team for which trick plays appear to be a significant part of its identity, and I don't brush off strong special teams play as an important factor, but I definitely agree that unless a team can consistently play well possession by possession, there's not much to hang its hat on.

That's what we're working on. I think OE and DE are solid metrics, but there are disguises there, too. Cincinnati was the most efficient team in the nation in terms of points earned relative to starting field position expectations. And yet, they only ranked in the 30s in terms of fewest three-and-outs. There's something interesting there, and it's something I'm going to investigate this offseason to improve the way I use drive data and determine which things are important.

Last spring, an article in the Wall Street Journal posited that returning experience on the offensive line predicted future success. That seemed intuitive enough, although I was dubious about their methodology. In 2008, Rutgers broke in a young offensive line, and that group improved as the season went on. Everyone was back this year, and even though that unit was expected to be a strength, it was one of the worst in the country and one of the direct causes of this year's struggles. Was the WSJ full of it, and can you think of any reason or precedent for why a position group would suddenly fall off a cliff?

I remember reading the same article, but I too wonder about its reliability. I haven't had a chance to research it myself. Anecdotally, I've seen position groups at my own alma mater, Notre Dame, perform bizarrely from year to year. The Irish secondary was supposed to be a strength of the team this season, but they played disastrously at times. It was, I believe, an extension of coaching changes (the position coach became assistant head coach, a new DC arrived, and possible confusion about scheme and assignments resulted) and weaknesses in other areas (a young D-line couldn't produce any pressure, leaving the secondary susceptible). I've seen young O-lines play well, I've seen massive turnover on one side of the ball not impact a team as much as one would expect, etc. I think in a macro sense, there are reasons to believe the WSJ article, but for individual teams, there's going to be some variance.

Is the college game less conducive to statistical analysis than the NFL? Do you have anything interesting planned on the horizon for the site or next year's almanac, like something corresponding to DYAR for individual skill players? College game charting is terribly infeasible (it's too bad, because I'd love to, say, know how often Greg Schiano's defenses blitz), so is there any other way to gather more data?

I looked at Bill Connelly's chapter "Recruiting and the Ruling Class" in the 2009 Football Outsiders Almanac with great interest. Now, I think the holy grail on this topic would be a regression analysis determining exactly what the correlation is between (a) recruiting class rankings and winning, and (b) player ratings and starts, counting statistics, team wins, etc. The only attempt along these lines that I'm aware of was a recent Univ. of Oklahoma study, which was very limited in scope. Understandably so, given how much labor would be required for a detailed study like this and the inherent design difficulties involved. Do you have any thoughts on what future work on this topic should look like, and do you or Bill have any intentions to revisit it in the future?

I've definitely got projects scheduled for the offseason to expand the drive data analysis I've done thus far. I do think there are limitations on how far that analysis can go -- we don't have the resources to chart every game (especially those that aren't televised, of course), but there is much more we can do with the data we do have that we haven't had the chance to do yet.

Bill is definitely going to be expanding the recruiting analysis he started in last year's FOA, and I'm fairly certain he has a plan to start working with individual player ratings.

Thanks again to Brian for answering the questions. You can read more of his work at Football Outsiders, and on his personal site BCF Toys.