Brian Bennett posted an interesting item on his blog several days back regarding an anonymous ESPN Magazine poll of Big East players.
Pitt's Dave Wannstedt was named best coach with 55.6 percent of the vote. "I don't know what kind of game-planner he is," says a rival lineman. "But I do know I would like playing for him. He coaches with passion."
Rutgers' Greg Schiano received 77.8 percent of the vote when players were asked which coach they'd least like to play for. I don't get that. Is it because Schiano is viewed as a strict disciplinarian? His players almost all graduate, after all.
I intuitively agreed with Bennett's guess. Wannstedt is a players' coach. Schiano has such a rep as a hardass that Rutgers fans openly joke about it.
Something seemed off about the numbers, though. Specifically, think for a second about the figures 55.6% and 77.8%. What's 55.6%? Five divided by nine. And 77.8% is seven divided by nine. I don't think ESPN would go to the trouble of contacting 900 players, so a safe assumption is that the sample size was a smaller multiple of nine. There's also a worry about whether the players were evenly distributed across conference schools, but let's be charitable and leave that issue aside.
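You can run this reverse-engineering trick yourself. Here's a small sketch (my own, not ESPN's methodology) that searches for the smallest sample size capable of producing every percentage the magazine printed, once each is rounded to one decimal place:

```python
def smallest_sample(percentages, max_n=200):
    """Return the smallest sample size n for which every reported
    percentage matches some whole count k out of n, rounded to one
    decimal place. Returns None if no n up to max_n works."""
    for n in range(1, max_n + 1):
        if all(any(round(100 * k / n, 1) == p for k in range(n + 1))
               for p in percentages):
            return n
    return None

# The figures the magazine reported: 55.6%, 77.8%, 22.2%, 11.1%
print(smallest_sample([55.6, 77.8, 22.2, 11.1]))  # → 9
```

Nine is the smallest sample that fits, which is consistent with every reported figure being a ninth: 5/9, 7/9, 2/9, and 1/9.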
Today I received my copy of ESPN Magazine in the mail. (Side note: while I don't really care for the bait-and-switch tactics used to market ESPN Insider/Magazine, I actually think it is a pretty good magazine and not at all worthy of its generally poor reputation.) The survey Bennett quoted is on page 84. According to the player survey, 55.6% of players predict Pitt will win the conference. Fair enough. 22.2% predict Cincinnati, and West Virginia and USF both come in at 11.1%, with no one voting for Rutgers, UConn, Syracuse, or Louisville.
USF at 11.1% and tied with West Virginia, along with zero votes for Rutgers or UConn seems more than a little implausible. USF has finished sixth in conference play the past two years, and that's where the preseason media poll predicted them to finish again. Conceivably one out of nine voters could have voted for USF to win the conference (especially if a Bull was allowed to vote for his own team), but there's no way that ten out of ninety voters voted for USF while zero voted for Rutgers or UConn. Even two out of eighteen would be a major reach.
Indeed, flipping to the front of the preview, it's clearly stated that the sample size for the entire survey (i.e., the sum total from each conference) was 135 players. Therefore, I'm going to go ahead and say with confidence that the entire Big East poll had only nine respondents, and the other conferences won't fare much better. ESPN really was grasping at straws in producing these articles. They undoubtedly have the resources to interview more players and produce conference surveys of value. ESPN's conference-specific bloggers shouldn't have linked to the pieces considering this gigantic caveat renders them next to useless.
So no, 77.8% of the players in the Big East didn't name Greg Schiano as the conference coach they'd least like to play for, irascible workaholic though he may be. 77.8% of poll voters did, or more likely, seven of nine anonymous Big East players did.
Note: for users not familiar with the various statistical concepts discussed above, I recommend reading Wikipedia's article on the concept of margin of error. If it's still not clear, here's a brief explanation: let's say you're wondering how many people in New Jersey like Rutgers football. If you ask your parents, and dad says yes, and mom says no, does that mean 50% of New Jerseyans like Rutgers football?
No, because the sample is A) not representative, and B) so small that it has a huge margin of error. If you asked another two people you could get entirely different results. To account for random variation, any poll has to survey a sufficient number of respondents. That drowns out the statistical noise and allows for making claims with a meaningful confidence interval. Or at least, that's my recollection from taking Stat 101 several years ago.
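To put a rough number on that, here's the standard Stat 101 normal-approximation formula for the margin of error on a proportion. (Caveat: at n = 9 the normal approximation is itself dubious, which only underlines the point about tiny samples.)

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p
    observed among n respondents (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Seven of nine players (77.8%) naming Schiano:
print(round(100 * margin_of_error(7/9, 9), 1))    # roughly ±27 points
# The same split among 900 players:
print(round(100 * margin_of_error(7/9, 900), 1))  # roughly ±2.7 points
```

A margin of error of about 27 percentage points means the "true" figure could plausibly be anywhere from around 50% to over 100%, i.e., the poll tells you almost nothing precise.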
Update: this post originally included a pretty dumb error that has since been corrected. The point still stands that the sample sizes for the conference surveys are way too small to be meaningful.