Upon Further Review: An Empirical Investigation of Voter Bias in the Coaches’ Poll in College Football
### Abstract
#### Purpose
The popularity of NCAA football continues to grow. As revenues increase, the difference between a BCS bowl berth and a non-BCS bowl berth can amount to millions of dollars. Thus, the process by which schools are selected to play in a BCS bowl game is very important. In this paper, we analyze one of the components of the BCS ranking system: the Coaches’ Poll.
#### Methods
Data from the final regular season Coaches’ Poll from 2005 through 2010 were analyzed in order to explore whether coaches were biased in their voting in three different areas: voting for their own team, voting for teams in their conference and voting for teams from Non-Automatically Qualifying (N-AQ) conferences.
#### Results
Through analyzing a Coach’s Difference Score (CDS), we found that coaches had a positive bias towards their own team; that is, they vote their own team higher than their peers do. We also discovered that coaches tend to vote schools from their own conference higher than do coaches from outside that conference. Finally, we concluded that coaches from the six Automatically Qualifying (AQ) conferences were biased against schools from the smaller N-AQ conferences.
#### Conclusions
After discussing potential reasons why these biases occur, we put forth several questions for future researchers to explore. We then make several suggestions to improve the voting process in order to make it as objective as possible.
**Key Words:** College Football Coaches’ Poll, Voting Bias, BCS Implications
### Introduction
Every year in college football, a debate occurs about which team should be ranked higher than another, and 2010 was no different. With three teams finishing their regular seasons undefeated, it was up to the Bowl Championship Series (BCS) rankings to determine which two teams would play for the national championship. While Auburn defeated Oregon in Glendale, Arizona, on January 10, 2011, and was crowned champion, fans over a thousand miles away in Fort Worth, Texas, were left to wonder, “Could TCU have beaten Auburn?” Thus, the scrutiny of the BCS continues.
The BCS system was started in 1998 as a way to bring the top-two ranked teams face to face in a bowl game to determine a national champion (3). Prior to the BCS, the bowls tried to match number one versus number two, but with guaranteed conference tie-ins, such as that of the Pac 10 and the Big 10 to the Rose Bowl, it was not always possible. When the Rose Bowl relented, the BCS was born. According to the official BCS website, “The BCS is managed by the commissioners of the 11 NCAA Football Bowl Subdivision (“FBS”) (formerly Division I-A) conferences, the director of athletics at the University of Notre Dame, and representatives of the bowl organizations. The conferences are the Atlantic Coast, Big East, Big Ten, Big 12, Conference USA, Mid-American, Mountain West, Sun Belt, Pacific-10, Southeastern and Western Athletic” (3).
As of 2005, the BCS standings are determined by averaging three different rankings: the Harris Poll, computer rankings and the Coaches’ Poll. The Harris Poll is run by a marketing research firm, Harris Interactive, and is “comprised of 114 former college football players, coaches, administrators and current and former members of the media…randomly selected from among more than 300 nominations” (10) from the FBS. The final computer ranking used is an average of the rankings from six different firms/individuals that mathematically calculate a team’s ranking based on wins, strength of schedule, etc. (3). The Coaches’ Poll is run by *USA Today* and the American Football Coaches Association (AFCA) and comprises approximately 60 coaches; 50% of the coaches in each conference are randomly selected to vote (18).
This research explores one component of the BCS: the Coaches’ Poll. In particular, we investigate to what extent coaches have been biased in their voting. Bias, as defined herein, is considered to be present when a coach ranks a team significantly differently than the other voting coaches in the poll do. Why is this important? With teams often separated by a few tenths of a point in the BCS standings, ensuring the integrity of the rankings is critical. The BCS standings can determine a team’s bowl game and/or a coach’s bonus. For example, Iowa coach Kirk Ferentz received a $225,000 bonus for finishing in the Top 10 of the BCS rankings in 2009, and another $175,000 bonus for playing in the BCS Orange Bowl that season (8). In 2010, the BCS bowl payout was 17 million dollars (6), while the non-BCS bowl payouts were much less (e.g., the 2010 Capital One Bowl had the highest non-BCS bowl payout at 4.25 million dollars (6)). So, biased decisions may affect not only the coaches who make these decisions, but other coaches and universities as well.
Prior to 2005, the coaches’ votes were not made public. Then, in response to added pressure for transparency, the FBS coaches voted to make the final regular season Coaches’ Poll public, agreeing to have the ballots published in *USA Today* (7). However, the decision was not unanimous. According to Texas coach Mack Brown, who was initially not in favor of making the votes public, “It can put coaches in a difficult situation” (7). How did the first year of public voting go? According to Sports Illustrated writer Stewart Mandel, it was “the equivalent of a high school student-council election,” with “Oregon coach Mike Bellotti, his team about to be squeezed out of the BCS by Notre Dame, placing the Ducks fourth and the Irish ninth,” and “Arkansas coach Houston Nutt ranking SEC rival Auburn third and Big East champion West Virginia … nowhere” (14). Even Coach Steve Spurrier of the University of South Carolina has questioned the validity of the Coaches’ Poll, remarking, “I guess we vote ’cause college football is still without a playoff system. I really believe most coaches do not know a whole lot about the other teams” (9).
With increasing scrutiny of the coaches’ voting patterns, the AFCA hired The Gallup Organization in early 2009 to analyze the coaches’ voting and make recommendations. “The perception is that there’s a huge bias, and we’ve never really found that,” claimed former Baylor coach and current AFCA Director Grant Teaff (2). One of Gallup’s key recommendations was to make the coaches’ final regular season votes private. However, after seeing the response to a *USA Today* poll of over 4,000 readers, in which 79% of fans felt the coaches’ final regular season votes should remain public because “it is important they are accountable” (20), the AFCA put the decision to a vote of all FBS head coaches, and the results indicated that the final regular season votes should remain transparent. Consequently, the AFCA reversed course and kept the final regular season votes public in 2010 (1).
Even with the continued visibility of the voting, one thing remained consistent in 2010: scrutiny. For example, in the final vote of 2010, one coach returned his ballot with TCU ranked number one, which is against AFCA rules (the AFCA instructs every coach to list the winner of the BCS National Championship Game as the top ranked team), and two other coaches in the poll failed to turn in their ballots at all (18). While the final votes of each coach are not made public, these types of mishaps still fuel the debate: Should coaches have a part in the BCS rankings?
Previous researchers have discovered that individuals can be biased towards others in society (11, 12), and that people can also be biased when voting (16, 17). Specifically, researchers have examined voter bias in college football polls. For instance, Coleman et al. (5) concluded that voters in the 2007 Associated Press college football poll were biased in a number of different ways, including voter bias toward teams in their home state. In another study, Campbell et al. (4) discovered that “the more often a team is televised, relative to the total number of own- and opponent-televised games, the greater the change in the number of AP votes that team receives,” (p. 426) when they analyzed the AP votes from the 2003 and 2004 college football seasons. A study by Paul et al. (15) also examined AP voting bias, but included coaches’ voting as well. Their research looked at both of these polls from the 2003 season, and they determined that the spread or betting line on a game is “shown to have a positive and highly significant effect on votes in both polls. A team that covers the point spread will receive an increase in votes in both polls. A team that wins, but does not cover the point spread, will lose votes” (15, p. 412). In 2010, Witte and Mirabile (21) extended the literature by examining several seasons of Coaches’ Poll data, and they concluded that voters tended to “over-assess teams who play in certain Bowl Championship Series (BCS) conferences relative to non-BCS conferences” (p. 443).
While research on voting bias in college football polls exists, few researchers have investigated bias in the Coaches’ Poll in any great depth. Hence, the purpose of this research is to determine whether college football coaches are biased when they vote and, if they are, what kinds of biases they hold. Specifically, we look at three areas of potential bias, put forth in the following null and alternate hypotheses:
> H1o: Own-School Bias – Coaches do NOT rank their own teams significantly differently than the other voting coaches do.
>
> H1a: Own-School Bias – Coaches do rank their own teams significantly differently than the other voting coaches do.
>
> H2o: Own-Conference Bias – Coaches do NOT rank teams within their own conference significantly differently than coaches from outside that conference do.
>
> H2a: Own-Conference Bias – Coaches do rank teams within their own conference significantly differently than coaches from outside that conference do.
>
> H3o: N-AQ Bias – Coaches from schools in the AQ conferences do NOT rank N-AQ teams significantly differently than the N-AQ coaches do.
>
> H3a: N-AQ Bias – Coaches from schools in the AQ conferences do rank N-AQ teams significantly differently than the N-AQ coaches do.
Combining the three hypotheses, a model for Coaches’ Voting Bias is shown in Figure 1.
The first hypothesis investigates whether coaches can be objective when ranking their own teams. Do coaches rank their own teams about the same as other coaches rank them, or do coaches tend to over-estimate their own team’s ranking? The second hypothesis explores whether coaches rank teams in their own conference impartially. A team’s quality of wins and losses can often shape perceptions of how good it is, and if a coach makes teams in his conference look superior to those from other conferences, perceptions of the strength of his own team may increase. Our final hypothesis examines what is commonly called “big school bias.” Namely, do coaches from the six traditional power conferences that have automatic qualification tie-ins with the BCS (AQ teams) tend to underestimate the strength of teams from the smaller conferences, whose winners do not automatically qualify for a BCS bowl (N-AQ teams)?
### Methodology
The sample for this research was the final regular season coaches’ ballots for the 2005 through 2010 college football seasons published in the *USA Today Coaches’ Poll*. In each of these years, a coach who is selected to vote ranks his top 25 teams by awarding 25 points to his top ranked team, 24 points to his second ranked team, and so on until the 25th ranked team is awarded a single point. Appendix A lists the various coaches and the years during which each has been a member of the *USA Today Coaches’ Poll*. Table 1 aggregates the data by conference.
Because the number of coaches who vote each year varies slightly, a simple linear transformation of the total point system was employed by calculating a voter’s “difference score,” which is the number of points the voter awarded a team minus the average number of points that team received. For instance, in 2008, the #20 Northwestern Wildcats received a total of 334 points and 61 coaches voted in that poll. Northwestern’s average points per coach is therefore 334/61 = 5.475. In that poll, Coach Bret Bielema of Wisconsin gave Northwestern 8 points; thus, his difference score was 8 – 5.475 = 2.525. On the other hand, Coach Art Briles of Houston gave Northwestern only 4 points that year, resulting in a difference score of 4 – 5.475 = -1.475.
In general, a positive difference score indicates that a coach ranked a team higher than that team’s average score, while a negative difference score indicates that a coach ranked a team lower than its average score. A small difference score represents a case where a coach ranked a team very close to the average ranking of his peers; in contrast, a large difference score suggests a coach disagreed with his peers about where a team should be ranked. The 25 difference scores for each individual coach sum to zero each year, because every time a coach votes a team higher than his peers do, he must vote another team (or a combination of teams) lower. Likewise, when all the coaches’ difference scores for a single team are summed, the total is zero (i.e., for every coach who votes a team higher than its final average, there must be a coach, or combination of coaches, who votes that team lower). The key unit of analysis in this study is therefore a term we have labeled the Coach’s Difference Score, or CDS.
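To make the CDS calculation concrete, the following sketch reproduces the Northwestern arithmetic in code. The ballot layout, function names and variable names are our own illustrative assumptions; they are not part of the published poll data.

```python
# Minimal sketch of the Coach's Difference Score (CDS) calculation,
# assuming each ballot is stored as a dict mapping team name to the
# points awarded (25 for the coach's #1 team down to 1 for #25).

def rank_to_points(rank):
    """Convert a ballot rank (1-25) into poll points (25-1)."""
    return 26 - rank

def difference_scores(ballots, team):
    """Return each coach's CDS for one team: points awarded minus the
    team's average points across all ballots (unranked teams count as 0)."""
    n_coaches = len(ballots)
    avg = sum(b.get(team, 0) for b in ballots.values()) / n_coaches
    return {coach: b.get(team, 0) - avg for coach, b in ballots.items()}

# 2008 example: Northwestern earned 334 points from 61 voters, an average
# of 334/61 = 5.475 points per coach, so a coach who awarded 8 points has
# a CDS of 8 - 5.475 = 2.525. By construction, the CDS values for a single
# team sum to zero across all coaches.
```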
One notable feature of the Coaches’ Poll is how few points can separate teams. Roughly seventeen percent of adjacent positions in the poll were separated by fifteen or fewer points. In fact, on fifteen occasions, an average of 2.5 per year, fewer than six points separated two teams, including in 2008, when a single point separated #1 Oklahoma (1,482 points) and #2 Florida (1,481 points).
### Results
In order to test Hypothesis 1, which explores whether there are any discernible patterns in how coaches rank their own schools, we employed a simple t-test. If no significant bias is present, coaches who vote on their own teams will, as a group, have a mean CDS of zero (i.e., for every coach who ranks his team higher than his peers do, a corresponding coach would rank his team lower). However, if coaches tend to err consistently on one side or the other relative to their peers, then the mean difference score will not equal zero. We tested each of the six years individually and collectively. The results are summarized below in Table 2.
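A minimal sketch of this test, assuming the own-team CDS values for a given year have been collected into a list (the numbers below are placeholders, not the study’s data):

```python
# One-sample t-test of whether the mean own-team CDS differs from zero.
from scipy import stats

own_team_cds = [2.5, 1.2, 3.1, 0.8, 2.0, -0.4, 1.7]  # illustrative values only
t_stat, p_value = stats.ttest_1samp(own_team_cds, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```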
As illustrated in Table 2, in all six years the mean CDS, which ranged from a low of 1.61 (in 2008) to a high of 3.12 (in 2007), was significantly different from zero at the p < .01 level. This result leads us to reject the null hypothesis that there is no bias in the way coaches vote for their own school; coaches do tend to rank their own teams significantly higher than other coaches do. Over the entire sample, coaches, on average, ranked their own team 2.32 positions higher than did their peers. We explored this result further by performing an ANOVA across the six years to see whether any one year’s bias was significantly higher or lower than the others. The ANOVA result was not significant (F = 1.083, p = .373), leading us to conclude that there is no statistical evidence that the bias changes from one year to the next. While this result might seem trivial on the surface, it tells us that no matter how much the composition of the voting group changes (e.g., only eight of the sixty-two coaches who voted in 2005 were still voting in 2010), the coaches vote in a fairly homogeneous way when it comes to ranking their own teams.
In order to test Hypothesis 2 for within-conference bias, we assessed the CDS of each voting coach with respect to the other members of his conference. To control for own-school bias, we did not include a coach’s own team in the analysis. We tested for this own-conference bias for individual seasons as well as collectively for the entire span of years examined, employing the same set-up and methodology (i.e., a t-test) used to test H1. Table 3 reports the results of these t-tests.
All of the t-test results were statistically significant (p < .001), leading us to reject the null hypothesis. This result suggests that coaches do rank their own conference members higher than coaches outside the conference do. While the overall mean CDS of 1.19 might seem small, keep in mind that some conferences have as many as seven voting members and others as few as three, which could translate into a net advantage of nearly 5 points (1.19 * (7 – 3)) for teams in a conference with seven voting members relative to teams in a conference with three.
To explore this bias further, we conducted two additional ANOVA tests: the first to discover whether the mean CDS had changed across the six years, and the second to determine whether any one conference’s coaches have a higher CDS than those of other conferences. With an F-statistic of 4.286, the first ANOVA was significant at the p < .001 level. A post-hoc Tukey test (p < .05) indicated that own-conference bias was significantly higher in 2007 (with a mean of 1.95) than in each of the other years. This result might be explained by noting that, in 2007, only two teams from the traditional AQ conferences, Ohio State and Kansas, had one loss or fewer, while a total of ten teams had two losses, potentially making it very difficult to sort out which schools rounded out the Top 10. As a result, coaches ranked the schools with which they were familiar (their own-conference schools) higher than other schools. That year was the only one within our sample range with such a cluster of teams with similar records; sixteen different teams received top 10 votes that year, the largest number in any year.
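The year-by-year comparison could be run as sketched below, assuming the own-conference CDS values sit in a pandas DataFrame with hypothetical `year` and `cds` columns; the one-way ANOVA uses scipy and the post-hoc Tukey test uses statsmodels (the rows shown are placeholders, not the study’s data).

```python
# One-way ANOVA across years plus a Tukey HSD post-hoc comparison.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per (voting coach, same-conference team) pair; placeholder values.
df = pd.DataFrame({
    "year": [2005, 2005, 2006, 2006, 2007, 2007],
    "cds":  [1.1, 0.9, 0.8, 1.0, 2.1, 1.8],
})

groups = [g["cds"].values for _, g in df.groupby("year")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

tukey = pairwise_tukeyhsd(endog=df["cds"], groups=df["year"], alpha=0.05)
print(tukey.summary())
```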
We then turned our attention to analyzing own-conference bias broken down by conference membership. Table 4 gives the descriptive statistics for each conference. The ANOVA leads us to reject the null hypothesis of equal bias across conferences (F = 4.286, p < .001). Post-hoc Tukey comparisons reveal that the own-conference bias of voters from the WAC was significantly greater than that of coaches in the ACC, Big 10, Big 12, C-USA, MAC, Pac 10 and SEC (p < .01). The primary beneficiaries of this effect were Hawaii in 2007, with the four WAC coaches voting Hawaii an average of 5.27 positions higher than non-WAC coaches did, and Nevada in 2010, with the four WAC coaches voting Nevada an average of 5.4 positions higher than non-WAC coaches did.
Our final hypothesis, H3, investigates whether bias exists in the way coaches from AQ conferences, whose champions automatically qualify for a BCS spot, vote versus the way coaches from conferences that do not have automatic tie-ins, referred to as N-AQ conferences, vote. The six AQ conferences are the ACC, Big 10, Big 12, Big East, SEC and Pac 10. The remaining five conferences are categorized as N-AQ: C-USA, MAC, MWC, Sun Belt and WAC. Ultimately, we are testing what many journalists call “big school bias”: whether or not coaches from the six AQ conferences are biased against the smaller N-AQ schools. To test for this bias, we assessed how coaches from the AQ schools ranked the N-AQ schools, compared to how coaches from the N-AQ schools ranked the other N-AQ teams. If no bias is present, the mean CDS of the two groups of coaches will be equal. In order to control for the biases demonstrated above, we excluded each N-AQ coach’s votes for his own team and for teams within his own conference. For example, for Gary Patterson, the head coach of Texas Christian University (TCU), we analyzed his voting record for all the schools from C-USA, MAC, Sun Belt and WAC, but did not include his voting record on schools from the MWC, the conference in which Patterson’s TCU team played during this period, or his voting record for TCU. We performed this test on each of the six years individually, as well as collectively. The results are presented below in Table 5.
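A sketch of this comparison, assuming one row per coach-team vote with hypothetical flag columns for the voter’s AQ status, conference overlap and own-team votes (the rows and the choice of Welch’s t-test are our assumptions; the paper does not specify the exact test configuration):

```python
# Compare how AQ and N-AQ coaches rank N-AQ teams, excluding each N-AQ
# coach's votes for his own team and for teams in his own conference.
import pandas as pd
from scipy import stats

votes = pd.DataFrame({          # placeholder rows, not the study's data
    "voter_is_aq": [True, True, False, False, False],
    "team_is_naq": [True, True, True, True, True],
    "same_conf":   [False, False, False, False, True],
    "own_team":    [False, False, False, False, False],
    "cds":         [-0.9, -1.1, 0.4, 0.7, 2.0],
})

naq_votes = votes[votes["team_is_naq"]]
aq_group = naq_votes.loc[naq_votes["voter_is_aq"], "cds"]
naq_group = naq_votes.loc[~naq_votes["voter_is_aq"]
                          & ~naq_votes["same_conf"]
                          & ~naq_votes["own_team"], "cds"]

t_stat, p_value = stats.ttest_ind(aq_group, naq_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```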
In five of the six years, there was a statistically significant bias (p < .05). The largest bias occurred in 2007, when AQ coaches ranked N-AQ teams an average of 1.92 spots below the positions assigned by N-AQ coaches. The only year without a significant bias was 2009. An investigation of that year suggested a couple of possible explanations. First, in two of the three previous years, N-AQ teams had significant BCS bowl wins: after the 2006 regular season, Boise State defeated then-#8 Oklahoma in the Fiesta Bowl, and after the 2008 regular season, Utah beat then-#4 Alabama in the Sugar Bowl. Second, during the first few weeks of the 2009 season, when teams generally play out-of-conference games, several teams from N-AQ conferences had wins over good teams from AQ conferences: TCU beat a Clemson team that would win nine games and go on to win its division in the ACC, Boise State beat the #16-ranked Oregon Ducks, a team that won the Pac 10 and went on to play in the Rose Bowl, and BYU beat then-#3 Oklahoma. These high-profile wins may have played a significant role in reducing the bias against N-AQ teams.
When the six-year period is examined collectively, the AQ coaches ranked the N-AQ teams 0.80 places lower than the N-AQ coaches ranked those same teams (p < .001). While this margin might at first seem small, recall that, as Table 1 shows, an average year has 10.5 more AQ coaches than N-AQ coaches voting, so the resulting bias can have a significant effect on the overall point totals and rankings.
### Discussion
This study demonstrated that coaches who are selected to vote in the *USA Today Coaches’ Poll* are subject to at least three different kinds of bias. First, coaches are biased toward their own teams. On average, coaches rank their own school 2.32 positions higher than do their peers. Indeed, the effect is so prevalent that 92.1% (82 out of 89) of coaches whose school finished in the top 25 ranked their own school higher than the average of the other coaches. In two years, 2007 and 2010, every single coach ranked his own team higher than its final position. Twenty-eight of the 89 coaches (31.5%) ranked their school at least three positions higher than the average of their peers, and 11 of 89 (12.4%) voted their team at least five positions higher. One coach even voted his team 9.71 positions higher than the average of the other coaches’ rankings. In contrast, the most any coach ranked his own team lower than did his peers was 1.18 positions. This bias seems to be a natural phenomenon: social psychologists have extensively studied the concept of illusory superiority (11, 12), which describes how individuals view themselves as above average in comparison to their peers.
The second form of coaches’ bias found was bias toward their own conference. Over the six-year period from 2005 to 2010, coaches voted their conference members 1.19 positions higher than their average ranking. Representative examples of this type of bias are worth discussing. For example, in 2009, Mississippi received 87.5% of their total points from SEC coaches, who made up less than 12% of the voters. Similarly, in 2008, the Iowa Hawkeyes received 62% of their votes from Big 10 coaches, who made up less than 10% of the voting population. Further evidence of this effect is apparent when comparing two teams that finished very close in the rankings. For example, in 2009, Oregon and Ohio State were #7 and #8, respectively, in the poll, separated by only 19 points. All five Pac 10 coaches voted Oregon ahead of Ohio State, while four of the five Big 10 coaches voted Ohio State ahead of Oregon (interestingly, Jim Tressel, the coach of the Buckeyes, was the only Big 10 coach to put Oregon ahead of Ohio State). A similar phenomenon happened in 2010, when Oklahoma and Arkansas were tied for the #8 ranking: six of the seven Big 12 coaches voted Oklahoma higher, while five of the six SEC coaches voted Arkansas higher.
When comparing the bias across conferences, the WAC was found to be the most biased, with its coaches voting their conference’s teams an average of 2.97 places higher. Perhaps this is due to the WAC coaches trying to overcompensate for the perceived bias that other voting coaches hold against this N-AQ conference. In 2007, Hawaii went undefeated, yet finished #10 in the overall rankings behind seven teams from AQ conferences that had two losses, in large part due to the voting of AQ coaches. A similar result occurred in 2006, when Boise State went undefeated and finished #9 in the rankings behind three AQ teams with two losses.
The third form of bias discovered in this research was that of coaches toward N-AQ conference teams. Looking more closely at the numbers shows that while this bias does exist, it seems to be diminishing over time. The effect was at its highest in 2007 when AQ coaches ranked N-AQ teams 1.92 positions lower than did the N-AQ coaches. As previously mentioned, there was no significant difference in the CDS with regard to N-AQ teams in 2009, and this might be due to some significant wins N-AQ schools have had over AQ schools in recent years. Moreover, it will be interesting to see if the N-AQ bias is further reduced in the 2011 season after TCU’s defeat of Big 10 Champion Wisconsin in the 2011 Rose Bowl, which led the Horned Frogs to a #2 ranking in the final standings–the highest for any N-AQ team during the period we surveyed. The Rose Bowl win brought the N-AQ teams to a very impressive five wins to two losses in their BCS Bowl appearances. Their 71.4% winning percentage is higher than that of any of the AQ conferences.
There are a number of limitations to this study. For one, the data were limited to the final regular season *USA Today Coaches’ Poll*, as that is the only data made public. If more data are provided in the coming years, future researchers will be able to investigate whether coaches’ bias varies throughout the season. Greater availability of data will also allow researchers to use more sophisticated analysis techniques, such as time series methods or logistic regression. Secondly, the sample sizes for some of our subgroups were rather small. For example, because only one MAC team made the top 25 rankings, only five MAC coaches’ votes were used to assess the MAC’s own-conference bias. As more data are collected over time, and possibly more MAC teams make the top 25 poll, future researchers can replicate this study on a larger sample.
There are many fruitful areas remaining for future researchers to continue exploring bias in the Coaches’ Poll. Researchers can analyze where bias is strongest: are coaches most biased when ranking teams in the top third of the standings, the middle third, or the bottom third? Previous research has shown that TV exposure impacts how media members vote (4); future researchers can determine whether it has any effect on the way coaches vote. The 2011 and 2012 seasons will see a shift in conference membership, and future researchers can attempt to discover what effect this has on bias. One particular study could examine Utah and TCU, two N-AQ teams moving to AQ conferences, the Pac 10 and the Big East, respectively. Will AQ coaches now see these teams as AQ teams, or will they continue to see them, and thus penalize them, as N-AQ teams? Finally, a ripe area for exploration involves gathering the coaches’ opinions on the subject. Do coaches think that they themselves are biased? Do they think their colleagues are biased? And if coaches do think other coaches are biased, do they try to compensate for it?
### Conclusion
One thing is certain: the current BCS system has flaws, which lead to frequent criticism from fans and the media. While every system, including a playoff, has advantages and disadvantages, the BCS should continually evaluate itself in an effort to make improvements; if it does not, the scrutiny will only increase over time. For example, Wetzel, Peter and Passan’s 2010 book, *Death to the BCS*, has garnered much attention in the media. The authors refer to the BCS as an “ocean of corruption: sophisticated scams, mind-numbing waste, and naked political deals” (19). In fact, after reading this book, Dallas Mavericks owner Mark Cuban formed his own company in late 2010 in an effort to create a playoff system that would challenge the BCS in the future (13).
In our opinion, BCS officials should consider making several changes. For one, they should use an email-based ballot to make it easier for coaches to vote, instead of the antiquated phone-in ballot system currently used. Moreover, they should not require all ballots to be turned in so soon after the weekend games; coaches simply do not have enough time to thoroughly analyze all of the teams within 24 hours of finishing their own games, so the BCS could consider moving the voting deadline to later in the week. Secondly, coaches should not be allowed to vote for their own team; if this rule were implemented, own-school bias would be eliminated. These last two recommendations are not new; both were made by Gallup when it was hired by the AFCA to examine the Coaches’ Poll in 2009 (20). While the AFCA decided not to implement them, we feel that, given the dollars involved in the BCS rankings, these would be easy improvements to the system. Lastly, why not let every FBS coach vote? Normally, a sample or subset of the population is used because of the expense of a census, but in this case the population is not very large, at only 120 FBS coaches, nor is the process very complex or time-consuming. Allowing all coaches to vote may help reduce the own-school benefit that about half of the teams currently receive. Moreover, as most conferences are roughly the same size, this measure would also help reduce the disparity in the number of voters from each conference, thus minimizing the effect of own-conference bias.
Overall, our research has highlighted some important issues with the Coaches’ Poll. Bias in voting has occurred in the political arena in many different forms (16), and researchers have discovered that the amount of information voters possess can affect voting preferences (17). Perhaps the AFCA could similarly inform the voting coaches using our research results: if the coaches were to see how much bias occurs and the different forms it takes, they might be encouraged to vote more objectively.
Under the current system, we found three different forms of bias present in the *USA Today Coaches’ Poll*: bias toward one’s own team, bias toward one’s own conference and bias against teams in N-AQ conferences. These are significant findings, as the Coaches’ Poll accounts for one-third of the BCS formula, a formula that, in turn, can mean the difference between a team going to a bowl with a payout of $17 million versus a fraction of that amount.
### Application In Sport
This research has several applications for those in sport. For one, BCS and college football administrators now have a better understanding of the biases that coaches exhibit (intentionally or unintentionally) when voting. Hopefully, some of the changes suggested in our Conclusion section can be made to improve the process. In addition, sport management researchers and students can continue to analyze the numbers to investigate other forms and levels of bias, now that this study has provided a framework, namely the CDS, as a basis for voting comparisons.
### References
1. Anonymous (2009, November 6). AFCA to continue release of final regular season Coaches’ Poll ballots. Retrieved from <http://www.afca.com/ViewArticle.dbml?DB_OEM_ID=9300&ATCLID=204828450>
2. Associated Press. (2009, May 6). Coaches mull changes in football poll. Retrieved from <http://sports.espn.go.com/ncf/news/story?id=4147492>
3. BCSfootball.org. (n.d.). Retrieved from <http://www.bcsfootball.org>
4. Campbell, N. D., Rogers, T. M., & Finney, R. Z. (2007). Evidence of television exposure effects in AP top 25 college football rankings. Journal of Sports Economics, 8 (4), 425-434.
5. Coleman, B. J., Gallo, A., Mason, P. M., & Steagall, J. W. (2010). Voter bias in the associated press college football poll. Journal of Sports Economics, 11 (4), 397-417.
6. CollegeFootballPoll.com. Retrieved from <http://www.collegefootballpoll.com/2010_archive_bowls.html>
7. Collins, M. (2009, May 31). College football coaches choose darkness: Coaches Poll changes in 2010. Bleacher Report. Retrieved from <http://bleacherreport.com/articles/189684-college-football-coaches-choose-darkness-coaches-poll-changes-in-2010>
8. Dochterman, S. (2010, January 8). Orange Bowl visit, No. 7 final ranking worth millions in bonuses, raises to football program. The Gazette. Retrieved from <http://thegazette.com/2010/01/08/orange-bowl-win-worth-millions-in-bonuses-raises-to-football-program/>
9. Dodd, D. (2009, July 24). Spurrier’s fraudulent SEC vote makes fraud of coaches poll, too. CBS Sports. Retrieved from <http://www.cbssports.com/collegefootball/story/11982516>
10. Harris Interactive. (n.d.). Retrieved from <http://www.harrisinteractive.com/vault/HI-BCS-HICFP-FAQs-2010-10-15.pdf>
11. Hoorens, V. (1995). Self-favoring biases, self-presentation, and the self-other asymmetry in social comparison. Journal of Personality, 63 (4), 793-817.
12. Hornsey, M. J. (2003). Linking superiority bias in the interpersonal and intergroup domains. The Journal of Social Psychology, 143 (4), 479-491.
13. MacMahon, T. (2010, December 16). Mark Cuban exploring BCS alternative. ESPN Dallas. Retrieved from <http://sports.espn.go.com/dallas/nba/news/story?id=5924399>
14. Mandel, S. (2005, December 7). The real BCS controversy. Want evidence of bias? Just look at coaches’ votes. Sports Illustrated CNN. Retrieved from <http://sportsillustrated.cnn.com/2005/writers/stewart_mandel/12/07/mailbag/index.html>
15. Paul, R. J., Weinbach, A. P., & Coate, P. (2007). Expectations and voting in the NCAA football polls: The wisdom of point spread markets. Journal of Sports Economics, 8 (4), 412-424.
16. Sigelman, C. K., Sigelman, L., Thomas, D. B., & Ribich, F. D. (1986). Gender, physical attractiveness, and electability: An experimental investigation of voter biases. Journal of Applied Social Psychology, 16 (3), 229-248.
17. Taylor, C. R., & Yildirim, H. (2009, June). Public information and electoral bias. Retrieved from <http://econ.duke.edu/~yildirh/elections.pdf>
18. *USA Today*. (2011, January 11). Top 25 Coaches’ Poll. Retrieved from <http://www.usatoday.com/sports/college/football/usatpoll.htm>
19. Wetzel, D., Peter, J., & Passan, J. (2010). *Death to the BCS: The Definitive Case Against the Bowl Championship Series*. Retrieved from <http://www.deathtothebcs.com>
20. Whiteside, K. (2009, May 28). Football coaches to keep poll ballots secret starting in 2010. *USA Today*. Retrieved from: <http://www.usatoday.com/sports/college/football/2009-05-27-coaches-poll-votes_N.htm?loc=interstitialskip>
21. Witte, M. D., & Mirabile, M. P. (2010). Not so fast, my friend: Biases in college football polls. Journal of Sports Economics, 11 (4), 443-455.
### Figures
#### Figure 1
A Model of Coaches’ Bias in Voting
![Figure 1](/files/volume-15/458/figure-1.jpg “A Model of Coaches’ Bias in Voting”)
### Tables
#### Table 1
Voter Composition by Conference
![Table 1](/files/volume-15/458/table-1.png “Voter Composition by Conference”)
#### Table 2
T-Test Results of Own-Team Bias
Year | Mean Difference | Std. Error | df | t-stat | Significance |
---|---|---|---|---|---|
2005 | 1.69 | .40 | 15 | 4.23 | .001 |
2006 | 2.51 | .47 | 18 | 5.29 | .000 |
2007 | 3.12 | .57 | 13 | 5.49 | .000 |
2008 | 1.61 | .47 | 12 | 3.43 | .005 |
2009 | 2.63 | .83 | 11 | 3.19 | .009 |
2010 | 2.38 | .52 | 16 | 4.59 | .000 |
All Years | 2.32 | .22 | 90 | 10.57 | .000 |
#### Table 3
T-Test Results of Own-Conference Bias
Year | Mean Difference | Std. Error | df | t-stat | Significance |
---|---|---|---|---|---|
2005 | 1.03 | .18 | 129 | 5.75 | .000 |
2006 | 0.93 | .17 | 121 | 5.59 | .000 |
2007 | 1.95 | .21 | 130 | 9.43 | .000 |
2008 | 1.09 | .19 | 125 | 5.69 | .000 |
2009 | 1.09 | .21 | 120 | 5.10 | .000 |
2010 | 0.98 | .16 | 116 | 6.25 | .000 |
All Years | 1.19 | .08 | 746 | 15.32 | .000 |
#### Table 4
Descriptive Statistics for Own-Conference Bias
Conference | N | Mean Difference | Std. Error |
---|---|---|---|
ACC | 111 | 1.02 | .17 |
Big 10 | 119 | 1.20 | .17 |
Big 12 | 128 | 0.68 | .17 |
Big East | 52 | 1.61 | .26 |
C-USA | 10 | 0.49 | .45 |
MAC | 5 | -1.19 | .93 |
MWC | 42 | 1.48 | .30 |
Pac 10 | 79 | 1.29 | .29 |
SEC | 175 | 1.25 | .17 |
WAC | 26 | 2.97 | .48 |
#### Table 5
T-Test Results of AQ vs. N-AQ Bias
Year | AQ Mean | N-AQ Mean | Difference | t-stat | Significance |
---|---|---|---|---|---|
2005 | -0.41 | 0.45 | -0.85 | -1.95 | .056 |
2006 | -0.57 | 0.54 | -1.11 | -3.74 | .000 |
2007 | -1.05 | 0.87 | -1.92 | -3.88 | .000 |
2008 | -0.29 | 0.39 | -0.68 | -2.48 | .014 |
2009 | -0.34 | 0.01 | -0.35 | -1.36 | .176 |
2010 | -0.35 | 0.20 | -0.55 | -2.75 | .006 |
All Years | -0.46 | 0.34 | -0.80 | -6.38 | .000 |
### Appendices
#### Appendix A
Coach Composition of Coaches’ Poll
![Appendix A – Part 1](/files/volume-15/458/appendix-a-part1.png “Coach Composition of Coaches’ Poll”)
![Appendix A – Part 2](/files/volume-15/458/appendix-a-part2.png “Coach Composition of Coaches’ Poll”)
![Appendix A – Part 3](/files/volume-15/458/appendix-a-part3.png “Coach Composition of Coaches’ Poll”)
#### Appendix B
Team Rankings in the Coaches’ Polls Analyzed
![Appendix B](/files/volume-15/458/appendix-b.png “Team Rankings in the Coaches’ Polls Analyzed”)
### Authors
#### Michael Stodnick, Ph.D.
Assistant Professor, College of Business
University of Dallas
#### Scott Wysong, Ph.D.
Associate Professor, College of Business
University of Dallas
### Corresponding Author
Scott Wysong, Ph.D.
Associate Professor, College of Business
University of Dallas
1845 E. Northgate Dr.
Irving, TX 75062
<swysong@gsm.udallas.edu>
972-721-5007