Authors: Jeffrey J. Fountain and Peter S. Finley
Corresponding Author:
Jeffrey J. Fountain
Carl DeSantis Building
3301 College Avenue
Fort Lauderdale, FL, 33314-7796
jeffjf@nova.edu
954-262-8129
Jeffrey Fountain, Ph.D. and Peter Finley, Ph.D., are Associate Professors of Sport and Recreation Management at the H. Wayne Huizenga College of Business and Entrepreneurship at Nova Southeastern University.
Roster Survival: An Exploratory Study of College Football Recruits in the Power Five Conferences
ABSTRACT
This study explored the retention of football players at Power Five conference universities between 2002 and 2013. A new metric was created to evaluate roster retention beginning at the time players committed to a university, as opposed to after matriculation as in the more common graduation-rate metrics. Results suggested a large disparity among universities between those that retain recruits through four or more years of college football and those with much higher roster turnover rates, including high rates of commits never appearing on even a single roster. Additionally, the results showed the average number of games football players appeared in during the 12-year time period. The new metric and the results of the study are important for various stakeholders, including providing additional information to prospective college football players during the recruiting process. The metric could also provide athletic department officials with additional data when analyzing their own roster management practices as well as the past roster management practices of potential coaches. The NCAA could also benefit from this new metric, as it adds information to the conversation about athletes in higher education and provides a roster-based viewpoint on the sheer number of athletes that have moved through “Big Time” college football over the years.
Keywords: Roster Management, College Football, High School Recruits, NCAA, Retention, Power Five.
INTRODUCTION
The recruiting process ends for most recruits on National Signing Day, the first day recruits can sign a National Letter of Intent (NLI) with their chosen university. The NLI is a document that “binds a student-athlete to attend the school with which he/she signed” (18, p. 238). Division I college football’s National Signing Day has virtually become a holiday of sorts for the millions of fans who are glued to television sets, recruiting websites, social media, and online discussion boards throughout the day (12). Amid the pomp and circumstance that surrounds the culmination of the commitment process, as high school football players select their universities, little attention is paid to the future of these recruits at their chosen universities (4).
The fans who tune in to see the new signees on National Signing Day may not have an appreciation for the realities college football players face, as the trope about a “free education” is so loosely bandied about (29). Southall and Weiler describe the facade that college sports put on:
- The idyllic settings of most major universities add to a sense that “student-athletes” live in an ivy-covered, academic paradise. Such characterizations, represented during most – if not all – game broadcasts help impede fans’ ability to see big-time college sports’ systemic exploitation (29, p. 178).
Beamon (3) identified the pressure to maintain team performance and increase revenue by recruiting and enrolling superior athletes as a key driver of the exploitation of football players, further placing at odds the roles of student and athlete. Researchers have also shown that the incentive-laden contracts of football coaches often offer much greater rewards for winning games than for graduating players, which leads some to surmise that players are little more than replaceable parts (9). Fountain and Finley (12) found large turnover rates of football players at top FBS schools and applied the term “roster survival” to the process of simply maintaining a roster position from year to year, as football teams replenish their rosters with new commits while remaining within the NCAA’s maximum allowable limit of scholarships.
A seminal work by Adler and Adler (1) showed that athletes’ optimism about their academic pursuits can quickly meet a harsh reality: they become detached from the academic experience, abandon their initial aspirations, and settle for an inferior academic experience as they become consumed by their athletic identity. Recent research has focused on academic clustering, as athletes in a variety of sports have been clustered into only a few academic majors, often in an attempt to ensure eligibility and sometimes in no way reflecting the distribution of non-athletes across academic majors on the same campuses (6,10,11,19,23,24,26,27). Additionally, the distribution of football players across majors is even more limited at universities with more selective admissions criteria (19). This behavior should not be unexpected, as other research has found that athletic programs serve a variety of purposes for their institutions and that their commercialization contributes to revenue generation, increased visibility, student recruitment, and alumni support. Thus, it is no surprise that the pressure to win is relentless and academics can take a back seat (8,31). Through interviews with former student-athletes, Beamon (3) determined that, in spite of earning a degree, most participants did not believe they had a positive collegiate experience or that their education was emphasized by their university.
In response to concerns about academic integrity in college athletics, the focus has been on graduation-based metrics. The federal government and the NCAA each utilize different metrics to measure academic success. The federal government requires all universities that receive federal funds to report the Federal Graduation Rate (FGR) for all students, with completion defined as earning a degree within 150% of the normal time to completion; thus, a six-year window is applied to cohorts of students entering at the beginning of each academic year (17). The NCAA created the Graduation Success Rate (GSR) as an alternative graduation-rate methodology, crediting institutions for incoming transfers or midyear enrollees who graduate and excluding athletes who left the university in good academic standing (25). The NCAA also created the Academic Progress Rate (APR) to provide real-time feedback on athletes’ progress toward a degree, based on the fact that retention and continued eligibility are essential for graduation (14).
The Drake Group, an organization that advocates for academic integrity for all students, argues that the NCAA’s GSR and APR, like any measure that treats athletes as distinct from the general student population and precludes direct comparison between athletes and non-athletes, support the exploitation of athletes (14). A common criticism of these metrics is that they are intended to shield athletes from comparison to the general student body as part of the NCAA’s marketing campaign, while masking the achievement gap between the groups (28). As Gurney and Southall (15) described in their critique,
- Intentionally or not, the NCAA’s APR and GSR metrics confuse the media, fans and the general public. Using the GSR and APR to tout graduation success and increased academic standards is undoubtedly savvy marketing and public relations, but these metrics are fundamentally nothing more than measures of how successful athletic departments are at keeping athletes eligible, and have increasingly fostered acts of academic dishonesty and devalued higher education in a frantic search for eligibility and retention points (para. 17).
While the aforementioned metrics examine persistence toward graduation and graduation rates, they fail to shed light on the issues of roster management and oversigning. Roster management is the practice of decreasing the number of current players on scholarship over the months between National Signing Day and the day rosters are finalized. Roster management is necessitated in large part by the practice of oversigning commits. Bateman (2) defined oversigning as occurring “when a school accepts more signed NLIs than it has student-athletes who are leaving the team before the next season due to graduation, early entry for the National Football League (NFL) Draft, medical reasons, or ineligibility” (p. 11). Staples (30) argued that coaches who oversign have a competitive advantage because “the coaches who signed more players had a chance to erase their mistakes” (para. 19).
Historically, it has been common for some teams to oversign and then need to remove as many as ten players from the scholarship ranks after signing day, by one means or another (16,30). While roster management can be seen as a numbers game, “its implications can be devastating and life-altering for a student-athlete” (20, p. 1). This is particularly the case for players who have been on the team and are being pushed out, through various questionable means, to make way for incoming players deemed more promising or who fit positions of need. Among the means coaches use to push players out are suggesting that a player will receive little playing time going forward and that, perhaps, he would benefit from transferring to another university; claiming a player cannot be medically cleared to play and placing him on a medical hardship waiver; or dismissing players from the team for violations of team rules, including minor violations for which a star player would receive a lighter penalty (13,16). Regarding medical waivers, there is some evidence that team doctors are influenced by the coaching staff and know which players are considered expendable (13).
Oversigning was once especially prevalent in the Southeastern Conference (SEC). For example, between 2007 and 2011, Auburn signed an average of 30.2 commits per class, and Ole Miss and Mississippi State each averaged 28 commits (5). The 2009 recruiting class at Ole Miss had 37 committed players sign, and Coach Houston Nutt infamously said, “There’s no rule that says we can’t sign 80. All I know is we have to have 25 ready to go in August” (30, p. 1). Kansas State had commitments of 26, 30, 34, and 33 players between 2005 and 2008, totaling 123, meaning at least 38 players had to be managed in some way to fit under the 85-scholarship maximum (20). In 2011, the SEC passed a rule limiting signees to 25 per incoming class, down from 28. The NCAA adopted the rule shortly thereafter. Some conferences hold themselves to a higher standard. The Big 10, for example, adopted a rule in 2002 limiting its member universities to signing a maximum of three more commitments than available scholarship spots and requiring them to document how they then arrived at no more than 85 total scholarships when rosters are finalized (30).
PURPOSE OF THE STUDY
The purpose of this study was to explore and gain a better understanding of high school football recruits’ movement from making a commitment to a university’s football program to actually appearing on an initial roster and then subsequent rosters. The researchers sought to produce a roster-based retention metric as well as determine the average number of games in which commits appeared. A term was needed to describe this new retention metric, which tracks commits from a high school recruit’s commitment all the way through to his last appearance on a roster at that university. The new metric was labeled the Athlete Commit Life Cycle (hereafter ACLC). It differs from traditional academic metrics, which begin with enrollment at the university, by stepping back to track high school recruits from the time they make a final commitment to a university. By doing so, the data can account for recruits who make a public commitment but subsequently never make it onto a roster at the university that recruited them.
The researchers took a longitudinal approach by reviewing the football rosters of all the universities in the Power Five conferences (ACC, Big 10, Big 12, Pac 12, and SEC) as of 2016. The study concluded with the 2016 football season, which allowed the inclusion of twelve recruiting classes spanning 2002 to 2013. This allowed the study to explore roster retention over a long period of time rather than a snapshot of a single recruiting class.
The following research questions guided the study:
- Research Question 1: How many rosters did commits appear on during their ACLC?
- Research Question 2: Is there a difference in ACLCs between Power Five conferences and universities?
- Research Question 3: On average, how many games did commits play in during their ACLC?
METHOD
This study focused on the sixty-four universities in the Power Five conferences as of 2016 and the football players who were published commits to one of those universities from 2002 to 2013. Three online sources were utilized to build the large datasets used for the study. First, a Football Commit dataset was built from the published commitment list for each Power Five university for each recruiting class from 2002 to 2013, as reported by Rivals.com. The 2013 incoming class was used as the final recruiting class in the study because this allowed all commits to be tracked through the 2016 football season, ensuring a minimum of four college football rosters (seasons) for each recruit in the dataset. Rivals.com was selected because it was the original provider of recruitment information and is recognized as one of the “Big Four” websites in the recruitment-information industry (7).
The term “commits” is used throughout the study because, while a majority of commits successfully transitioned from commit to player (by appearing on at least one roster), some never appeared on a roster at all. To represent both groups, the term was applied to everyone in the study regardless of whether they transitioned into being a player on the roster.
A Football Roster dataset was then built utilizing the football-statistics section of the NCAA website. The researchers gathered the rosters produced by the NCAA for each Power Five football program for each year from 2002 to 2016. The NCAA statistics rosters also included the official number of games each player appeared in during each season. The Football Roster dataset was much larger than the Football Commit dataset because the NCAA statistics rosters included every player who appeared on a roster, not just recruited commits but also walk-ons and transferred-in players. The Football Roster dataset’s size also reflects multiple roster appearances by the same players. For example, a football player who played for four seasons would appear on four different NCAA statistics rosters. The NCAA has an eligibility timeline rule that states “you have five-calendar years in which to play four seasons of competition” (21, p. 1). The NCAA also has various rules, including redshirting and medical exemptions, that when combined allow commits to extend the number of years in a program beyond five. Therefore, a few players appear on a total of six NCAA statistics rosters.
In order to study the ACLC, two subgroups in the datasets needed to be removed from the final analysis. The first subgroup consisted of commits who excelled on the football field during their first several years at the college level and decided to leave early for the NFL (prior to appearing on a fourth roster) once they were eligible to do so. The NFL website was utilized to build a Drafted Early dataset to determine which commits in the study were drafted into the NFL and in what year. Players drafted early were excluded from the final analysis because they made a personal choice to continue participating in the sport at a higher level, and the researchers did not want to penalize universities for commits leaving before completing their ACLC in order to pursue a roster spot at that higher level.
The second subgroup consisted of commits listed as juniors or seniors on their first roster appearance. This subgroup was treated as Transfer-ins because these players did not appear to come in as high school recruits. The subgroup was removed from the final analysis because the ACLC was based on high school commits and how they fared in roster survival at the university to which they committed directly out of high school. After removing the Drafted Early and Transfer-in subgroups, Excel was utilized to index and match the two datasets to determine how many rosters each football commit appeared on at the university he had originally committed to, as well as how many games he played in.
The researchers then grouped commits by the number of rosters on which they appeared, between zero and six. Based on the NCAA’s “four in five years” eligibility rule, commits were sorted into two groups. Those who stopped appearing on rosters prior to a fourth season (appearing on zero to three rosters) were classified as having incomplete ACLCs, while those who appeared on four or more rosters were classified as having complete ACLCs.
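The matching and classification steps described above can be illustrated with a short script. The sketch below, written in Python with pandas, is illustrative only: the column names and toy data are assumptions made for the example, and the study itself performed the matching in Excel with index-and-match functions rather than in code.

```python
# Illustrative sketch only: column names and toy data are hypothetical; the
# study performed this matching in Excel, not in Python.
import pandas as pd

# Commits (after removing the Drafted Early and Transfer-in subgroups).
commits = pd.DataFrame({
    "university": ["State U", "State U", "Tech"],
    "player":     ["Mike Smith", "John Doe", "Alex Lee"],
    "class_year": [2004, 2005, 2004],
})

# NCAA statistics rosters: one row per player per season.
rosters = pd.DataFrame({
    "university": ["State U"] * 4 + ["Tech"] * 2,
    "player":     ["Mike Smith"] * 4 + ["Alex Lee"] * 2,
    "season":     [2004, 2005, 2006, 2007, 2004, 2005],
})

# Count the seasons each commit appeared on a roster at the university he
# committed to; commits never matched to a roster count as zero rosters.
roster_counts = (rosters.groupby(["university", "player"])["season"]
                        .nunique()
                        .rename("rosters")
                        .reset_index())
tracked = commits.merge(roster_counts, on=["university", "player"], how="left")
tracked["rosters"] = tracked["rosters"].fillna(0).astype(int)

# Four or more rosters = complete ACLC; zero to three = incomplete.
tracked["aclc_complete"] = tracked["rosters"] >= 4
print(tracked)
```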
In terms of games played, as mentioned above, the Football Roster dataset drew on the football-statistics section of the NCAA website. The NCAA Football Statistics Manual defines a game played as follows: “It is a game played if a player is in the lineup for even one play, whether or not he touches the ball” (22, p. 2). Therefore, the researchers utilized the games-played statistic provided on the roster lists but did not attempt to ascertain the amount of time commits actually spent on the field.
Utilizing large datasets from third parties, albeit reliable sources, presented challenges, as names of commits were sometimes recorded differently between and within the datasets. These differences included spelling variations as well as errors. The researchers utilized the Excel add-in Fuzzy Lookup to help find differences such as full first names versus shortened first names, nicknames, typos, misspellings, and punctuation differences. For example, where one dataset listed a commit’s first name as “Mike” and the other listed it as “Michael” with the same last name, the researchers made every attempt to determine whether the records referred to the same commit.
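For readers who want to replicate this step outside of Excel, the following is a minimal sketch of the same fuzzy-matching idea using Python’s standard library. The names, threshold, and helper function are hypothetical and are not part of the study’s procedure; as in the study, every flagged pair would still require manual verification.

```python
# Illustrative only: a rough stand-in for the Excel Fuzzy Lookup step using
# difflib; the names and the 0.8 threshold are hypothetical.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two normalized names."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

commit_name = "Mike Johnson"
roster_names = ["Michael Johnson", "Mike Jonson", "Marcus Johnson"]

# Flag roster names similar enough to warrant a manual check.
candidates = [(name, round(name_similarity(commit_name, name), 2))
              for name in roster_names
              if name_similarity(commit_name, name) >= 0.8]
print(candidates)  # pairs a researcher would then verify by hand
```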
RESULTS
The Football Roster dataset contained 98,731 roster spots, and the Football Commit dataset included 17,500 high school football commits recruited between 2002 and 2013 by one of the sixty-four universities in the Power Five conferences. The Drafted Early (350) and Transferred-in (1,253) subgroups were removed from the final analysis. This reduced the total by 1,603 commits, leaving 15,897 in the final analysis. Table 1 illustrates the percentage breakdown of the roster appearances of commits during their ACLC by conference and shows the distribution of the 15,897 commits. The SEC had the most commits (3,688) and the Big 12 the fewest (2,484) between 2002 and 2013. The third column, labeled “Zero Rosters,” indicates the percentage of commits who never appeared on a single roster for the university that recruited them. As a conference, the Big 12 had the highest percentage of commits who never appeared on a roster, at 17.23%. The variance between conferences dissipates slightly once commits appear on at least one roster, and then there is roughly a ten-percent loss each year from the first to the third year across all conferences.
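As an illustration of how conference-level percentages like those in Table 1 can be derived, the short sketch below groups a few toy records by conference and computes the share of commits at each roster count. The conference labels and counts are placeholders, not the study’s data.

```python
# Illustrative only: toy data standing in for the matched commit records.
import pandas as pd

tracked = pd.DataFrame({
    "conference": ["SEC", "SEC", "SEC", "Big 12", "Big 12", "ACC"],
    "rosters":    [0, 4, 5, 0, 1, 4],
})

# Share of commits within each conference at each roster count (0-6).
pct = (tracked.groupby("conference")["rosters"]
              .value_counts(normalize=True)
              .mul(100)
              .round(2)
              .unstack(fill_value=0.0))
print(pct)
```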
Next, the commits were separated by the number of rosters they appeared on, with those appearing on zero to three rosters placed in the incomplete ACLC group and those appearing on four or more rosters placed in the complete ACLC group. Table 2 shows the differential between the two groups, with 44% of all commits in the study classified as having incomplete ACLCs. The Big 12 was the only conference with a greater percentage of commits with incomplete ACLCs (50.32%) than commits classified as completing their ACLC by appearing on four or more rosters (49.68%). The ACC (59.31%) and Big 10 (59.70%) had the highest percentages of commits persisting for four or more seasons.
Table 3 displays the ten universities within the Power Five conferences with the largest positive percentage differential between commits classified as completing their ACLCs and those classified as having incomplete ACLCs. The table also provides the total number of commits for each university along with the percentages for zero to three roster appearances. This breakdown provides a detailed view of when commits with incomplete ACLCs had their ACLCs end. Northwestern had the highest percentage of football commits with completed ACLCs (80.09%) during the 12-year period of the study, meaning that only 19.91% of its commits failed to make four or more rosters. This gave Northwestern the highest positive differential (60.18%) between completed and incomplete ACLCs in the study.
Table 4 displays the ten universities from the Power Five conferences on the other end of the ACLC spectrum, with the largest negative percentage differential between commits classified as completing their ACLCs and those classified as having incomplete ACLCs. West Virginia had the smallest percentage of football commits with completed ACLCs during the study’s 12-year period, producing a negative differential of 20.30% between its commits with complete ACLCs (39.85%) and those without (60.15%). All universities listed in Table 4 had negative differentials, meaning that during the 12-year recruiting period of the study more than 50% of their commits had their ACLCs end before appearing on a fourth roster. The breakdown of the zero to three roster percentages in Table 4 depicts when commits with incomplete ACLCs had their ACLCs end and shows that six of the ten universities had 20% or more of their commits never make it onto a roster.
Beyond roster appearances, the number of games played by each commit was also recorded. Table 5 provides the weighted average of games played for commits by conference. The table breaks down the average games played based on the number of roster appearances, from zero to six rosters. The weighted average used the percentages of rosters made for all football commits, including those who never appeared on a roster (zero rosters), to reflect the average number of games for all commits at their chosen universities. Overall, the number of games played is similar across conferences, with all commits in the study averaging slightly over 24 games during their ACLC. However, when examined by number of roster appearances, commits who appeared on only one or two rosters saw very few games, averaging just under three games for those who appeared on only one roster and less than four games per season for those who appeared on only two rosters (7.9 games). A commit who appeared on at least four rosters and survived the years of roster management saw a large increase in the average number of games played compared to those who appeared on three rosters, jumping from 17.1 to 36.6 games (an increase of 19.5).
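The weighted average reported in Table 5 follows the simple calculation sketched below. All figures in the sketch are illustrative placeholders rather than the study’s values; the weights are the share of commits at each roster count (including the zero-roster group) and the values are the average games played for that group.

```python
# Illustrative only: placeholder bucket averages and shares, not study data.
avg_games_by_rosters = {0: 0.0, 1: 3.0, 2: 8.0, 3: 17.0, 4: 30.0, 5: 40.0, 6: 45.0}
share_of_commits     = {0: 0.12, 1: 0.10, 2: 0.11, 3: 0.11, 4: 0.28, 5: 0.26, 6: 0.02}

assert abs(sum(share_of_commits.values()) - 1.0) < 1e-9  # shares must sum to 1

# Weighted average games played per commit across all roster counts.
weighted_avg = sum(avg_games_by_rosters[k] * share_of_commits[k]
                   for k in avg_games_by_rosters)
print(f"Weighted average games played per commit: {weighted_avg:.1f}")
```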
DISCUSSION AND CONCLUSION
This exploratory study took a different approach to examining the progression of college athletes in “Big Time” college football programs. Traditionally the focus has been on team-based graduation rates, using academic measures such as the FGR or the GSR to show how many athletes progressed and earned degrees. This study utilized rosters to identify the year athletes stopped being members of the team and thereby examine individual athletes’ roster retention. The importance of this study is that it moves the point of attention from matriculation at the university back to the point at which the player made a commitment to the university, and ostensibly the university to the player. By doing so, we see the number of commits who never made a single roster at their chosen university. This method provides raw data for the ongoing discussion of oversigning. As the results showed, 1,883 (11.85%) of all commits in the study did not appear on a single roster for the university they committed to during the 12-year period of the study.
The issues of oversigning and roster management are complex because the reasons a commit does not make an initial roster, or does not continue on the team after appearing on one or more rosters, vary. There are valid reasons for an athlete to decide to leave the team, there are legitimate reasons for players to be dismissed or to retire from a team, and there are dubious instances in which coaches force a once highly recruited commit off the team (13,16). As an exploratory study with a large dataset, this research did not attempt in-depth investigation of why each commit with an incomplete ACLC did not progress onto the next roster or why some commits never appeared on a single roster; rather, it took a longitudinal approach to view the movement of commits over time.
The results raise the question of why there was a substantial difference between universities in terms of maintaining athletes on rosters. The universities with the largest positive differential between complete and incomplete ACLCs had between 63.33% and 80.09% of their commits appear on four or more rosters, while the universities with the largest negative differential had only between 39.85% and 46.62% of their commits appear on four or more rosters. When comparing Tables 3 and 4, the biggest difference between the two groups is found in the “Zero Rosters” column. Eight of the ten universities with the largest positive ACLC differential in Table 3 had less than 5% of their commits never appear on a single roster, whereas six of the ten universities with the largest negative ACLC differential had 20% or more of their commits never appear on a single roster. Because there are no rules or regulations requiring athletic departments to document what happened to each commit who did not matriculate or never made it onto a roster, further university-by-university research would be required to ascertain why these universities had higher numbers of commits never appear on a single roster. Nevertheless, the ACLC data alone could be valuable information for potential recruits weighing their options among Power Five universities. If future recruits had this information during the recruiting process, coaches from programs with negative ACLC differentials would be pressed to explain why their football programs have had such high turnover rates over the years.
The results of the study also provided the average number of games played by commits in each conference. The overall weighted average was just over 24 games; however, those with shorter ACLCs played in far fewer games. Viewing these results could better prepare future recruits to understand the reality of college football and how years of long hours of practice might not pay off in as many college football games as they expect.
APPLICATION IN SPORT
This study has four distinct applications to sport and its varied stakeholders. 1) High school prospects would benefit from using the new Athlete Commit Life Cycle (ACLC) metric to help determine which programs are best at retaining players over time and from one roster to the next, as compared with programs with high turnover, where commits are more likely to never make even a single roster. 2) Athletic department administrators can use the data to evaluate a football program’s performance with respect to oversigning and player retention, from commitment to the last roster a player makes. 3) The ACLC concept can be studied in other sports, tracking players from the time of commitment to their last roster, and comparisons can be made by sport, gender, and so forth. 4) The NCAA could adopt new rules requiring universities to begin tracking players at the time of commitment, as opposed to at the time of matriculation, for greater transparency of oversigning practices. The NCAA could also require documentation from universities on each athlete who failed to complete his ACLC and the reasons he never made a single roster or was removed from the team after one, two, or three years.
ACKNOWLEDGEMENTS
None
REFERENCES
1. Adler, P., & Adler, P.A. (1985). From idealism to pragmatic detachment: The academic performance of college athletes. Sociology of Education, 58, 241-250.
2. Bateman, J. D. (2011). When the numbers don’t add up: Oversigning in college football. Marquette Sports Law Review, 22(1), 7-23.
3. Beamon, K. (2008). “Used goods”: Former college student-athletes’ perceptions of exploitation by Division I universities. The Journal of Negro Education, 77(4), 352-364.
4. Big stage, small screen. (2015, February 4). Dallas News. Retrieved from http://res.dallasnews.com/interactives/signing-day-images/
5. Botkin, B. (2016). Five years later, SEC not hurt by its football recruiting signing cap. Retrieved from https://www.cbssports.com/college-football/news/five-years-later-sec-not-hurt-by-its-football-recruiting-signing-cap/
6. Case, B., Greer, S., & Brown, J. (1987). Academic clustering in athletics: Myth or reality? Arena Review, 11(2), 48-56.
7. Codrington, K. (2014). An inside look at the complex world of college football recruiting rankings. Retrieved from http://bleacherreport.com/articles/2117325-an-inside-look-at-the-complex-world-of-college-football-recruiting-rankings
8. Donnor, J. (2005). Toward an interest-convergence in the education of African-American football student-athletes in major college sports. Race, Ethnicity, and Education, 8, 45-67.
9. Finley, P. S., & Fountain, J. J. (2010). An investigation of successful Football Bowl Subdivision coaches and the disproportional academic achievement of their White and African-American football players. Academic Leadership Journal, 8(3), pp. 164-176.
10. Fountain, J., & Finley, P. (2009). Academic majors of upperclassmen football players in the Atlantic Coast Conference: An analysis of clustering comparing white and minority players. Journal of Issues in Intercollegiate Athletics, 2, 1-13.
11. Fountain, J., & Finley, P. (2011). Academic clustering: A longitudinal analysis of a Division I football program. Journal of Issues in Intercollegiate Athletics, 4, 24-41.
12. Fountain, J. J., & Finley, P. S. (2017, April). Roster survival: An analysis of retention rates of the top recruiting FBS football programs. Paper presented at the College Sport Research Institute Annual Conference, Columbia, SC.
13. Goodison, B. (2015). Clemson and recruiting and roster management. Retrieved from https://www.shakinthesouthland.com/2015/4/16/8420615/clemson-football-recruiting-roster-management
14. Gurney, G., Lopiano, E., Snyder, D., Willingham, M., Meyer, J., Porto, B., Ridpath, D. B., Sack, A., & Zimbalist, A. (2015). The Drake Group position statement: Why the NCAA Academic Progress Rate (APR) and Graduation Success Rate (GSR) should be abandoned and replaced with more effective academic metrics. Retrieved from https://drakegroupblog.files.wordpress.com/2015/06/academic-metrics-position-paper-2017.pdf
15. Gurney, G.S., & Southall, R.M. (2012, August 9). College sport’s bait and switch. ESPN College Sports. Retrieved from http://espn.go.com/collegesports/story/_/id/8248046/college-sports-programs-find-multitude-ways-game-ncaa-apr
16. Hinton, M. (2013). Oversigning index: On another front it’s still Alabama and everyone else. Retrieved from https://www.cbssports.com/college-football/news/oversigning-index-on-another-front-its-still-alabama-and-everyone-else/
17. LaForge, L., & Hodge, J. (2011). NCAA academic performance metrics: Implications for institutional policy and practice. The Journal of Higher Education, 82(2), 217-235.
18. Love, A., Gonzalez-Sobrino, B., & Hughey, M. W. (2017). Excessive celebration? The racialization of recruiting commitments on college football internet message boards. Sociology of Sport Journal, 34(3), 235-247.
19. Love, A., Watkins, J., & Seungmo, K. (2017). Admissions selectivity and major distribution in big-time college football. Journal of Issues in Intercollegiate Athletics, 10, 1-16.
20. Krohn, A. (2011). Oversigning: An in-depth look into one of college football’s biggest controversies. Retrieved from https://www.gainesvilletimes.com/sports/national-sports/staff-sports-picks/oversigning-an-in-depth-look-into-1-of-college-footballs-biggest-controversies/
21. National Collegiate Athletic Association (NCAA). (2017a). Transfer Terms. Retrieved from http://www.ncaa.org/student-athletes/current/transfer-terms
22. National Collegiate Athletic Association (NCAA). (2017b). Football Statistics Manual. Retrieved from http://www.ncaa.org/championships/statistics/ncaa-football-statisticians-manual
23. Otto, K. (2012). Demonstrating the importance of accuracy in reporting results of academic clustering. Journal for the Study of Sports and Athletes in Education, 6, 293-310.
24. Paule-Koba, A. (2015). Gaining equality in all the wrong areas: An analysis of academic clustering in Women’s NCAA Division I basketball. International Journal of Sport Management, 16, 1-16.
25. Sack, A. L., Park, E.-A., & Theil, R. (2011). Watch the gap: Explaining retention gaps between FBS football players and the general student body. Journal of Issues in Intercollegiate Athletics, 4, 55-73.
26. Sanders, J. P., & Hildenbrand, K. (2010). Major concerns: A longitudinal analysis of student athletes’ academic majors in comparative perspective. Journal of Intercollegiate Sport, 3, 213-233.
27. Schneider, R. G., Ross, S. R., & Fisher, M. (2010). Academic clustering and major selection of intercollegiate student-athletes. College Student Journal, 44, 64-70.
28. Southall, R. (2014). NCAA graduation rates: A quarter-century of re-branding academic success. Journal of Intercollegiate Sport, 7, 120-133.
29. Southall, R., & Weiler, J. (2014). NCAA Division-I athletic departments: 21st century athletic company towns. Journal of Issues in Intercollegiate Athletics, 7, 161-186.
30. Staples, A. (2011). Oversigning offenders won’t be curbed by NCAA’s toothless rule. Retrieved from https://www.si.com/more-sports/2011/01/24/oversigning
31. Upthegrove, T., Roscigno, V., & Charles, C. (1999). Big money collegiate sports: Racial concentration, contradictory pressures, and academic performance. Social Science Quarterly, 80, 718-787.