Submitted by Frederick Wiseman & John Friar

ABSTRACT
The NCAA has become increasingly concerned about the academic well-being of its student-athletes and has adopted a new measure to monitor the academic progress of each collegiate team that grants athletic scholarships. It is called the Academic Progress Rate (APR) and measures the extent to which a team’s student-athletes retain their eligibility and stay in school on a semester-by-semester basis. Lucas and Lovaglia (2005) believed that a ranking of collegiate teams based upon both academic success and athletic performance would be of value to numerous constituencies and developed such a ranking system for collegiate football teams. In this paper, we build upon their work by constructing a statistically based ranking system that utilizes a team’s multi-year APR and its Average Sagarin Rating. For the 2007-2010 academic years investigated, the three top-ranked teams were Ohio State, Boise State, and the University of Florida. The results also revealed a moderate, positive correlation between a team’s multi-year APR and its Average Sagarin Rating.

INTRODUCTION
The NCAA recently implemented a number of changes that were designed to improve the academic well-being of student-athletes. These changes fell into four categories: (i) new initial eligibility standards, (ii) new requirements for two-year college transfers, (iii) new requirements for post-season eligibility, and (iv) new penalty structures and thresholds. Refer to Harrison (2012) for an excellent review of these changes and for a history of the NCAA’s commitment to the academic success of student-athletes. A key component of this academic reform movement has been the development and adoption of the APR for each collegiate sports team that grants athletic scholarships (8). The APR measures the extent to which student-athletes are maintaining their eligibility and staying in school. It represents a significant improvement over previous measures of academic success because it provides specific information on the extent to which current student-athletes are satisfactorily completing the academic requirements that are necessary to obtain a degree.

The APR is a number between 0 and 1000. On a term-by-term basis, each student-athlete receiving athletic aid earns one retention point for staying in school and one eligibility point for remaining academically eligible. Excluded from this calculation are those team members who decide either to leave school to sign a professional contract or to transfer to another school, provided in each instance they were still eligible to compete when they left. The total points earned by team members are divided by the total number of points that could have been earned, and this proportion is then multiplied by one thousand. The resulting figure is the team’s APR score. Defined in this manner, the APR provides an up-to-date measure that can be used to evaluate the academic success and the academic culture of collegiate sports teams at a given point in time. It also allows comparisons to be made between teams playing the same sport throughout the country. Each year, the NCAA reports for each team both a single-year APR and a multi-year APR, the latter based upon the last four seasons.
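
To make the arithmetic concrete, the following minimal sketch in Python computes a single-term APR for a hypothetical roster; the apr() helper and the point totals are ours, for illustration only, and are not part of the NCAA’s own calculation.

```python
def apr(earned_points, possible_points):
    """Academic Progress Rate: points earned divided by points possible, scaled to 1000."""
    return 1000 * earned_points / possible_points

# Hypothetical term for a squad of 80 scholarship student-athletes;
# each athlete can earn 1 retention point and 1 eligibility point per term.
possible = 80 * 2
# Suppose 78 athletes stayed in school and 75 remained academically eligible
# (athletes who left while still eligible are excluded before this count).
earned = 78 + 75
print(round(apr(earned, possible)))  # prints 956
```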

For the 2012-2013 academic year, teams in all sports must have obtained a multi-year APR of at least 900 or have an average single-year APR of at least 930 in the last two academic years in order to be eligible to compete in post-season play. This requirement disqualified the University of Connecticut’s Men’s Basketball team, the 2010-2011 national champions, from participating in the 2012-2013 tournament. The APR requirement becomes more stringent each year until it reaches its desired multi-year standard of 930 in 2015-2016.
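
The 2012-2013 benchmark can be expressed as a simple check, sketched below; the function name and the example values are hypothetical.

```python
def postseason_eligible(multi_year_apr, last_two_single_year_aprs):
    """2012-2013 benchmark: a multi-year APR of at least 900, or an average
    single-year APR of at least 930 over the last two academic years."""
    two_year_avg = sum(last_two_single_year_aprs) / len(last_two_single_year_aprs)
    return multi_year_apr >= 900 or two_year_avg >= 930

# Example: a team with an 893 multi-year APR but strong recent years still qualifies.
print(postseason_eligible(893, [940, 935]))  # True
```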

The use and widespread acceptance of the multi-year APR as an appropriate measure of academic success has had a dramatic effect on college sports teams and athletic departments. It is the single most important measure of academic success for the NCAA and is closely monitored by athletic directors, coaches, and academic advisors. This is especially true at the elite revenue-producing schools competing in football and basketball, where the imposition of penalties can result in a substantial loss of revenue and prestige.

Lucas and Lovaglia (2005) and, more recently, Crotty (2012) indicated that it would be of value to rank college athletic teams using a combined measure of academic success and athletic performance. They argued that such a measure would benefit not only university administrators when evaluating their athletic programs, but also potential student-athletes when choosing a college to attend. Lucas and Lovaglia proposed such a measure for Division 1 football programs and called it the Student-Athlete Performance Rate (SAPR). To arrive at a team’s SAPR score, they added a team’s single-year APR to its Athletic Success Rate (ASR). Unfortunately, the ASR was seriously flawed because it did not provide a valid measure of a team’s athletic performance for a particular period of time; rather, it was designed to measure the “well-being” of a team over some undefined period. For example, the construction of the ASR included such factors as a team’s all-time winning percentage, its average attendance in a particular year, the number of conference championships won in the last five years, and the number of student-athletes who went on to play in the National Football League.

In this paper, we build upon this previous work by describing a statistical ranking system that is based upon valid measures of both academic success and athletic performance. This ranking system is then applied to the 120 members of the NCAA’s Football Bowl Subdivision (FBS). The methods used to create the ranking system are described in the next section.

METHODS
A valid measure of academic success and a valid measure of athletic performance are required to construct an overall ranking of schools. For the academic success measure, we use a school’s multi-year APR score (http://www.ncaa.org). At present, the latest available multi-year APRs are based upon the 2007-2010 seasons. Since the multi-year APR covers a four-year period, the athletic performance measure should cover the same period. Various performance measures were considered before it was decided to use the average of each team’s Sagarin ratings at the end of each of the four seasons. A team’s Sagarin rating is based on three factors: (i) won/loss record, (ii) strength of schedule, and (iii) margin of victory. The Sagarin ratings for college football teams have been published on a weekly basis by USA TODAY since 1998 and have been used to help determine which teams play in the national championship game. The ratings can be found at: http://www.usatoday.com/sports/sagarin-archive.htm.

To obtain an overall ranking of schools, we employ a methodology using two independent standardized z-scores, as previously described by Wiseman et al. (2007). With this methodology, we first have to determine whether a correlation exists between a team’s multi-year APR and its Average Sagarin Rating. If a correlation does exist, we cannot simply compute standardized z-scores for each individual rating and add them together to obtain an overall measure. Instead, we need to construct two standardized z-scores that are independent and that take the correlation into account. For each school, these z-scores are: (i) the standardized z-score for the Average Sagarin Rating and (ii) the standardized z-score for the multi-year APR given the Average Sagarin Rating. The first standardized z-score is computed as follows:

\[
Z_{\mathrm{Sagarin}}(i) \;=\; \frac{\mathrm{Sagarin}_i - \mu_{\mathrm{Sagarin}}}{\sigma_{\mathrm{Sagarin}}}
\]

For the second z-score, we need the expected multi-year APR given the Average Sagarin Rating and the standard deviation of the multi-year APR given the Average Sagarin Rating. To obtain these values, we compute:

\[
E[\mathrm{APR} \mid \mathrm{Sagarin}_i] \;=\; \mu_{\mathrm{APR}} + \rho\,\frac{\sigma_{\mathrm{APR}}}{\sigma_{\mathrm{Sagarin}}}\bigl(\mathrm{Sagarin}_i - \mu_{\mathrm{Sagarin}}\bigr)
\qquad
\sigma_{\mathrm{APR} \mid \mathrm{Sagarin}} \;=\; \sigma_{\mathrm{APR}}\sqrt{1-\rho^{2}}
\]

where µAPR is the average multi-year APR, ρ is the correlation coefficient between the Average Sagarin Rating and the multi-year APR, σAPR is the standard deviation of the multi-year APRs, and µSagarin and σSagarin are the mean and standard deviation of the Average Sagarin Ratings. Given the above, the second z-score is:
\[
Z_{\mathrm{APR} \mid \mathrm{Sagarin}}(i) \;=\; \frac{\mathrm{APR}_i - E[\mathrm{APR} \mid \mathrm{Sagarin}_i]}{\sigma_{\mathrm{APR} \mid \mathrm{Sagarin}}}
\]

Statistical theory concerning bivariate normal distributions tells us that the standardized z-scores for the Average Sagarin Rating and for the multi-year APR given the Average Sagarin Rating each have a mean of 0.0 and a standard deviation of 1.0. Further, since the two z-scores are statistically independent, they can be added together to obtain an overall summed z-score for combined athletic performance and academic success. The higher the value of Z_sum(i) = Z_Sagarin(i) + Z_APR|Sagarin(i), the higher the overall ranking.
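
As an illustration of the method, the sketch below computes the two independent z-scores and their sum for a set of schools. The function and variable names are ours, the example values are made up, and since the paper does not state whether sample or population standard deviations were used, ddof=1 (sample) is an assumption.

```python
import numpy as np

def combined_z_scores(sagarin, apr):
    """Return Z_Sagarin, Z_APR|Sagarin, and their sum for each school.

    `sagarin` holds each school's Average Sagarin Rating and `apr` its
    multi-year APR, in the same order."""
    sagarin = np.asarray(sagarin, dtype=float)
    apr = np.asarray(apr, dtype=float)

    mu_s, sd_s = sagarin.mean(), sagarin.std(ddof=1)   # sample SD; an assumption
    mu_a, sd_a = apr.mean(), apr.std(ddof=1)
    rho = np.corrcoef(sagarin, apr)[0, 1]

    # First z-score: standardized Average Sagarin Rating.
    z_sagarin = (sagarin - mu_s) / sd_s

    # Conditional mean and standard deviation of the APR given the Sagarin
    # rating (bivariate normal), matching the formulas above.
    expected_apr = mu_a + rho * (sd_a / sd_s) * (sagarin - mu_s)
    sd_apr_given = sd_a * np.sqrt(1.0 - rho ** 2)

    # Second z-score: standardized APR given the Sagarin rating.
    z_apr_given = (apr - expected_apr) / sd_apr_given

    return z_sagarin, z_apr_given, z_sagarin + z_apr_given

# Example with made-up values for three schools:
z_sagarin, z_apr_given, z_sum = combined_z_scores([88.4, 76.0, 65.2], [988, 950, 931])
```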

RESULTS
Table 1 presents the Average Sagarin Ratings for the four seasons as well as the multi-year APR score for all 120 schools. The average Sagarin rating (µSagarin) was 70.62 with a standard deviation (σSagarin) of 10.03. The highest average ratings were obtained by the following five universities: Florida (90.85), Alabama (90.82), Oklahoma (89.17), Oregon (88.98), and Ohio State (88.35). The average multi-year APR (µAPR) was 951.97 with a standard deviation (σAPR) of 18.27. The five highest-ranked universities according to their multi-year APR were Northwestern (995), Boise State (989), Duke (989), Ohio State (988), and Rice (986).

Table 1. Average Sagarin Rating and Multi-Year APR for Schools in the Football Bowl Subdivision: 2007-2010

When the individual schools were grouped by conference, the Average Sagarin Ratings of the six major conferences (SEC, Big East, Pac 10, Big 10, Big 12, and ACC) were all greater than those of the five smaller conferences (Mid-American, Sun Belt, Mountain West, Conference USA, and Western Athletic). Similar results were obtained for the academic success measure, except that the Mountain West conference had a higher multi-year APR than three of the major conferences: the Pac 10, the Big East, and the Big 12. These results are shown in Figure 1.

Figure 1. Scatterplot of Average Sagarin Rating and Average Multi-Year APR by Conference: 2007-2010

A correlation of r = .32 (p < .01) was found between the Average Sagarin Rating and the multi-year APR. This finding is similar to results obtained in earlier studies that used graduation rates as the measure of academic success (1,3,6,7,9,10). Universities that ranked highly on both measures were Florida (1st and 19th), Alabama (2nd and 23rd), Oklahoma (3rd and 23rd), Ohio State (5th and 4th), Boise State (10th and 2nd), and TCU (8th and 17th). Given the correlation between the multi-year APR and the Average Sagarin Rating, the two independent z-scores (Z_Sagarin(i) and Z_APR|Sagarin(i)) were obtained and then added together for each of the 120 schools. These scores are presented in Table 2. The top-ranked school for the four-year period was Ohio State, which had the fifth highest Average Sagarin Rating of 88.35 and the fourth highest multi-year APR of 988. Boise State, Florida, Alabama, and Northwestern had the next highest rankings. The Anderson-Darling test was used to test for normality of the two z-scores, and the results revealed that the normality assumption could not be rejected at the 5% level of significance.

Table 2. Comparison of the Top 25 Ranked Schools using Alternative Ranking Methods: 2007-2010*

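The normality check mentioned above can be sketched with SciPy’s Anderson-Darling routine, applied to the two z-score arrays produced by the combined_z_scores() sketch earlier; the variable names are assumptions carried over from that sketch.

```python
from scipy.stats import anderson

# z_sagarin and z_apr_given are the arrays from the combined_z_scores() sketch.
for name, z in [("Z_Sagarin", z_sagarin), ("Z_APR|Sagarin", z_apr_given)]:
    result = anderson(z, dist='norm')
    # Normality is not rejected at the 5% level if the test statistic falls
    # below the critical value paired with the 5.0 significance level.
    idx = list(result.significance_level).index(5.0)
    print(name, result.statistic, "critical value:", result.critical_values[idx])
```
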
The previous ranking of schools gave equal weight to athletic performance and to academic success. Giving equal weight to the two components was originally suggested by Lucas and Lovaglia (2005). However, one could also argue that more weight should be given to the athletic performance measure, since this is a national ranking of football teams. Our methodology allows for that possibility. For example, if we give the first z-score (Z_Sagarin) a weight of .8 and the second z-score (Z_APR|Sagarin) a weight of .2, we can obtain a weighted average of the two z-scores and re-rank the schools based upon this weighted average. The results of such a weighting are shown in Table 3 and, once again, the top-ranked school was Ohio State. It was followed by Florida, Alabama, Oklahoma, and Boise State. Now, however, seven new schools entered the Top 25. These were schools that had relatively strong Average Sagarin Ratings but relatively weak multi-year APRs: South Carolina, Oregon, Oregon State, West Virginia, USC, Auburn, and Texas. The schools that they replaced were Northwestern, the Air Force Academy, Rutgers, Duke, Georgia Tech, Kansas, and Wake Forest, schools generally better known for their academics than for their football success.
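
Continuing the earlier sketch, the weighted re-ranking described above replaces the simple sum with a weighted combination; z_sagarin and z_apr_given are the arrays returned by combined_z_scores(), and the weights are those given in the text.

```python
import numpy as np

# Weights of .8 on athletic performance and .2 on academic success given
# performance; z_sagarin and z_apr_given come from combined_z_scores() above.
w_perf, w_acad = 0.8, 0.2
z_weighted = w_perf * z_sagarin + w_acad * z_apr_given

# Indices of schools ordered from highest to lowest weighted score.
order = np.argsort(-z_weighted)
```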

Sensitivity analyses were conducted to determine how the rankings would have changed if different weights were used. These analyses were conducted for three additional sets of weights, (.9, .1), (.75, .25), and (.7, .3), where the first number is the weight given to the Average Sagarin Rating and the second is the weight given to the multi-year APR given the Average Sagarin Rating. The rank correlations between each of these three weighting schemes and the (.8, .2) scheme originally used were .97, .98, and .91, respectively. This indicates that the rankings were relatively insensitive to the actual weights used when the weight given to the Average Sagarin Rating was .7 or higher.
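
A sketch of that sensitivity check, again building on the arrays from the earlier sketch and using Spearman’s rank correlation from SciPy:

```python
from scipy.stats import spearmanr

# Rank correlation between the (.8, .2) weighting and each alternative scheme;
# z_sagarin and z_apr_given are the arrays from the earlier sketch.
base = 0.8 * z_sagarin + 0.2 * z_apr_given
for w_perf, w_acad in [(0.9, 0.1), (0.75, 0.25), (0.7, 0.3)]:
    alt = w_perf * z_sagarin + w_acad * z_apr_given
    rho_rank, _ = spearmanr(base, alt)
    print(f"({w_perf}, {w_acad}) vs (0.8, 0.2): rank correlation = {rho_rank:.2f}")
```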

DISCUSSION
We have shown that it is possible to construct a combined ranking of athletic teams based upon athletic and academic success. The results of this ranking for the 2007-2010 period indicated a positive correlation between athletic success and academic success. The analysis also revealed that the top-performing football schools are more likely to ensure that their student-athletes stay in school and maintain their eligibility. This makes sense because such schools have the most to lose financially if their teams are not eligible for post-season play. These teams spend a considerable amount of time, effort, and money recruiting promising student-athletes. They also generate a substantial amount of revenue and provide a high level of academic support and facilities for their student-athletes in order to help them maintain their eligibility. Such a high level of support may not be available to student-athletes at poorer-performing schools with fewer resources devoted to their academic well-being.

Additionally, many of these large and successful schools offer specific programs geared to the student-athlete population, increasing the likelihood of academic success. These schools may also be less likely to take a chance on a recruit with significant athletic potential but very little chance of academic success at their school.

CONCLUSION
The ranking of FBS teams was based upon their success in the classroom and their performance on the football field. Rankings for other collegiate sports, both men’s and women’s, including the non-revenue sports, could easily be obtained using the methodology described. In addition, it would be of interest to identify the key factors that lead certain teams to have high APRs while other teams have low APRs. Such an investigation is the subject of future research and should be of value to numerous groups, including the NCAA as it continues its academic reform movement.

APPLICATIONS IN SPORT
Critics and some members of the NCAA have argued that the organization should increase its emphasis on the academic well-being of its student-athletes. This has led to the academic reform movement of recent years. With the methods presented here, the NCAA could recognize schools that excel both on the field and in the classroom. The methods also give member schools a scorecard showing how well their teams are doing on and off the field. The results further indicate that the schools with the weakest football teams in the non-BCS conferences are often also the ones whose teams have the lowest multi-year APRs. Reasons for these differences should be investigated so that corrective actions can be undertaken to increase the likelihood of academic success for all student-athletes.

REFERENCES

1. Comeau, E. (2005). Predictors of academic achievement among student-athletes in the revenue-producing sports of men’s basketball and football. The Sport Journal, 8. Available at: http://www.thesportjournal.org/article/predictors-academic-achievement-among-student-athletes-revenue-producing-sports-mens-basketb.

2. Crotty, J. M. (2012). When it comes to academics, football crushes basketball. Available at: http://www.forbes.com/sites/jamesmarshallcrotty/2012/01/05/northwestern-vs-rutgers-in-bcs-championship-if-education-was-yardstick/.

3. DeBrock, L., Hendricks, W., & Koenker, R. (1996). The economics of persistence: Graduation rates of athletes as labor market choice. Journal of Human Resources, 31, 513-539.

4. Harrison, W. (2012). NCAA academic performance program (APP): Future directions. Journal of Intercollegiate Sport, 5, 65-82.

5. Lucas, J. W. & Lovaglia, M. J. (2005). Can academic progress help collegiate football teams win? The Sport Journal, 8. Available at: http://www.thesportjournal.org/article/can-academic-progress-help-collegiate-football-teams-win.

6. Mangold, W. D., Bean, L., & Adams, D. (2003). The impact of intercollegiate athletics on graduation rates among major NCAA Division I universities: Implications for college persistence theory and practice. The Journal of Higher Education, 74, 540-562.

7. Mixon, F. G. & Trevino, L. J. (2005). From kickoff to commencement: the positive role of intercollegiate athletics in higher education. Economics of Education Review, 24, 97-102.

8. NCAA. (2012). Academic Progress Rate. Available at: http://www.ncaa.org/wps/wcm/connect/public/NCAA/Academics/Division+I/Academic+Progress+Rate.

9. Rishe, P. J. (2003). A reexamination of how athletic success impacts graduation rates: Comparing student-athletes to all other undergraduates. American Journal of Economics and Sociology, 62 (April). Available at: http://onlinelibrary.wiley.com/doi/10.1111/1536-7150.00219/pdf.

10. Tucker, I. B. (2004). A reexamination of the effects of big-time football and basketball success on graduation rates and alumni giving rates. Economics of Education Review, 23, 655-661.

11. Wiseman, F., Habibullah, M. & Yilmaz, M. (2007). A new method for ranking total driving performance on the PGA Tour. The Sport Journal, 1. Available at: http://www.thesportjournal.org/article/new-method-ranking-total-driving-performance-pga-tour.
