Authors: Katie M. Sell, Ph.D., CSCS, TSAC-F, ACSM EP-C
Department of Health Professions, Hofstra University, NY
Jamie Ghigiarelli, Ph.D., CSCS, USAW, CISSN
Department of Health Professions, Hofstra University, NY
Corresponding Author: Katie M. Sell, Ph.D., CSCS, TSAC-F, ACSM EP-C
Department of Health Professions, 101 Hofstra Dome, 220 Hofstra University, Hempstead, NY 11549
Comparison of Laboratory and Field-Based Predictors of 5-km Race Performance in Division I Cross-Country Runners
ABSTRACT Purpose: The purpose of this study was to examine the predictive capabilities of laboratory-based (VO2max, VO2@VT) versus field-based performance variables (2-mile trial time; 2-MTT) in determining 5-km performance time in collegiate cross-country runners. Methods: Twenty Division I college cross-country runners completed a 2-MTT on an outdoor track, a VO2max test under controlled laboratory settings, and a 5-km run under competitive conditions. All tests were completed within a 10-day timeframe. Oxygen uptake during the VO2max test was measured during treadmill running using open circuit spirometry. Oxygen consumption at ventilatory threshold (VO2@VT) was determined using the ventilatory equivalent method. Results: Significant correlations were observed between each predictor variable and 5-km performance time. Regression analyses revealed that 2-MTT and VO2@VT contributed significantly to predicting 5-km race performance (r2 = 0.90, p<0.05). Conclusions: For the highly trained runners in this study, 2-MTT and VO2@VT are among the variables best able to predict 5-km race performance, and accounted for a similar magnitude of variance in 5-km performance time. Applications in Sport: A 2-MTT is cheaper, quicker, and more feasible to administer than a VO2max test to determine VT during the short pre-season and intensive in-season inherent in collegiate cross-country schedules. Given the results of this study, the 2-MTT may present an attractive alternative to laboratory testing as a means to monitor cross-country runners' progress throughout a season.
U.S. Sports Academy | September 21st, 2017 | Sports Coaching
Submitted by Shelley L. Holden, Steven F. Pugh, Phillip M. Norrell and Christopher M. Keshock
Alabama has one of the highest rates of obesity in the U.S., and nutritional knowledge may be a factor in those statistics. Recent studies found more than a third of U.S. adults and approximately 17% of U.S. youth were obese in 2009-2010. In 1986, Alabama's obesity rate was less than 10%, compared to more than 30% in 2010. The reasons cited for the increase included lack of nutritional knowledge. The purpose of this study was to determine the nutritional knowledge of undergraduate college students at one university in Alabama. The 229 participants (87 male, 142 female) were undergraduates enrolled in health and physical education courses at a state university. None had previously taken a college nutrition course. Ages ranged from 18 to 58 (M = 22.3). There were 40 freshmen, 50 sophomores, 85 juniors, 38 seniors, 7 fifth-year seniors, and 9 non-degree students. Nutritional knowledge was assessed using the Nutrition Knowledge Questionnaire (NKQ). The NKQ meets psychometric criteria for reliability (Cronbach's alpha = .70-.97) and construct validity (P = .001). The NKQ is divided into subscales: Dietary Recommendations (DR), Sources of Foods/Nutrients (SOFN), Choosing Everyday Foods (CEF), Diet-Disease Relationships (DDR), and a Total Score (TS). The survey was administered on the first day of class. Results indicated a lack of nutritional knowledge in all subscales of the NKQ. The mean scores were 6.98 (63.4%) on the DR, 35.3 (51.1%) on the SOFN, 4.1 (41%) on the CEF, 5.1 (25.5%) on the DDR, and 51.5 (46.8%) for the TS. Nutritional knowledge has been cited as a factor in increasing rates of obesity, and by falling far short of an acceptable level on all the subscales, the participants' scores are a serious concern. Students lacked the nutritional knowledge to make good dietary choices. The researchers recognize that other factors (genetics, physiology, exercise) play a role in obesity. However, students must be better educated in nutrition.
Further, nutritional education guidelines as set by the State Course of Study need to be examined.
It is estimated that 300,000 people in the United States die each year as a result of conditions related to obesity, and more than 60% of adolescents and adults are underactive (2). Obesity is also a major concern in the United States due to the rate at which it is increasing in the general population. In 2000, no state had a prevalence of obesity less than 10%, 23 states had a prevalence between 20-24%, and none had a prevalence greater than 25% (1). By 2010, however, no state had a prevalence of obesity less than 20%, 36 states had a prevalence equal to or greater than 25%, and 12 of these states (Alabama, Arkansas, Kentucky, Louisiana, Michigan, Mississippi, Missouri, Oklahoma, South Carolina, Tennessee, Texas, and West Virginia) had a prevalence equal to or greater than 30% (1, 2). Further, more than one third of adults and approximately 17% of U.S. youth were considered obese in 2009-2010 (5).
This is of great concern because of the rising costs of healthcare associated with the chronic diseases related to obesity. On average, obesity costs the U.S. health care system $117 billion per year in direct medical costs, a figure that does not include indirect expenses such as lost wages and decreased productivity (2).
Alabama has not been immune to this increase. In 1986, Alabama had an obesity rate of less than 10%, compared to more than 30% in 2010 (1-3). Of further concern are the variables behind this rising rate. Prior research has identified lack of nutritional knowledge as a potential contributor to the increasing prevalence of obesity within the state. Therefore, the purpose of the current study was to determine the nutritional knowledge of Alabama undergraduate college students with no prior nutrition course at the college level.
The 229 participants (87 male, 142 female) in this study were undergraduates enrolled in health and physical education courses at a state university. None of the participants in the study had previously taken a college nutrition course.
For the purposes of this study, obesity was defined as a body mass index (BMI) greater than 30, where BMI is calculated as weight in kilograms divided by height in meters squared, rounded to one decimal place (4).
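The BMI definition above is straightforward to automate. The following Python sketch (illustrative only; the function names and example numbers are not from the study) applies the study's definition and cutoff:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared,
    rounded to one decimal place, per the study's definition."""
    return round(weight_kg / height_m ** 2, 1)

def is_obese(weight_kg: float, height_m: float) -> bool:
    """Obesity cutoff used in this study: BMI greater than 30."""
    return bmi(weight_kg, height_m) > 30

print(bmi(95.0, 1.75))       # 31.0
print(is_obese(95.0, 1.75))  # True
```

A person weighing 95 kg at 1.75 m tall would thus be classified as obese under this definition.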
Nutritional knowledge was assessed using the Nutrition Knowledge Questionnaire (NKQ) developed by Parmenter and Wardle (6). The NKQ meets psychometric criteria for reliability (Cronbach's alpha = .70-.97) and construct validity (P = .001). Validity and reliability studies have been conducted on the questionnaire as a whole, as well as on each section separately (6).
The NKQ is divided into four independent sections and a total score: Dietary Recommendations (DR), Sources of Foods/Nutrients (SOFN), Choosing Everyday Foods (CEF), Diet-Disease Relationships (DDR), and Total Score (TS). Each correct answer in the section carried a point value of one and each section also had a corresponding maximum score (Section I- DR= 11, Section II- SOFN= 69, Section III- CEF= 10, Section IV- DDR= 20 and TS= 110).
Approval for the study was obtained from the Institutional Review Board (IRB) of the researcher’s university. The survey was administered the first day of class before any type of nutrition lesson was taught. The researcher eliminated all surveys of participants who had previously taken a nutrition course at the college level (junior college or 4-year college), those who were graduate students, and those who did not complete all questions on the instrument. Therefore, 116 participants were omitted from this study.
The dependent variables in this study were the section scores of the NKQ (Dietary Recommendations (DR), Sources of Foods/Nutrients (SOFN), Choosing Everyday Foods (CEF), and Diet-Disease Relationships (DDR)) and the Total Score (TS). The independent variable was enrollment in the health and physical education courses offered at the university.
Table 1 presents the demographic characteristics of the sample. Ages of the participants ranged from 18 to 58 (M = 22.3). Thirty-four (14.8%) of the participants were on an intercollegiate athletics team at the university and 195 (85.2%) were not. In terms of nutritional knowledge, results indicated a deficit in all sections of the NKQ. The mean score was 6.98 (63.4%) on the DR section, which measures dietary recommendations; this indicates that the participants had little knowledge of the categories of food selections, the recommended servings for those categories, or what is considered a portion among the various categories. The mean score was 35.3 (51.1%) on the SOFN section, which measures sources of nutrients in foods. The mean score was 4.1 (41%) on the CEF section, which measures the ability to distinguish healthy from unhealthy everyday food choices. The mean score was 5.1 (25.5%) on the DDR section, which measures knowledge of diet-disease relationships, such as the link between saturated fat intake and health problems. Finally, the mean total score on the instrument was 51.5 (46.8%), a measure of overall knowledge.
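As a quick check, the reported percentages can be reproduced by dividing each mean raw score by the corresponding section maximum from the instrument description. The snippet below (Python; illustrative, not part of the study) recovers the reported figures to within rounding:

```python
# Section maxima from the NKQ description and the mean raw scores
# reported for this sample.
maxima = {"DR": 11, "SOFN": 69, "CEF": 10, "DDR": 20, "TS": 110}
means = {"DR": 6.98, "SOFN": 35.3, "CEF": 4.1, "DDR": 5.1, "TS": 51.5}

for section in maxima:
    pct = 100 * means[section] / maxima[section]
    print(f"{section}: {means[section]} / {maxima[section]} = {pct:.1f}%")
```

For example, 6.98 of a possible 11 points on the DR section corresponds to roughly 63%, matching the figure reported above.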
Nutritional knowledge has been cited as a factor in the rising rates of obesity, and the results of the current study are consistent with that claim. Moreover, the undergraduates' scores falling far short on all the sections could indicate a general lack of nutritional knowledge and, therefore, a serious concern with regard to the rising rates of obesity in the state. Students in the current study lacked the nutritional knowledge to make sound dietary choices. The researchers recognize that factors other than nutritional knowledge, such as genetics, physiology, mindfulness, and exercise, also play a role in obesity, but the lack of nutritional knowledge of the students in this study cannot be ignored.
Nutritional education guidelines as set by the State Course of Study in elementary, middle, and high school need to be examined to ensure adequate coverage of this vital topic if the rate of obesity is to be halted or, preferably, lowered. It is also imperative that teachers instructing health courses actually follow and meet the standards set forth in the courses of study. Adequate preparation of teachers is also an issue, as noted by Graves, Farthing, Smith, and Turchi (3) and Scofield and Unruh (7). Sport coaches, who often lack sufficient nutritional knowledge, tend to teach health and nutrition courses and/or provide student-athletes with nutritional information that may be incorrect or insufficient. This is of grave concern because research has cited high school coaches as one of the most likely sources from whom students seek nutritional information (2, 7).
Applications in Sport
It is vital that we ensure that the information provided to students regarding nutrition is accurate and that there is an identifiable source for this information in the course of study.
Future research might examine the degree to which nutrition is covered in the state courses of study, and the degree to which the standards in the course of study are met within K-12 classes. Another worthwhile inquiry would be to use body composition rather than body mass index (BMI) as the measure of obesity, as body composition, not BMI, is the major health concern. BMI, particularly in athletes, may produce false-positive readings for overweight, as many athletes' musculature would classify them as overweight by BMI even when their body fat is well within a healthy range.
1. Centers for Disease Control and Prevention. (2012). Obesity trends among U.S. adults between 1985-2010. Atlanta, GA: Author.
2. Edwards, B. (2005). Childhood obesity: a school-based approach to increase nutritional knowledge and activity levels. Nursing Clinics of North America, 40, 661-669. doi:10.1016/j.cnur.2005.07.006
3. Graves, K. L., Farthing, M. C., Smith, S. A, & Turchi, J. M. (1991). Nutritional training, attitudes, knowledge, recommendations, responsibility, and resource utilization of high school coaches and trainers. Journal of the American Dietetic Association, 91(3), 321-324.
4. National Center for Health Statistics. (2010, December). Obesity and socioeconomic status in adults: United States, 2005-2008 (Issue Brief No. 50). Hyattsville, MD: Ogden, C.L., Lamb, M. M., Carroll, & Flegal, K. M.
5. National Center for Health Statistics. (2012, January). Prevalence of obesity in the United States, 2009-2010 (Issue Brief No. 82). Hyattsville, MD: Ogden, C.L., Carroll, M. D., Kit, B. K., & Flegal, K. M.
6. Parmenter, K., & Wardle, J. (1999). Development of a general nutrition knowledge questionnaire for adults. European Journal of Clinical Nutrition, 53, 298-308.
7. Scofield, D. E., & Unruh, S. (2006). Dietary supplement use among high school athletes in central Nebraska and their sources of information. Journal of Strength and Conditioning Research, 20(2), 452-455. doi:10.1519/R-16984.1
The NCAA has become increasingly concerned about the academic well-being of its student-athletes and has adopted a new measure to monitor the academic progress of each collegiate team that grants athletic scholarships. It is called the Academic Progress Rate (APR) and measures the extent to which a team's student-athletes retain their eligibility and stay in school on a semester by semester basis. Lucas and Lovaglia (2005) believed that the ranking of collegiate teams based upon both academic success and athletic performance would be of value to numerous constituencies and developed such a ranking system for collegiate football teams. In this paper, we build upon their work by constructing a statistically based ranking system that utilizes a team's multi-year APR and their Average Sagarin Rating. During the 2007-2010 academic years that were investigated, the three top ranked teams were Ohio State, Boise State and the University of Florida. The results also revealed a moderate and positive correlation between a team's multi-year APR and its Average Sagarin Rating.
The NCAA recently implemented a number of changes that were designed to improve the academic well-being of student-athletes. These changes fell into four categories: (i) new initial eligibility standards, (ii) new requirements for two-year college transfers, (iii) new requirements for post-season eligibility, and (iv) new penalty structures and thresholds. Refer to Harrison (2012) for an excellent review of these changes and for a history of the NCAA’s commitment to the academic success of student-athletes. A key component of this academic reform movement has been the development and adoption of the APR for each collegiate sports team that grants athletic scholarships (8). The APR measures the extent to which student-athletes are maintaining their eligibility and staying in school. It represents a significant improvement over previous measures of academic success because it provides specific information on the extent to which current student-athletes are satisfactorily completing the academic requirements that are necessary to obtain a degree.
The APR is a number between 0 and 1000. On a term-by-term basis, each student-athlete receiving athletic aid earns one retention point for staying in school and one eligibility point for remaining academically eligible. Excluded from this calculation are those team members who decide either to leave school to sign a professional contract or to transfer to another school, provided in each instance they were still eligible to compete when they left school. The total points earned by team members are divided by the total number of points that could be earned, and this proportion is then multiplied by one thousand. The resulting figure is the APR score for the team. Defined in this manner, the APR provides an up-to-date measure that can be used to evaluate the academic success and the academic culture of collegiate sports teams at a given point in time. It also allows comparisons to be made between teams playing the same sport throughout the country. Each year, the NCAA reports for each team both a single-year APR and a multi-year APR, based upon the last four seasons.
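The APR calculation described above can be sketched as follows (Python; the roster numbers are hypothetical, not NCAA data):

```python
def apr(points_earned: int, points_possible: int) -> int:
    """Academic Progress Rate: earned points over possible points,
    scaled to a 0-1000 range and rounded to the nearest integer."""
    return round(1000 * points_earned / points_possible)

# Hypothetical team: 30 scholarship athletes over a two-term year, each
# able to earn 1 retention point + 1 eligibility point per term.
possible = 30 * 2 * 2          # 120 possible points
earned = 111                   # a few retention/eligibility points were lost
print(apr(earned, possible))   # 925
```

A perfect score, with every athlete retained and eligible in every term, yields 1000.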
For the 2012-2013 academic year, teams in all sports must have obtained a multi-year APR of at least 900 or have an average single-year APR of at least 930 in the last two academic years in order to be eligible to compete in post-season play. This requirement disqualified the University of Connecticut’s Men’s Basketball team, the 2010-2011 national champions, from participating in the 2012-2013 tournament. The APR requirement becomes more stringent each year until it reaches its desired multi-year standard of 930 in 2015-2016.
The use and wide-spread acceptance of the multi-year APR as an appropriate measure of academic success has had a dramatic effect on college sports teams and athletic departments. It is the single most important measure of academic success for the NCAA and is closely monitored by athletic directors, coaches, and academic advisors. This is especially true at the elite revenue producing schools competing in football and basketball where the imposition of penalties can result in a substantial loss of revenue and prestige.
Lucas and Lovaglia (2005) and, more recently, Crotty (2012), indicated that it would be of value to rank college athletic teams using a combined measure of academic success and athletic performance. They argued that such a measure would be of benefit not only to university administrators when evaluating their athletic programs, but also to potential student-athletes when choosing a college to attend. Lucas and Lovaglia proposed such a measure for Division I football programs and called it the Student-Athlete Performance Rate (SAPR). To arrive at a team's SAPR score, they added a team's single-year APR to its Athletic Success Rate (ASR). Unfortunately, the ASR was seriously flawed because it did not provide a valid measure of a team's athletic performance for a particular period in time; rather, it was designed to measure the "well-being" of a team over some undefined period. For example, the construction of the ASR included such factors as a team's all-time winning percentage, its average attendance in a particular year, the number of conference championships won in the last five years, and the number of student-athletes that went on to play in the National Football League.
In this paper, we build upon this previous work by describing a statistical ranking system which is based upon valid measures of both athletic success and athletic performance. This ranking system is then applied to the 120 members of the NCAA’s Football Bowl Subdivision (FBS). The methods used to create such a ranking system are described in the next section.
A valid measure of academic success and a valid measure of athletic performance are required to construct an overall ranking of schools. For the academic success measure, we use a school's multi-year APR score (http://www.ncaa.org). At present, the latest available multi-year APRs are based upon the 2007-2010 seasons. Since the multi-year APR is based upon a four-year period of time, the athletic performance measure used should cover this same time period. Various performance measures were considered before it was decided to use the average of the Sagarin ratings at the end of each season for each team. A team's Sagarin rating is based on three factors: (i) won/loss record, (ii) strength of schedule, and (iii) margin of victory. The Sagarin ratings for college football teams have been published on a weekly basis by USA TODAY since 1998 and have been used to help determine which teams will play in the national championship game. The ratings can be found at: http://www.usatoday.com/sports/sagarin-archive.htm.
To obtain an overall ranking of schools, we employ a methodology using two independent standardized z-scores, as previously described by Wiseman et al. (2007). With this methodology, we first have to determine whether a correlation exists between a team's multi-year APR and its Average Sagarin Rating. If a correlation does exist, we cannot simply compute standardized z-scores for each individual rating and add them together to obtain an overall measure. Instead, we need to construct two standardized z-scores which are independent and which take the correlation into account. For each school i, these z-scores are: (i) the standardized z-score for the Average Sagarin Rating and (ii) the standardized z-score for the multi-year APR given the Average Sagarin Rating. The first standardized z-score is computed as follows:

Z_Sagarin(i) = (Sagarin(i) − µ_Sagarin) / σ_Sagarin

where Sagarin(i) is school i's Average Sagarin Rating, and µ_Sagarin and σ_Sagarin are the mean and standard deviation of the Average Sagarin Ratings across all schools.
For the second z-score, we need the expected multi-year APR given the Average Sagarin Rating and the standard deviation of the multi-year APR given the Average Sagarin Rating. Under a bivariate normal model, these are:

E[APR | Sagarin(i)] = µ_APR + ρ (σ_APR / σ_Sagarin) (Sagarin(i) − µ_Sagarin)

σ_APR|Sagarin = σ_APR √(1 − ρ²)

where µ_APR is the average multi-year APR, ρ is the correlation coefficient between the Average Sagarin Rating and the multi-year APR, and σ_APR is the standard deviation of the multi-year APRs. Given the above, the second z-score is:

Z_APR|Sagarin(i) = (APR(i) − E[APR | Sagarin(i)]) / σ_APR|Sagarin
Statistical theory concerning bivariate normal distributions tells us that the standardized z-scores for the Average Sagarin Rating and for the multi-year APR given the Average Sagarin Rating will each have a mean of 0.0 and a standard deviation of 1.0. Further, since the two z-scores are statistically independent, they can be added together to obtain an overall summated z-score for combined athletic performance and academic success. The higher the overall value of Zsum = Z_Sagarin(i) + Z_APR|Sagarin(i), the higher the overall ranking.
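The two-z-score construction can be sketched numerically. The snippet below (Python with NumPy; the data are simulated stand-ins for the 120 FBS teams, loosely matching the summary statistics reported in the Results, not the actual team data) shows that conditional standardization leaves the two scores uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the 120 FBS teams (means, SDs, and a
# correlation near .32, as reported in the Results).
sagarin = rng.normal(70.62, 10.03, size=120)
apr = (951.97
       + 0.32 * (18.27 / 10.03) * (sagarin - 70.62)
       + rng.normal(0, 18.27 * np.sqrt(1 - 0.32 ** 2), size=120))

mu_s, sd_s = sagarin.mean(), sagarin.std(ddof=1)
mu_a, sd_a = apr.mean(), apr.std(ddof=1)
rho = np.corrcoef(sagarin, apr)[0, 1]

# First z-score: standardized Average Sagarin Rating.
z_sagarin = (sagarin - mu_s) / sd_s

# Second z-score: APR standardized against its conditional mean and
# SD given the Sagarin rating, which removes the shared correlation.
expected_apr = mu_a + rho * (sd_a / sd_s) * (sagarin - mu_s)
sd_apr_given = sd_a * np.sqrt(1 - rho ** 2)
z_apr_given = (apr - expected_apr) / sd_apr_given

z_sum = z_sagarin + z_apr_given
ranking = np.argsort(-z_sum)   # team indices, best combined score first
print(ranking[:5])
```

Because the second score is residualized on the first, the sample correlation between z_sagarin and z_apr_given is zero up to floating point, so the two components can be summed without double-counting athletic success.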
Table 1 presents the Average Sagarin Ratings for the four seasons as well as the multi-year APR score for all 120 schools. The average Sagarin rating (µ_Sagarin) was 70.62 with a standard deviation (σ_Sagarin) of 10.03. The highest average ratings were obtained by the following five universities: Florida (90.85), Alabama (90.82), Oklahoma (89.17), Oregon (88.98), and Ohio State (88.35). The average multi-year APR (µ_APR) was 951.97 with a standard deviation (σ_APR) of 18.27. The five highest ranked universities according to their multi-year APR were Northwestern (995), Boise State (989), Duke (989), Ohio State (988), and Rice (986).
Table 1. Average Sagarin Rating and Multi-Year APR for Schools in the Football Bowl Subdivision: 2007-2010
When the individual schools were grouped by conference, the Average Sagarin Ratings of the six major conferences (SEC, Big East, Pac 10, Big 10, Big 12 and ACC) were all greater than those of the five smaller conferences (Mid-American, Sun Belt, Mountain West, Conference USA and Western Athletic). In terms of the academic success measure, similar results were obtained except for the Mountain West conference, which had a higher multi-year APR than three of the major conferences (the Pac 10, the Big East, and the Big 12). These results are shown in Figure 1.
Figure 1. Scatterplot of Average Sagarin Rating and Average Multi-Year APR by Conference: 2007-2010
A correlation of r = .32 (p < .01) was found between the Average Sagarin Rating and the multi-year APR. This finding is similar to results obtained in earlier studies that used graduation rates as the measure of academic success (1,3,6,7,9,10). Universities that ranked highly on both measures were Florida (1st and 19th), Alabama (2nd and 23rd), Oklahoma (3rd and 23rd), Ohio State (5th and 4th), TCU (8th and 17th), and Boise State (10th and 2nd). Given the correlation that existed between the multi-year APR and the Average Sagarin Rating, the two independent z-scores (Z_Sagarin(i) and Z_APR|Sagarin(i)) were obtained and then added together for each of the 120 schools. These scores are presented in Table 2; the top ranked school for the four-year period was Ohio State, which had the fifth highest Average Sagarin Rating of 88.35 and the fourth highest multi-year APR of 988. Boise State, Florida, Alabama, and Northwestern had the next highest rankings. The Anderson-Darling test was used to test for normality of the two z-scores, and the test results revealed that the normality assumption could not be rejected at the 5% level of significance.
Table 2. Comparison of the Top 25 Ranked Schools using Alternative Ranking Methods: 2007-2010*
The previous ranking of schools gave equal weight to athletic performance and to academic success. Giving equal weight to each component was originally suggested by Lucas and Lovaglia (2005). However, one could also argue that more weight should be given to the athletic performance measure, since this is a national ranking of football teams. Our methodology allows for that possibility. For example, if we give the first z-score (Z_Sagarin) a weight of .8 and the second z-score (Z_APR|Sagarin) a weight of .2, we can obtain a weighted average of the two z-scores and re-rank the schools based upon this weighted average. The results of such a weighting are shown in Table 3 and, once again, the top ranked school was Ohio State. It was followed by Florida, Alabama, Oklahoma, and Boise State. Now, however, seven new schools entered the Top 25. These were schools with relatively strong Average Sagarin Ratings but relatively poorer multi-year APRs: South Carolina, Oregon, Oregon State, West Virginia, USC, Auburn, and Texas. The schools that they replaced were Northwestern, Air Force Academy, Rutgers, Duke, Georgia Tech, Kansas, and Wake Forest. These latter schools are generally better known for their academics than for their football success.
Sensitivity analyses were conducted to determine how the rankings would have changed if different weights were used. These analyses were conducted for three sets of weights: (.9, .1), (.75, .25), and (.7, .3), where the first number is the weight given to the Average Sagarin Rating and the second is the weight given to the multi-year APR given the Average Sagarin Rating. The rank correlations between each of these three weighting schemes and the (.8, .2) scheme originally used were .97, .98, and .91, respectively. This indicates that the rankings were relatively insensitive to the actual weights used when the weight given to the Average Sagarin Rating was .7 or higher.
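A sensitivity analysis of this kind can be reproduced in outline. The sketch below (Python with NumPy; the z-scores are simulated stand-ins, not the actual team data) compares weighted rankings against the (.8, .2) baseline via Spearman rank correlation:

```python
import numpy as np

def rankdata(x):
    """Rank values from 1 (smallest) to n; ties are unlikely for
    continuous scores like these."""
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return np.corrcoef(rankdata(a), rankdata(b))[0, 1]

rng = np.random.default_rng(1)
z_sagarin = rng.normal(size=120)     # stand-ins for the two independent
z_apr_given = rng.normal(size=120)   # z-scores of the 120 FBS teams

def weighted_sum(w):
    """Combined score giving weight w to Sagarin, (1 - w) to APR."""
    return w * z_sagarin + (1 - w) * z_apr_given

baseline = weighted_sum(0.8)         # the (.8, .2) scheme used above
for w in (0.9, 0.75, 0.7):
    print(f"({w}, {round(1 - w, 2)}): rho = {spearman(baseline, weighted_sum(w)):.2f}")
```

Even with independent simulated components, the rank correlations stay high across these weightings, mirroring the insensitivity reported above.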
We have shown that it is possible to construct a combined ranking of athletic teams based upon athletic performance and academic success. The results of this ranking for the 2007-2010 period indicated a positive correlation between athletic success and academic success. The analysis also revealed that the top performing football schools are more likely to ensure that their student-athletes stay in school and maintain their eligibility. This makes sense because such schools have the most to lose financially if their teams are not eligible for post-season play. These teams also spend a considerable amount of time, effort, and money recruiting promising student-athletes. They also generate a substantial amount of money and provide a high level of academic support and facilities for their student-athletes to help them maintain their eligibility. Such a high level of support may not be available for student-athletes at poorer performing schools with fewer resources devoted to their academic well-being.
Additionally, many of these large and successful schools offer specific programs geared to the student-athlete population, thus increasing the likelihood of their academic success. Similarly, these schools may also be less likely to take a chance on a potential student-athlete with significant athletic potential, but with very little chance of academic success at their school.
The ranking of FBS teams was based upon their success in the classroom and their performance on the football field. Rankings for other collegiate sports, both male and female, including the non-revenue sports could easily be obtained using the methodology described. In addition, it would be of interest to identify the key factors that lead certain teams to have high APRs, while other teams have low APRs. Such an investigation is the subject of future research, and should be of value to numerous groups including the NCAA as it continues with its academic reform movement.
APPLICATIONS IN SPORT
Critics and some members of the NCAA have argued that the organization should increase its emphasis on the academic well-being of its student-athletes. This has led to the academic reform movement that has taken place in recent years. With the methods presented here, the NCAA could recognize schools that excel both on the field and in the classroom. It also gives member schools a scorecard as to how well their teams are doing on and off the field. The results also indicate that the schools with the weakest football teams in the non-BCS conferences are often also the ones whose teams have the lowest multi-year APRs. Reasons for these differences should be investigated so that corrective actions can be undertaken to enable all student-athletes to increase their likelihood of academic success.
1. Comeau, E. (2005). Predictors of academic advancement among student-athletes in the revenue-producing sports of men’s basketball and football. The Sport Journal, 8. Available at: http://www.thesportjournal.org/article/predictors-academic-achievement-among-student-athletes-revenue-producing-sports-mens-basketb.
2. Crotty, J. M. (2012). When it comes to academics football crushes basketball. Available at: http://www.forbes.com/sites/jamesmarshallcrotty/2012/01/05/northwestern-vs-rutgers-in-bcs-championship-if-education-was-yardstick/.
3. DeBrock, L., Hendricks, W., & Koenker, R. (1996). The economics of persistence: Graduation rates of athletes as labor market choice. Journal of Human Resources, 31, 513-539.
4. Harrison, W. (2012). NCAA academic performance program (APP): Future directions. Journal of Intercollegiate Sports, 5, 65-82.
5. Lucas, J. W. & Lovaglia, M. J. (2005). Can academic progress help collegiate football teams win? The Sport Journal, 8. Available at: http://www.thesportjournal.org/article/can-academic-progress-help-collegiate-football-teams-win.
6. Mangold, W. D., Bean, L. and Adams, D. (2003). The impact of intercollegiate athletics on graduation rates among major NCAA Division I universities: Implications for college persistence theory and practice. The Journal of Higher Education, 74, 540-562.
7. Mixon, F. G. & Trevino, L. J. (2005). From kickoff to commencement: the positive role of intercollegiate athletics in higher education. Economics of Education Review, 24, 97-102.
8. NCAA. (2012). Academic Progress Rate. Available at: http://www.ncaa.org/wps/wcm/connect/public/NCAA/Academics/Division+I/Academic+Progress+Rate.
9. Rishe, P. J. (1993). A reexamination of how athletic success impacts graduation rates: Comparing student-athletes to all other undergraduates. American Journal of Economics and Sociology (April). Available at: http://onlinelibrary.wiley.com/doi/10.1111/1536-7150.00219/pdf.
10. Tucker, I. B. (2004). A reexamination of the effects of big-time football and basketball success on graduation rates and alumni giving rates. Economics of Education Review, 23, 655-661.
11. Wiseman, F., Habibullah, M. & Yilmaz, M. (2007). A new method for ranking total driving performance on the PGA Tour. The Sport Journal, 1. Available at: http://www.thesportjournal.org/article/new-method-ranking-total-driving-performance-pga-tour.