Imagery Use and Sport-Related Injury Rehabilitation

Submitted by Matthew L. Symonds1* and Amanda S. Deml2*

1* Associate Professor, Department of Health and Human Services, Northwest Missouri State University

2* Intramural Sports Coordinator, University of Oregon

Amanda Deml is the Intramural Sports Coordinator at the University of Oregon. She earned both her BS and MS Ed degrees from Northwest Missouri State University in Maryville, Missouri. Matthew Symonds is an Associate Professor in the Department of Health and Human Services at Northwest Missouri State University and also serves as Department Chair.

ABSTRACT

This study sought to investigate mental imagery use among college athletes during the rehabilitation process, specifically examining the use of three functions of imagery: motivational, cognitive, and healing. The Athletic Injury Imagery Questionnaire-2 (AIIQ-2) was administered to athletes representing 12 varsity sports at a public, regional, Masters I institution in the Midwestern United States. From the convenience sample, survey respondents included 61 males and 82 females. The study examined imagery use by: (a) sport and gender of current varsity athletes at the institution, and (b) between groups of respondents self-reporting as injured or uninjured. Results indicated that motivational imagery was more commonly employed than cognitive and healing imagery in the rehabilitation process. In addition, males used each function of imagery more than females. Furthermore, differences among sports existed concerning cognitive and healing imagery. No significant differences in imagery use were found between injured and uninjured athletes. The results of this study provide insight and additional perspective on imagery use in the rehabilitation process. We recommend that athletes, coaches, and athletic training personnel develop and implement imagery practices to improve athletic performance and the effectiveness of the injury rehabilitation process.

Key words: imagery, injury, rehabilitation


Published May 21st, 2011 | Contemporary Sports Issues

A New Test of the Moneyball Hypothesis

### Abstract

It is our intention to show that Major League Baseball (MLB) general managers, caught in tradition, reward hitters in a manner not reflecting the relative importance of two measures of producing offense: on-base percentage and slugging percentage.  In particular, slugging is overcompensated relative to its contribution to scoring runs.  This causes an inefficiency in run production as runs (and wins) could be produced at a lower cost. We first estimate a team run production model to determine the run production weights of team on-base percentage and team slugging.  Next we estimate a player salary model to determine the individual salary weights given to these same two statistics.  By tying these two sets of results together we find that slugging is overcompensated relative to on-base percentage, i.e., sluggers are paid more than they are worth in terms of contributing to team runs. These results suggest that, if run production is your objective, as you acquire talent for team rosters more attention should be paid to players with high on-base percentage and less attention to players with high slugging percentage.

**Key words:** Moneyball, strategy, quantitative analysis, economics

### Introduction

It is our intention to show that Major League Baseball (MLB) general managers did not immediately embrace the new statistical methods for choosing players and strategies revealed in Michael Lewis's 2003 book _Moneyball_. In particular, we will show that three years after _Moneyball_'s publication, a player's on-base percentage was still undercompensated relative to slugging percentage in its contribution to scoring runs. This contradicts a study by two economists (3) who claim Moneyball's innovations were diffused throughout MLB only one season after the book's publication.

#### Background

In the 2003 publication of _Moneyball_, Michael Lewis (4) describes the journey of a small-market team, the Oakland Athletics, and their unorthodox general manager, Billy Beane. This team was remarkable in its ability to attain high winning percentages in the American League despite the low payroll that comes with the territory of being a small-market team. Lewis followed the team to discover how it managed to utilize its resources more efficiently than any other MLB team. Moneyball practice included the use of statistical analysis for acquiring players and for evaluating strategies in a way that was allegedly not recognized prior to 2003 by baseball players, coaches, managers, and fans. Central to this statistical analysis is determining the relative importance of on-base percentage versus slugging percentage. By buying more of the undervalued input, on-base percentage, Billy Beane could put together a roster of hitters that would lead the team to more wins on the field while still meeting its modest payroll. Although many other aspects of Moneyball techniques are discussed in the book (e.g., scouting, drafting players, and game strategy), in this paper we focus on whether a team can increase its on-field performance for a given budget by sacrificing some more expensive slugging performance for more, but less expensive, on-base performance. This is what we will call the Moneyball test: efficiency in the use of resources requires the equality of productivity per dollar for on-base percentage versus slugging percentage.

Hakes and Sauer (3) were the first researchers to use regression analysis to demonstrate at the MLB level just what Beane and Lewis had suggested: 1) slugging and on-base percentage (more so than batting average) are extremely predictive in producing wins for a team, and 2) players before the current Moneyball era (beginning around 2003) were not paid in relation to the contribution of these performances. In particular, on-base percentage was underpaid relative to its value. They used four statistics to predict team wins: own-team on-base percentage, opposing-team on-base percentage, own-team slugging percentage, and opposing-team slugging percentage. The regression coefficients for team on-base percentage and slugging percentage assign the weight each factor has in determining team wins. A second regression for player salaries assigns a dollar value to each unit of a hitter's on-base percentage (OBP) and slugging percentage (SLUG). The following statistics were used in the player salary equation: OBP, SLUG, fielding position, arbitration and free agent status, and years of MLB experience. They estimated salary models for each of the four MLB seasons prior to the release of _Moneyball_, and for the first season after. The regression coefficients of OBP and SLUG assign the weight each factor has on player salary. By comparing the salary costs of OBP versus SLUG with the effect each factor has on wins, the authors determined whether teams were undervaluing OBP relative to SLUG. Their results showed that in the years before the _Moneyball_ book, managers and owners undervalued on-base percentage in comparison to slugging percentage. In other words, a team could improve its winning percentage by trading some SLUG inputs for an equivalent spending on OBP inputs. However, for the year after the publication of the _Moneyball_ book, Hakes and Sauer report that on-base percentage was suddenly no longer under-compensated. A team could no longer exploit the higher win productivity per dollar of OBP because now the ratio of win productivity to cost was the same for both OBP and SLUG factors. They concluded that this aspect of Moneyball analysis was diffused throughout MLB.

The speed of this diffusion is surprising, and it does raise questions as to their methodology. For example, what if this test of the Moneyball hypothesis is misdirected? Hitters are paid to produce runs, not wins. A mis-specified statistical model can lead to erroneous conclusions. In this paper we propose a more direct test of the Moneyball hypothesis: comparing the run productivity per dollar of cost for both OBP and SLUG factors. In other words, will an equivalent dollar swap for a small increment of slugging percentage in return for a small increment of on-base percentage lead to the same increase in runs scored? If this is not the case, then a team can exploit this difference and score more runs for the same team payroll by acquiring more units of OBP in place of SLUG units. On the other hand, if the ratios are equal, MLB is in equilibrium with respect to the run productivity for the last additional units of OBP and SLUG.

### Methods

This study differs from Hakes and Sauer in three ways: 1) the focus is on run production rather than win production, 2) the designated hitter difference between the National League and the American League is controlled for, and 3) more recent data from the MLB website is used.

#### Team Run Production Model

An MLB general manager should attempt to gain the most effective combination of the on-base and slugging attributes given the amount of money the MLB team is able to spend. This will maximize the team’s run production subject to its budget constraint. The run production model on a team basis will be of the form:

RPSit = β1 + β2OBPit + β3SLGit + β4NLi + eit

– RPSit = **number of runs produced by team i in season t.** This takes the total number of runs by each team for the 162 games in a season. If fewer than 162 games are played, this number is adjusted to make it equivalent to a 162-game season.
– OBPit = **on-base percentage of team i in season t.** This is found by taking the total number of times the hitters reached base (or hit a home run) via a hit, walk, or hit batsman and dividing by the number of plate appearances (including walks and hit batsmen) for the season. This proportion is then multiplied by 1,000 in order to make it more readable. For example, a team that reached base 350 times per one thousand plate appearances would have a 350 “on-base percentage.”
– SLGit = **slugging percentage of team i in season t.** This is the number of bases (single, double, triple, or home run) that a team achieves in a season divided by the number of at bats (excluding walks and hit batsmen). This proportion is multiplied by 1,000 in order to make it more readable. For example, a team that achieved 175 singles, 40 doubles, 5 triples and 35 home runs per 1,000 at bats would have 410 bases per 1,000 at bats and therefore a 410 “slugging percentage.”
– NLi = **dummy variable = 1 if team i is in the National League, 0 otherwise.** The American League and National League do not have exactly the same set of game rules. One difference is the American League Designated Hitter rule that allows a non-fielding hitter to bat for the pitcher.
– eit = **random error for team i in season t.** This component allows for the fact that runs produced cannot be perfectly predicted using the above variables.
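The on-base and slugging definitions above can be checked with a short calculation. This is an illustrative sketch only; the function names are our own, and the worked numbers are the ones from the examples in the definitions.

```python
def on_base_percentage(times_on_base, plate_appearances):
    """On-base events (hits, walks, hit batsmen) per 1,000 plate appearances."""
    return round(1000 * times_on_base / plate_appearances)

def slugging_percentage(singles, doubles, triples, home_runs, at_bats):
    """Total bases per 1,000 at bats (walks and hit batsmen excluded)."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return round(1000 * total_bases / at_bats)

# The worked examples from the definitions above:
print(on_base_percentage(350, 1000))              # a 350 "on-base percentage"
print(slugging_percentage(175, 40, 5, 35, 1000))  # a 410 "slugging percentage"
```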

#### Player Salary Model

The second regression shows how much team management rewards individual players for their proficiency in each of the two statistics, on-base percentage and slugging percentage. Position dummies were employed, but only the catcher and the shortstop had statistically significant increases in pay due to their contributions to fielding; the other position dummies were dropped. The other factor included is player experience, as measured by lifetime MLB game appearances. The experience factor appears in quadratic form to allow for diminishing returns toward the end of a player's career. This model follows the economic literature on salary models starting with Mincer (7):

Mj = β1 + β2Gj + β3Gj2 + β4OBPj + β5SLGj + β6CTj + β7SSj + β8NLj + ej

– Mj = **salary of player j.** 2006 MLB salary in thousands of dollars.
– Gj = **MLB career games played by player j.** This measures the improvement in a player due to experience.
– Gj2 = **MLB career games squared.** In conjunction with G, a negative coefficient for G2 allows for a diminishing rate of improvement as more and more experience is accumulated, and even permits a decline in performance at the end of a player's career.
– OBPj = **on-base percentage of the player.** This is compiled as an average of the 3 MLB seasons prior to the beginning of the season in which the player’s salary is put into effect (2003-2005).
– SLGj = **slugging percentage of the player.** This is compiled as an average of the 3 MLB seasons prior to the beginning of the season in which the player’s salary is put into effect (2003-2005).
– CTj = **dummy variable = 1 if the player is a catcher, 0 otherwise.** This variable is included to see if any special value is attributed to this fielding skill position.
– SSj = **1 if the player is a shortstop, 0 otherwise.** This variable is included to see if any special value is attributed to this fielding skill position.
– NLj = **dummy variable = 1 if player j is in the National League, 0 otherwise.**
– ej = **random error for player j.** This component allows for the fact that player salaries cannot be perfectly predicted using the above variables.
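Both models are ordinary least squares regressions and can be estimated with any statistics package. The following sketch, using numpy on synthetic data, illustrates the mechanics only; the data are invented (drawn on roughly the scales reported in Table 1), and the code is not the authors' actual estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150  # same size as the team run production sample

# Synthetic regressors on roughly the scales reported in Table 1
obp = rng.normal(333, 12, n)
slg = rng.normal(423, 24, n)
nl = rng.integers(0, 2, n).astype(float)

# Generate runs from assumed "true" coefficients plus noise
true_beta = np.array([-908.0, 2.85, 1.74, -23.0])
X = np.column_stack([np.ones(n), obp, slg, nl])
rps = X @ true_beta + rng.normal(0, 20, n)

# OLS estimate: beta_hat minimizes ||X b - rps||^2
beta_hat, *_ = np.linalg.lstsq(X, rps, rcond=None)
print(np.round(beta_hat, 2))  # estimates close to the assumed coefficients
```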

#### Sample Selection

For the team run production model, five seasons of data (2002-2006) are collected for each of the MLB teams, for a total sample size of 150 observations. Descriptive statistics for five years of 16 National League teams and 14 American League teams are given in Table 1. The mean runs scored per team during this time period is 765 per season, or 4.7 per game. The standard deviation is 76 runs, meaning that from one team to the next the typical difference in runs per season is 76, or about 0.5 runs per game. Of particular note are the means and standard deviations of on-base percentage and slugging percentage. The mean team OBP is 333, with a typical change from one team to another of 12. For SLUG the mean is 423 and the standard deviation is 23.5.

Batting statistics from players are averaged over the last three MLB seasons in order to match recent performance and salary more closely. To be selected for the salary regression, a player must have played in at least two of the last three MLB seasons (2003-2005), appearing in at least 100 games in each of those seasons. Another important restriction was that all players in the sample needed to have played at least six seasons at the Major League level. Before six seasons, MLB players are unable to become free agents, a very important determinant of their salary. As free agents, players are permitted to seek employment from any team, commonly resulting in competitive bidding for the player's services and a free market determination of wages. With this we have our sample of 154 hitters (free-agent-eligible starting players). The 2006 salaries of players and their three-year MLB performance averages (prior to 2006) are given in Table 2. The highest salary in the sample is $25,681,000 and the lowest is $400,000. The mean salary is $6.2 million with a standard deviation from one player to the next of $4.89 million. The mean OBP for the players is 347, with a typical change of 34 from one player to the next. The average SLUG is 450 with a standard deviation of 65.5.

### Results and Discussion

#### Team Run Production Model

Applying ordinary least squares, the following team runs regression was estimated for the five seasons:

RPS = -908 + 2.85 OBP + 1.74 SLG – 23.0 NL + e

Table 3 gives further statistical details for the above equation (Model 1) and other versions of the run production model. Model 1 is the one used in the Moneyball test, and it explains 92 percent of the variance in team runs scored. This verifies that team OBP and SLUG are extremely predictive of team runs scored. It should also be noted that the fit of the runs scored equation is better than the one Hakes and Sauer report for their winning equation. Model 2 drops the dummy for the National League and Model 3 adds interaction terms of NL with OBP and SLG. The differences from the first model are small. This sensitivity analysis confirms that Model 1 is the most appropriate.

We will now interpret each slope coefficient in Model 1, holding the other included factors constant. A 10 unit change in team OBP (e.g., going from 330 to 340) brings an additional 10(2.85) = 28.5 team runs scored per season, on average. A 10 unit change in SLUG brings 10(1.74) = 17.4 more runs, on average. Each regression coefficient, including the one for NL, is statistically significant at the 1% level. This identifies the relative importance of each hitting factor: for an incremental 10 unit change, getting on base more frequently has a bigger impact on scoring runs than getting more bases per hit. What is needed now is a determination of what these factors cost the team in salary.
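The marginal-effect arithmetic in the paragraph above can be checked directly. The constant names below are our own; the coefficient values are taken from Model 1.

```python
# Model 1 coefficients: runs per unit of team OBP and SLUG
BETA_OBP = 2.85
BETA_SLG = 1.74

delta = 10  # a 10 unit change in OBP or SLUG

extra_runs_obp = delta * BETA_OBP  # ≈ 28.5 additional runs per season
extra_runs_slg = delta * BETA_SLG  # ≈ 17.4 additional runs per season

print(extra_runs_obp, extra_runs_slg)
```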

#### Player Salary Model

Applying ordinary least squares, the following player salary regression was estimated for the 154 starting free agent players in 2006:

SAL = -30164 + 10.28 G – 0.00321 G2 + 37.05 OBP + 36.98 SLG + 1748.1 CT + 2024.87 SS – 876.96 NL + e

Table 4 gives further statistical details for the above equation (Model 4) and other versions of the player salary model. Model 4 shows the estimated coefficients from the player salary model, the one used in the subsequent test of the Moneyball hypothesis. This model explains 55% of the variance in salaries, roughly the same as the salary equations of Hakes and Sauer. In Model 5 the NL dummy is removed, and in Model 6 the position dummies are removed. There were only small changes in the remaining coefficients compared to Model 4. This sensitivity analysis confirms that Model 4 is the most appropriate.

We will now interpret each slope coefficient of Model 4, holding the other included factors constant. A 10 unit increase in a player's OBP increases 2006 salary on average by 37.05(10) = 370.5 thousand dollars ($370,500), and a 10 unit increase in a player's SLUG increases salary on average by 36.98(10) = 369.8 thousand dollars ($369,800). The coefficients for G and G2 show that experience increases salary at a decreasing rate. Both catchers and shortstops earn higher salaries, holding OBP and SLUG constant, than the other fielding positions. The experience and hitting coefficients are statistically significant at the 1% level. The position dummies are statistically significant at the 5% level. The NL dummy is statistically significant at the 10% level.

#### The Moneyball Hypothesis

According to the _Moneyball_ book, small-market teams like the Oakland Athletics can compete against larger-market teams if they can acquire run production factors that provide more runs per dollar spent. This occurs when OBP is undervalued relative to SLUG. To see if this was the case in 2006 we compare the two main models (Models 1 and 4). A 10 unit increase in team OBP brings an additional 28.5 runs, and a 10 unit increase in team SLUG yields an additional 17.4 runs. The salary equation reveals that a 10 unit increase in individual OBP costs $370,500, and a 10 unit increase in individual SLUG costs $369,800. For essentially the same increase in team salary (at the player level), an increase in OBP brings in 11.1 more runs than SLUG. This means that teams can achieve higher run production at essentially the same cost by swapping 10 units of SLUG for 10 units of OBP. The ratio of run production to cost favors OBP. The Moneyball hypothesis of slugging percentage being overvalued relative to on-base percentage remains in effect three seasons after the _Moneyball_ book.
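The efficiency comparison above reduces to runs per salary dollar. The figures below are the ones reported from Models 1 and 4; the calculation is a back-of-the-envelope check, not part of the paper's formal analysis.

```python
# From the team run production model (Model 1): runs from a 10 unit increase
RUNS_PER_10_OBP = 28.5
RUNS_PER_10_SLG = 17.4

# From the player salary model (Model 4): cost of a 10 unit increase, in dollars
COST_PER_10_OBP = 370_500
COST_PER_10_SLG = 369_800

# Run productivity per million dollars of marginal salary spend
runs_per_million_obp = RUNS_PER_10_OBP / COST_PER_10_OBP * 1_000_000
runs_per_million_slg = RUNS_PER_10_SLG / COST_PER_10_SLG * 1_000_000

print(round(runs_per_million_obp, 1))  # OBP buys more runs per dollar
print(round(runs_per_million_slg, 1))
print(RUNS_PER_10_OBP - RUNS_PER_10_SLG)  # extra runs at ~equal cost
```

If the player market were in equilibrium, the two ratios would be equal; their gap is the inefficiency the paper identifies.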

Why did our results differ from Hakes and Sauer, who argue that slugging was no longer overvalued one season after the _Moneyball_ book? We repeat our differences in methodology here: 1) using a run production model instead of a winning production model because players are paid to produce runs, not wins; 2) including a variable to differentiate the National League from the American League; and 3) using more recent data.

### Conclusions

In this paper we propose a new test of the Moneyball hypothesis using team run production in place of team wins. We clearly show that in producing runs baseball managers continue to overpay for slugging versus on-base percentage. In the 2006 MLB season, for the same payroll, a team could generate more runs by trading some SLUG for OBP. The question is, why don’t general managers recognize these results in their roster and payroll decisions? We propose several possible reasons:

1. Only small revenue market teams need to be efficient in their labor decisions.
2. Sluggers are paid for more than just their ability to score runs.
3. Moneyball techniques will take time before all teams adopt them.

Each of these answers will now be discussed. Large-revenue market teams are profligate partly in response to the pressure they feel from the fan base to produce a winner at whatever cost. Acquiring well-known free agents at high cost, rather than bargain free agents who are not recognized by home fans, seems a safe way to operate, even if it cuts into some profits. These well-known players tend to be the sluggers. The second reason for slugger overcompensation is that they are crowd-pleasers, and it may be more profitable (higher gate attendance and television viewership) to have more home run hitters. This study does not attempt to measure this alternative hypothesis. Finally, Hakes and Sauer believed equilibrium between OBP and SLUG in the player market occurred just one year after the _Moneyball_ book was published, but it is doubtful such an innovation can spread throughout MLB so quickly.

> “Given the A’s success, why hasn’t a scientific approach come to dominate baseball? The answer, of course, is the existence of a deeply entrenched way of thinking….Generally accepted practices have been developed over one-and-a-half centuries, practices that are based on experience rather than analytical rigor.” (1, p. 80)

The behavioral patterns in MLB change slowly. For example, it took twelve years after Jackie Robinson joined the Brooklyn Dodgers before every team in MLB acquired African-American players on their roster, despite the large pool of talent in the Negro Leagues. The slow pace of diffusion can also be claimed for the more recent immigration of Asian players in MLB. And more to the point, batting average still receives more attention than on-base percentage in the evaluation of talent.

Finally, the adoption of Moneyball is not limited to baseball. General managers in hockey (6), basketball (8), football (5), and soccer (2) are beginning to see the same advantages in using statistical analysis to supplement or replace conventional wisdom in making decisions on personnel and strategy. Despite the Oakland Athletics’ more recent lackluster performance, Moneyball is here to stay.

### Applications in Sport

The increased use of quantitative analysis in the coaching and management of sports teams allows college and professional teams to make decisions based more on data-driven results than on mere tradition. “Moneyball” is often the term used to convey this decision-making apparatus, particularly when money resources, if allocated efficiently, can improve on-field performance (scoring, wins) on a limited budget.

The advantage of adopting Moneyball techniques before your rival teams do may be short-lived, however, because widespread adoption eliminates opportunities (e.g., the acquisition of under-rated players) that other teams have not yet seen. But this study shows that the diffusion of Moneyball techniques is taking place slowly, creating advantages for managers who are open to this approach.

### References

1. Boyd, E. A. (2004). Math works in the real world: (You just have to prove it again and again). Operations Research/Management Science, 31(6), 81.
2. Carlisle, J. (2008). Beane brings Moneyball approach to MLS. ESPNsoccer. Retrieved from <http://soccernet.espn.go.com/columns/story?id=495270&cc=5901>
3. Hakes, J. K., & Sauer, R. D. (2006). An economic evaluation of the Moneyball hypothesis. Journal of Economic Perspectives, 20, 173-185.
4. Lewis, M. (2003). Moneyball: The art of winning an unfair game. New York: W.W. Norton & Company.
5. Lewis, M. (2008). The blind side. New York: W.W. Norton & Company.
6. Mason, D. S., & Foster, W. M. (2007). Putting Moneyball on ice? International Journal of Sport Finance, 2, 206-213.
7. Mincer, J. (1974). Schooling, experience, and earnings. New York: Columbia University Press.
8. Ostfield, A. J. (2006). The Moneyball approach: Basketball and the business side of sport. Human Resource Management, 45, 36-38.

### Tables

#### Table 1. Descriptive Statistics for the Team Run Production Sample

| Statistic | RPS | OBP | SLG | NL |
|---|---|---|---|---|
| Mean | 765.04 | 332.927 | 423.27 | 0.53 |
| Median | 760.34 | 332.000 | 423.00 | 1 |
| Standard Deviation | 76.43 | 12.168 | 23.52 | 0.50 |
| Range | 387.00 | 63.000 | 123.00 | 1 |
| Minimum | 574.00 | 300.000 | 368.00 | 0 |
| Maximum | 961.00 | 363.000 | 491.00 | 1 |
| Count | 150 | 150 | 150 | 150 |

#### Table 2. Descriptive Statistics for the Player Salary Sample

| Statistic | G | OBP | SLG | NL | CT | SS |
|---|---|---|---|---|---|---|
| Mean | 1146.1 | 347.3 | 450.0 | 0.552 | 0.130 | 0.12 |
| Median | 1070.5 | 346.5 | 446.5 | 1 | 0 | 0 |
| Standard Deviation | 462.1 | 34.0 | 65.5 | 0.499 | 0.337 | 0.322 |
| Range | 2345.0 | 237.9 | 432.0 | 1 | 1 | 1 |
| Minimum | 385.0 | 276.1 | 310.7 | 0 | 0 | 0 |
| Maximum | 2730.0 | 514.0 | 742.7 | 1 | 1 | 1 |
| Count | 154 | 154 | 154 | 154 | 154 | 154 |

#### Table 3. Coefficients for the Team Run Production Models

| Variable | Model 1 Coefficient | t Stat | Model 2 Coefficient | t Stat | Model 3 Coefficient | t Stat |
|---|---|---|---|---|---|---|
| Intercept | -908.00*** | -17.16 | -941.72*** | -18.46 | -861.67*** | -13.73 |
| OBP | 2.85*** | 11.21 | 2.69*** | 13.22 | 2.86*** | 10.26 |
| SLG | 1.74*** | 15.42 | 1.92*** | 15.30 | 1.62*** | 10.37 |
| NL | -23.00*** | -6.26 | | | -134.3* | -1.34 |
| (NL)(OBP) | | | | | 0.275 | 1.06 |
| (NL)(SLG) | | | | | 0.241 | 0.06 |
| Adj. R-Squared | 0.921 | | 0.900 | | 0.923 | |
| F | 568.9 | | 661.6 | | 343.3 | |

*** .01 level ** .05 level * .10 level

#### Table 4. Coefficients for the Player Salary Models

| Variable | Model 4 Coefficient | t Stat | Model 5 Coefficient | t Stat | Model 6 Coefficient | t Stat |
|---|---|---|---|---|---|---|
| Intercept | -30164*** | -9.38 | -30952*** | -9.67 | -27182.6*** | -8.73 |
| G | 10.28*** | 4.21 | 10.24*** | 4.18 | 9.75*** | 3.95 |
| G2 | -0.00321*** | -3.67 | -0.00323*** | -3.68 | -0.00304*** | -3.42 |
| OBP | 37.05*** | 3.32 | 38.08*** | 3.39 | 35.30*** | 3.10 |
| SLG | 36.98*** | 6.47 | 37.01*** | 6.43 | 33.58*** | 5.88 |
| CT | 1748.10** | 2.14 | 1798.21** | 2.19 | | |
| SS | 2024.87** | 2.34 | 2048.73** | 2.35 | | |
| NL | -876.96* | -1.65 | | | -929.14* | -1.71 |
| Adj. R-Squared | 0.557 | | 0.552 | | 0.532 | |
| F | 28.48 | | 32.39 | | 44.44 | |

*** .01 level ** .05 level * .10 level

### Corresponding Author

#### Thomas H. Bruggink, Ph.D.
Department of Economics
Lafayette College
Easton PA 18042
<bruggint@lafayette.edu>
610-330-5305

### All Authors

#### Anthony Farrar
Brinker Capital
Berwyn, PA

#### Thomas H. Bruggink
Lafayette College
Easton, PA

Published May 20th, 2011 | Contemporary Sports Issues, Sports Coaching, Sports Management

Information Technology and Sports: Looking Toward Web 3.0

### Abstract

From the founding of the Olympic movement in the late 19th century at the height of the Industrial Revolution through the beginning of the Information Age in the 1970s, channels of media distribution evolved from primarily written tracts in publications to electronic broadcasting. The changes in the mode of information distribution and the underlying technology over time caused the message content being promulgated to similarly change. As there were comparatively few channels available for the distribution of content during this period, a relative few individuals served as “gatekeepers” on the flow of information. These gatekeepers, such as editors and producers, exercised extraordinary control over what information entered the public domain through a process that was largely autocratic. The Information Age has changed the paradigm of information dissemination, and in so doing, has democratized the process of sharing information. The participation of the public-at-large in the development and dissemination of information that shapes humanistic ideas has grown in scale to a size unprecedented in human history. Since the advent of the Internet, this human discourse has changed over time driven both by the application of new technologies together with the exponential growth in that portion of the population that has access to them. Perhaps the most significant development in this movement was the development of the World Wide Web (the web). As the web has moved from comparatively static Web 1.0 content through the development of Web 2.0 social media applications to the beginning of Web 3.0 practices, there have been significant changes in how humans use computer technology to interact with one another. Despite the positive changes that have been brought about by the development of these technologies, such as a democratization of the information sharing process, there are still negative aspects to social media applications. 
There will also be significant challenges ahead in the development of new communication technologies that must be overcome before the full promise of the Internet can be realized by all.

**Key words:** Olympic movement, social media, Internet, web, technology, humanistic ideas

### Introduction

Human play, as embodied in sports, is one of the most important expressions of human culture. It can be said that the games people play in a society are a reflection of the society as a whole. It can also be said that communication is the one dominant attribute that distinguishes human beings from every other species on the planet. Thus, the intersection of communication and sports in the human experience is an important one.

The Olympic movement is considered to be one of the largest social movements in human history. Nowhere else do the countries of the world gather in one place as they do during the Summer Olympic Games. While the peaceful gathering of the world’s youth for sports competition is the embodiment of that intersection of sport and communication, this fact underscores the importance of the media in conveying Olympic values and ideals. In many respects, it is a relationship between the Olympic community and the media that allows the Games to be conducted on the scale that they are.

This presentation will briefly examine the evolution of this relationship from the founding of the Olympic movement at the height of the Industrial Revolution to the dawning of the Information Age. The discussion of the early days will necessarily be brief as the primary focus of this presentation is on the ways that technology, and more specifically the Internet, is driving the communications process and with it the dissemination of the human ideals. There will be a discussion of some of this new media and the presentation will conclude with some of the challenges before us, as we look to the future being wrought through technological change.

#### Evolution of Media

As has already been noted, the Olympic movement was founded at the height of the Industrial Revolution in the late 19th century. The founder of the Olympic movement, Baron Pierre de Coubertin, authored many articles arguing for the establishment of a modern Olympic Games. An example of this effort was the publication of an essay in the _Revue de Paris_ in June 1894, on the very eve of the first Olympic Congress, setting out his vision for the establishment of a modern Olympic Games (Guttmann, 1992).

Writing in the 19th century was a lengthy process, meaning that 19th-century writers faced a much longer period between researching, writing, and receiving payment for their work than writers do today. Only the best educated individuals, usually from privileged backgrounds, had the time, expertise, talent, inclination, and financial backing to undertake this effort (2). Illustrated news weeklies or monthlies were among the primary means of communication and dissemination of the news in the late 19th and early 20th centuries. This medium was also particularly well suited to the audience that de Coubertin was trying to reach. The founders of the Olympic movement were well educated and well-to-do. Therefore, the message to this audience lent itself well to the tenet of the early games that they should only be open to amateurs: those who participated in sport as an avocation as opposed to a vocation (4).

However, the on-going Industrial Revolution was bringing about important society-wide changes that allowed sports to flourish. This included a population migration from rural to urban centers, increases in disposable income accompanying a rise in the middle-class and eventually, more leisure time that allowed more recreational activities, among them participation in and the viewing of sports events.

Concurrent with the rise in the middle class was a wider distribution of newspapers, many of which began to include sports coverage. Sports coverage did, in fact, become one of the ways that newspapers in larger metropolitan areas competed with each other. As interest in sports generally, and local teams particularly, began to appear in newspapers, the amount of space given over to this content expanded over time. As there were no broadcast media in these early days, the newspaper sports coverage of the day was largely descriptive play-by-play recaps of the sports events.

Eventually, however, broadcast media was introduced to the communications mix and began to usurp the role historically played by the newspapers. First radio, and later television, allowed the audience to experience the sport events as they occurred with their play-by-play broadcasts. Thus, the role of the newspapers and weekly or monthly sport-themed news magazines began to evolve from reporting the play-by-play, now done by the broadcast media, to more reporting of “behind the scenes” activities or analysis of the athletes, teams and events.

There are two lessons to be learned from this experience. First is that as technology evolved and new forms of communication emerged, message content carried in the channels of distribution changed as well. So, too, is this the case today; as technology evolves, so does the nature of the message content being distributed.

The second lesson concerns the role of “gatekeepers,” such as editors or producers, in the public communications process. During this early period, there were comparatively few media outlets. In Europe, countries might have one or two “national” newspapers, plus those in the metropolitan areas. In the United States, there was no general national newspaper until the advent of “USA Today” in 1982. While larger metropolitan areas might have as many as five daily newspapers, most of the country consisted of smaller markets that could support no more than one or two. In terms of electronic broadcasting, the available air time for sports was typically limited, since most outlets aired a variety of content, and in the early days of television in the United States there were only three major television networks. Because of the limited availability of channels of distribution, editors in the newsroom and producers of over-the-air broadcasting wielded enormous power in determining what their audience would read or hear. The selection process was typically driven by market concerns but was, in any case, decidedly autocratic.

#### The Information Age and Rise of the Internet

Human civilization has moved from the Age of Industry to the Information Age. While the general consensus is that the Information Age dawned in the 1970s, the changes wrought on society by technology truly accelerated with the creation of the World Wide Web (the web). Because changes in technology change both the channels of communication and the message content, a brief discussion of the underlying technology is in order.

The early 1960s saw experimentation with computer networking that established the protocols for what became known, in 1969, as the Internet. This feat was followed by the development of Hypertext Markup Language (HTML) in 1989, which became the basis for the web, though it was not until 1993 that the web was introduced to the public at large.

Most early websites were a series of static web pages connected by hyperlinks that could be internal, providing structure to a website, or external, leading to other websites based on whatever criteria the webmaster decided. The underlying computer technology, such as processors, memory, and connectivity, limited the content of these early web pages. Most hosts, the computers on which web content was posted, were initially personal computers (PCs) adapted for the purpose, although eventually specialized devices called “web servers” evolved. Over the years, the capability of these web servers has changed dramatically, as has the role of the webmaster. Today, virtually all commercial or professionally developed websites are dynamic, with the web content contained in a relational database called the “backend.” Most websites also have a variety of plug-in applications, such as secure financial transaction software for ecommerce, called “middleware,” and a front-facing graphic interface that people see when they arrive at a website. Webmasters have evolved into web developers, and the skills required to maintain a website can vary significantly between those working on the backend and those designing the frontend.
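The contrast between the static pages of the early web and today’s database-driven sites can be illustrated with a minimal sketch. This is illustrative only: the page contents, the in-memory “backend,” and the function names are all invented for the example, not drawn from any real site.

```python
# Illustrative sketch of static (Web 1.0) versus dynamic (database-backed)
# page delivery. All page contents and data here are invented.

# A static site: each URL maps to a fixed file the webmaster wrote by hand.
STATIC_PAGES = {
    "/index.html": "<html><body><a href='/about.html'>About</a></body></html>",
}

# A dynamic site: content lives in a data store (the "backend") and pages
# are assembled at request time.
BACKEND_DB = {"articles": [{"title": "Olympia", "body": "Site of the ancient Games."}]}

def serve_static(path):
    """Return the stored file exactly as written, or a 404 message."""
    return STATIC_PAGES.get(path, "404 Not Found")

def serve_dynamic(path):
    """Build the page from the backend database on every request."""
    if path == "/articles":
        items = "".join(f"<li>{a['title']}</li>" for a in BACKEND_DB["articles"])
        return f"<html><body><ul>{items}</ul></body></html>"
    return "404 Not Found"

print(serve_static("/index.html"))
print(serve_dynamic("/articles"))
```

Adding an article to `BACKEND_DB` changes the dynamic page on the next request with no webmaster intervention, which is the essential shift the paragraph describes.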

On the recipient’s end were similar technological limitations in PCs, whose processor capability was expressed in designations such as the 286, 386, 486, and Pentium. In terms of connectivity, bandwidth has increased exponentially through a succession of changes from dial-up modems to ISDN and now broadband. Thus, early on, the limitations of the technology necessarily limited the content; that is, the message.

Over the past 30 years, society has experienced a fundamental change in the way information is created and disseminated. From its rudimentary beginnings, the interface between computer technology and users has evolved to a point where virtually anyone can create “media content” and post it to the web, where it can be accessed and read by anyone in the world with the computer resources to do so. This has led to another fundamental and extraordinarily significant change: a process of democratization. No longer can gatekeepers such as the editors or publishers of the old media exert autocratic or monopolistic control over the flow of information into the public sphere. There are, however, both positives and negatives to this state of affairs, as we shall see in the ensuing discussion of the evolution of the web.

#### Web 1.0 – The Inaugural Web

During the formative days of the web, strategies for the dissemination of information could be broadly classified as “push” versus “pull.” Push refers to proactively sending out or distributing messages across the Internet, most commonly by email from one user’s account to another. One of the ways in which email served as a precursor to today’s Web 2.0 applications, such as blogs and social networking sites, was the listserv. A listserv was a group of individuals, typically bound together by a common interest, who signed onto an email list to receive messages on a topic of mutual interest. When an email was sent to the list, anyone in the group could reply to it, and the reply subsequently went to everyone else in the group. In so doing, an online discussion and sharing of ideas would ensue.
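The fan-out behavior that made a listserv a group discussion rather than one-to-one mail can be modeled in a few lines. This is a toy model, not a real mailing-list implementation; the class name, addresses, and messages are invented for illustration.

```python
# Toy model of a listserv: every post, including replies, is delivered
# to every subscriber, producing a shared group discussion.

class Listserv:
    def __init__(self, topic):
        self.topic = topic
        self.subscribers = []   # addresses signed onto the list
        self.inboxes = {}       # mail delivered to each subscriber

    def subscribe(self, address):
        self.subscribers.append(address)
        self.inboxes[address] = []

    def post(self, sender, message):
        # One message in, N copies out: the defining listserv behavior.
        for address in self.subscribers:
            self.inboxes[address].append((sender, message))

olympics = Listserv("olympic-history")
for member in ["ana@example.org", "ben@example.org", "kim@example.org"]:
    olympics.subscribe(member)

olympics.post("ana@example.org", "Who founded the modern Games?")
olympics.post("ben@example.org", "Baron Pierre de Coubertin, in the 1890s.")

# Every subscriber, including a member who never posted, holds both messages.
print(len(olympics.inboxes["kim@example.org"]))  # 2
```

Note that the reply went to the whole list, not just the original sender, which is how an email list turned into an ongoing conversation.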

Unfortunately, the widespread abuse of email has gradually restricted its utility as a medium of communication beyond personal messages. Both marketers and criminals seized upon email as a means to sell their wares or dupe people into giving up money, which gave rise to the spam phenomenon. Spam is still a plague on the Internet, with an estimated 48.5 billion messages sent every day, largely through networks of compromised computers called botnets. In March 2011, one of the largest of these, the Rustock botnet, which was sending as many as 13.82 billion spam emails each day, was finally taken down by the authorities (8). Partially as a consequence of this abuse, more and more people are seeking out alternative channels for electronic communication, such as the messaging capabilities of Facebook or Twitter.

The other concept is that of “pull,” in which individuals actively seek out web content using web browsers and tools such as search engines. The key to this strategy is to ensure that web content is properly optimized and appropriately tagged so that it becomes more visible on the web and easier to find.
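How tagging makes content easier to “pull” can be sketched with a deliberately simplified ranking rule: pages whose tags overlap more with the searcher’s terms rank higher. The page names, tags, and scoring rule here are invented for illustration; real search engines use far more sophisticated signals.

```python
# Simplified sketch of the "pull" model: well-tagged content is easier
# for a searcher to find. Scoring by tag overlap is a toy heuristic.

PAGES = {
    "ovep-course":   {"tags": {"olympic", "values", "education", "online"}},
    "sport-journal": {"tags": {"research", "journal", "sport", "peer-review"}},
    "kung-fu-course": {"tags": {"shaolin", "kung-fu", "education", "online"}},
}

def search(query_terms):
    """Rank pages by how many query terms appear in their tags."""
    scored = [(len(page["tags"] & query_terms), name)
              for name, page in PAGES.items()]
    # Highest overlap first; drop pages with no matching tags at all.
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(search({"online", "education"}))  # both course pages; the journal is absent
```

A page with no tags, or with tags that never match what people search for, is effectively invisible, which is why the paragraph stresses optimization and appropriate tagging.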

Education is the most powerful vehicle for the transmission of human ideals, and it is in the realm of education that the Internet has had a profound impact. The advent of the Internet and the web has fundamentally changed the paradigm of education, a paradigm that had been essentially unchanged since the 16th century. Early on, the Academy embraced this change and developed a distance education program that can be described as asynchronous, transformational, and computer mediated. This means that the Academy’s students can pursue their studies across the Internet using computer resources at any time and from any place, without faculty and student needing to be online at the same time. While removing the impediments to learning created by time and space, the institution has transformed the traditional educational experience of the lecturer in the classroom into learning activities distributed through the web in which learning outcomes and course objectives are satisfied.

There has been considerable skepticism about the efficacy of online education. The model has been validated by the Academy’s own research, including comparisons of comprehensive examination results between resident and online students. The institution’s accrediting agency, the Southern Association of Colleges and Schools, reviewed and approved the Academy’s distance education program in 1996, and currently more than 85% of the Academy’s students report that they have learned as much or more through online education as they did in resident study. The Academy is also pleased that more than 96% of its students would recommend the Academy’s online education programs to friends or colleagues.

Illustrative of this approach to education is the Olympic Values Education Program (OVEP) that was prepared for distance learning delivery by the Academy under a grant from the International Olympic Committee (IOC) in 2008. Through the web, the OVEP program is available to anyone in the world who has access to the Internet, and further utilizing emerging technology, such as the Google Universal Translator, albeit with some inherent limitations, it can be accessed in any one of 52 different languages. The online OVEP course can be reached at students.ussa.edu/Olympic_values. I should also note that the Academy recently completed another such cross-cultural academic offering with the preparation of a bachelor’s degree course entitled the “Shaolin Philosophy of Kung Fu.” The basis for the course is a 1,500-year-old manuscript that was translated from the ancient to the modern version of Chinese and then into English. The Academy’s Department of Instructional Design then refined the English and placed it into an online course environment. In so doing, East meets West, the ancient meets the new and we come full circle insofar as the modern English course can be translated back into Chinese with the universal translator function built into the Academy’s Course Management System (CMS).

Very important in supporting student education and the dissemination of human values is access to libraries and research resources. In 1997, the Academy was among the first organizations to put online a peer-reviewed research journal – [_The Sport Journal_](http://www.thesportjournal.org). The Journal is provided subscription-free to the public and is accessed on average about 15,000 times per week. As a matter of interest, all of the papers from last year’s International Olympic Academy (IOA) were posted to The Sport Journal site in a special Olympic edition of the Journal. From the comfort of their own homes, the Academy’s students can use the Internet to access more than 57,000 libraries in 112 countries, with more than 70 million holdings and 270,000 unique journals, through the institution’s library portal on its website. However, access to educational resources such as libraries is not restricted to university students. Very early in the development of the web, the Encyclopedia Britannica posted its entire body of work online and made it available on a subscription basis. Today, there are myriad libraries to which the public has access free of charge, such as the Alabama Public Online Library. Organizations such as Google are digitizing the holdings of entire research libraries with the ultimate intent of placing them online for ease of access, though inevitably at a price.

#### Web 2.0 – The Social Web

The rise of participatory information sharing through the Internet has truly revolutionized the dissemination of information using web 2.0 techniques. With the advent of the social web, the creation of content has evolved from the efforts of a comparative few in the media professions to a model that maximizes the contributions of the multitudes. With about 400 social media platforms available and an untold number of blogs being authored, the proliferation of communication channels, both public and professional, and private and amateur, allow for the contribution of millions of people sharing a public conversation unprecedented in the human experience. One of the most important consequences of the proliferation of these platforms available to virtually anyone with access to the Internet, is the democratization of media content. What people can see and hear has been taken out of the hands of the gatekeepers and placed into the hands of society at large.

It is not possible within the constraints of this presentation to cover all aspects of the social web, so the author has selected five representative examples beginning with a discussion of Wikipedia. If the Encyclopedia Britannica, long acknowledged as a definitive compendium of human knowledge, represents Web 1.0 technology in which content is simply posted and accessed by people through subscription, Wikipedia represents a web 2.0 application because of its collaborative nature insofar as anyone can submit articles for inclusion.

Ironically enough, I have turned to Wikipedia for a definition of itself, though I should note that at the Academy a prominent notice on the library portal states that Wikipedia is not considered an appropriate source of citations for research papers, for reasons that will be explained. By its own definition, Wikipedia is a free, web-based, collaborative, multilingual encyclopedia project supported by the non-profit Wikimedia Foundation. Its 18 million articles (over 3.6 million in English) have been written collaboratively by volunteers around the world, and almost all of its articles can be edited by anyone with access to the site. Wikipedia was launched in 2001 and has become the largest and most popular general reference work on the Internet, ranking seventh among all websites on Alexa.com (a web statistics reporting site) and boasting 365 million readers (Wikipedia, 2011).

The reason that Wikipedia has not been widely accepted in academic research has its roots in its early days. Articles submitted at that time frequently were not carefully researched, were often inaccurate, and were sometimes posted with malicious intent. It is significant to note that many of these issues have since been addressed through the use of anonymous reviewers who examine submissions from the general public for both accuracy and appropriateness. Nonetheless, Wikipedia remains a very important resource insofar as researchers, especially younger ones, still use it as a point of departure to find ideas on where to go for additional information.

For those of you who have entries in Wikipedia, it is worth your time to periodically check the content to ensure that someone has not submitted inaccurate or even malicious information. Further, and especially given the reach of Wikipedia, it affords organizations the opportunity to promulgate their missions and activities. For example, in the entry on Olympia, the article posted there cites its role in the ancient Olympic Games and presents a chronology of the site by era to the present day. It does not, however, mention the IOA. A submission could be authored for consideration and inclusion on how Olympia serves as the site of the IOA along with a description of the IOA’s mission and function.

One of the true phenomena of the last few years in Web 2.0 technology is the rise of Facebook as suggested by Internet usage statistics posted on Alexa.com. In April 2011, more than 40% of all global Internet users visited Facebook on a daily basis, a rate of usage that has remained consistent over the past three months.

Facebook represents the power of social media as individuals sharing common experiences are provided a platform through which these experiences or interests can be shared. As friends beget friends, the media content on Facebook expands in ever increasing circles. This content is not limited to posts or messaging, but also includes YouTube video clips, decidedly unscientific opinion polls, and games. Additionally, the messaging function built into Facebook has, in many circles, replaced email as the preferred means of interpersonal electronic communication.

Facebook can be a double-edged sword, as the most decorated Olympic athlete of all time found out, much to his chagrin. This individual, who won a record eight gold medals at the 2008 Beijing Olympics, suffered the consequences when a photograph of him consuming an illegal recreational drug was posted to Facebook. The incident sullied his image and reputation and cost him millions of dollars in endorsement revenue. The irony is that the picture was posted not on his personal Facebook page but on that of another individual who happened to be at the same party. In this instance, the interconnectivity of the medium produced dire consequences for a sports hero and role model. The incident also underscores the need to be circumspect about what one posts to social media sites; a good guideline is not to post anything you would not want to see in a newspaper. It is not uncommon for prospective employers, among others, to search out Facebook pages in an effort to gain insights on a given individual.

Another extraordinarily popular site, and one already mentioned, is YouTube. Founded in February 2005, viewership on YouTube exceeded two billion views per day in May 2010. YouTube allows viewers to watch and share originally created videos and provides a forum for people to connect, inform, and inspire others across the globe and acts as a distribution platform for original content creators and advertisers large and small (YouTube, 2011). Alexa.com reported in April 2011 that YouTube is the third most visited global website receiving just over 26% of daily website visits over the past three months.

YouTube, whose web interface is available in 42 languages, can be accessed by anyone, although those individuals who want to post content on the site must be registered. For regular users, the time limit for any one post is 15 minutes. Posting video content there can be accomplished from a wide range of devices from computers to mobile phones. YouTube video posts spread across the entire Internet by appearing as links in emails, posts on other social media platforms, such as Facebook and in blogs. Periodically, a video on YouTube will “go viral,” which simply refers to a phenomenon in which the content captures the public’s imagination and is promulgated through a vast array of distribution channels.

However, sites such as YouTube pose a recognized threat to the business model of many sports organizations. The blogging and social media rules of the IOC specifically proscribe the posting of “moving images” or sound. While these guidelines can be enforced on individuals accredited to the Games, such as national delegations or the media, it is much harder to do with spectators seated in the stadium. Modern 3G or 4G phones can easily capture video of sporting events from the stadium seat, and the video can be uploaded to YouTube through a user’s account. While such activity violates the terms of service for registered account holders, the process for removing the content and terminating a user’s account can sometimes be a lengthy one. In the meantime, to the extent that the video has been accessed and distributed through posts on other social media websites or platforms, it can never be removed from the web in its entirety. Obviously, this is a major issue for media companies that may pay billions of dollars for exclusive media rights to an event.

Another social media phenomenon is Twitter; in fact, the Winter Games in Vancouver were cited as the first “Twitter Olympics” (5). The Twitter posts, called tweets, of the athletes provide insights into their physical and mental preparation for competition, their reactions to being in the Olympic Games, and other aspects of the Olympic experience that simply were not available in the past through traditional media outlets. Twitter allows the sharing of the human experience with unparalleled immediacy and intimacy, reaching potentially vast audiences without the interference of a gatekeeper. Many tweets generated by Olympians at the Vancouver Winter Games can be found on the web by simply “Googling” Olympic athlete tweets.

However, as was the case with Facebook, Twitter can also be a double-edged sword. There have been instances where athletes have posted comments denigrating their competition, the officials, and even their teammates or coaches. These actions can create dissension on teams, and when comments go viral, they can take on a life of their own and stir considerable controversy and unfavorable comment in the press. This has occurred frequently enough that some teams ban their athletes from using Twitter, while others, such as the Australian Olympic Team, provide their athletes with training on the appropriate use of the medium.

Lastly, I would like to touch on “blogging” as a medium for the dissemination of the human experience. A blog can be thought of as an online diary, open to the public, in which an author can write on any topic they choose and to which anyone who reads a post can, in turn, reply. Blogs typically focus on a particular topic, such as politics or sports, and there are blogs on virtually every topic imaginable. Taken altogether, these blogs are referred to as the “blogosphere.”

With all of the attention that sport engenders and the emotion it evokes, it is a common topic in the blogosphere. As one might expect, blogging commentary related to sports can be both positive and negative. Frequently the authors of blogs lack the professional or academic preparation to write knowledgeably about their subjects. The unfortunate thing about inaccurate blog posts is that they often carry more weight than they deserve; illustrative of this situation is the phrase, “it must be true, I read it on the Internet.” The Academy is seeking to address this situation in some small measure through its decision to change one of its online publications, [_The Sport Digest_](http://thesportdigest.com), into a blog. Through this effort, Academy faculty and other well-regarded individuals in the profession generate articles on a host of issues surrounding the sport profession. These posts have a basis in fact or are otherwise well reasoned and, as is the case with other blogs, afford readers an opportunity to respond to the issues.

#### Web 3.0 – The Semantic Web

While the term Web 2.0 has entered the lexicon, Web 3.0 will be the next step in the evolution of the Internet. A common, agreed-upon definition for Web 3.0 has yet to emerge, but a consensus is building that it will combine technology that turns the entire web into a database with the marshaling of human resources. New markup standards such as HTML5 will allow computers to read online content and so will facilitate the identification and indexing of the web, a process that will make content more accessible.
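The idea that semantic markup makes content machine-readable can be demonstrated with a short sketch using only Python's standard-library HTML parser. The HTML snippet and class name are invented for the example; the point is simply that a program can pick out the content marked `<article>` while ignoring navigation and other page chrome.

```python
# Sketch of machine-readable markup: HTML5 semantic elements such as
# <article> let software locate the actual content of a page.
from html.parser import HTMLParser

class ArticleIndexer(HTMLParser):
    """Collect the text found inside <article> elements."""
    def __init__(self):
        super().__init__()
        self.depth = 0        # how many <article> tags we are inside
        self.articles = []    # text collected per article

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.depth += 1
            self.articles.append("")

    def handle_endtag(self, tag):
        if tag == "article" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:                    # ignore text outside articles
            self.articles[-1] += data

html = """
<body>
  <nav>Home | About</nav>
  <article>The Vancouver Games were called the first Twitter Olympics.</article>
  <article>OVEP brings Olympic values education online.</article>
</body>
"""
indexer = ArticleIndexer()
indexer.feed(html)
print(len(indexer.articles))  # 2
```

The navigation text is skipped and only the two articles are indexed, which is the kind of automated identification and indexing the paragraph anticipates.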

Beyond the changes in technology, renowned web futurist Clay Shirky argues that for the first time in history the web has provided the tools to harness society’s “cognitive surplus.” Essentially, the cognitive surplus derives from the trillions of hours of free time that residents of the developed world enjoy, free time that has steadily increased since World War II. Increases in gross domestic product, education, and life span have provided riches of free time that, prior to the Internet, were squandered in non-productive pursuits. The Internet democratized the tools of production and distribution and made the benefits scalable: value comes from the combined cognitive surplus of millions of individuals connected to a network that allows collaboration (1).

Shirky is himself an example of this dynamic at work. In the course of researching this paper, the author continually came across references to Shirky and his theory of cognitive surplus. That more authors agreed with the concept than disagreed suggests the theory is gaining traction and has some merit. Through this process of review and debate, concepts and theories are continually refined, adding to the body of knowledge through which the human condition can be enriched.

#### Challenges

For all of its potential to elevate human discourse and to assist in the dissemination of human ideals, the web still faces many challenges. These fall into three broad areas.

The first is economic. There exists, in a very real sense, a digital divide in which a vast proportion of the world’s population remains without access to computers or the Internet. In many respects, the Internet still remains a world of the “haves and have-nots.” In some respects we have almost come full circle to the condition that prevailed when the Olympic movement began in the late 19th century, when access to information was the domain of a privileged few. This fact has been recognized, and there are efforts to address the imbalance through the production of low-cost machines that allow underserved populations without the necessary economic resources to gain access to the Internet.

A looming issue is a social one. Governments all over the world took note of the “Jasmine Revolution” in Tunisia and the events in Tahrir Square in Egypt, and of the role that Web 2.0 applications played in mobilizing the population to overthrow the political establishment. In the world’s most populous country, the two most globally accessed websites cannot be reached at all. So, in a very real sense, we could be headed toward a world of two Internets: one in which the flow of information is free and unfettered, and another in which access to information resources is tightly controlled or restricted to what the government believes to be “politically acceptable” (7). In the West, the Internet has played a role in self-censorship, resulting in societal fragmentation and polarization insofar as people tend to seek out and read only information that reinforces their points of view. If the ability to share information is deemed a strength, impediments to the free flow of information can only be deemed a detriment to a future of shared human values.

The last issue is technical. Computers as we know them – those bulky desktop machines and even portable laptops – are going away. What will occur in the future is a proliferation of smaller devices, such as tablet computers, iPhones, and Android phones, that provide access to the Internet but store the information they generate on the Internet itself (also called the cloud). However, all of these devices require wireless connectivity, and the amount of electromagnetic spectrum through which these connections are made is a finite resource. In June 2009, the U.S. Government took back the portion of the electromagnetic spectrum through which analog television signals were broadcast. This spectrum was subsequently auctioned off to telecommunication providers and others, such as Google. But the fact remains that in the not-too-distant future this bandwidth will also be exhausted. All of this is setting the stage for a time in which data consumption will be metered like any other utility and subject to the laws of supply and demand (3). Thus, if the digital divide was created by economic conditions, the situation can be exacerbated by metered Internet access.

The solution will be found partly in technical measures, such as content providers better streamlining their services, and partly in better means of gaining access, such as “twisting” wireless signals to carry more data.

### Conclusions

Information technology has unquestionably changed human society in ways that could scarcely have been imagined. From early experiments in the 1960s to today, the Internet, as embodied in the web, has grown to more than 171 million web hosts. Assuming an average of 100 pages per website (the Academy website has more than 800 pages) yields an estimated 17.1 billion pages of web content, the vast majority of which can be accessed by anyone. Research shows that the Internet, excluding the deep web, is growing by more than 10 million new static pages every day (6). Thus, the Internet spans virtually the entire gamut of human existence and can be a powerful medium for conveying humanistic ideas. It has provided a vehicle that can educate and entertain us and can serve to make society more cohesive. In so doing, it has created an environment for public discussion unequaled in human history; at the same time, it can also serve to isolate us from one another. People can immerse themselves in an environment where the virtual becomes reality and normal communication with others is slowly lost. In any case, the evolution of the Internet has brought about a democratization of media content and has created an environment in which all can participate. It is, as the title of a popular novel suggests, “a brave new world.”

### Applications in Sport

On our way to Web 3.0, it is critical that we participate in this powerful medium and spread humanistic ideas and Olympic values across the world. The Internet has provided a vehicle that can educate and entertain us and can serve to make society more cohesive. However, despite the potential to elevate human discourse, challenges remain, such as the digital divide that prevents much of the world’s population from accessing the Internet, access tightly controlled or restricted by some governments, and technical obstacles that limit wireless connectivity. In any case, the evolution of the Internet has brought about an unprecedented democratization of media content and has created an environment in which all can participate and make a difference.

### References

1. Davis, P. (2010). Here Comes Everything: A Review of Clay Shirky’s Cognitive Surplus. Shareable: Work and Enterprise. <http://shareable.net/blog/here-comes-everything-a-review-of-clay-shirky%E2%80%99s-cognitive-surplus>. (13 July, 2010).

2. Harper, A. (2007). 19th Century Magazine – An Amazing Source of Public Domain Information. Ezinearticles. <http://ezinearticles.com/?19th-Century-Magazines—An-Amazing-Source-of-Public-Domain-Information&id=762208>. (3 October, 2007).

3. Gruman, G. and Kaneshige, T. (2008) Is Our Internet Future in Trouble? InfoWorld. <http://www.infoworld.com/d/networking/our-Internet-future-in-danger-715>. (11 November, 2008).

4. Guttmann, A. (1992). The Olympics: A History of the Modern Games (2nd ed.). Urbana: University of Illinois Press.

5. Mann, B. (2010). Olympians On Course Using Twitter. MarketWatch Blogs. <http://blogs.marketwatch.com/vancouverolympics/2010/02/10/olympians-on-course-using-twitter/> (10 February, 2010).

6. Metamend. (2011). How Big is the Internet? Metamend. <www.metamend.com/Internet-growth.html>. (14 April, 2011).

7. McMahon, R.; Bennett, I. (2011). U.S. Internet Providers and the Great Firewall of China. Council on Foreign Relations. <http://www.cfr.org/china/us-Internet-providers-great-firewall-china/p9856>. (23 February, 2011)

8. Slashdot. (2011). Spam Drops 1/3 After Rustock Botnet Gets Crushed. Slashdot IT Blog. <http://it.slashdot.org/story/11/03/29/1516241/Spam-Drops-13-After-Rustock-Botnet-Gets-Crushed>. (29 March, 2011).

9. Wikipedia (2011). Wikipedia. Wikipedia. <http://en.wikipedia.org/wiki/Wikipedia>. (24 March, 2011).

10. YouTube (2011). About YouTube. YouTube. <http://www.youtube.com/t/about_youtube>. (23 March, 2011).

### Corresponding Author
T.J. Rosandich, Ed.D
One Academy Drive
Daphne, Ala., 36526
<vicepres@ussa.edu>
251-626-3303

### Author Bio
Dr. T.J. Rosandich serves as Vice President for the United States Sports Academy, where he earned both his master’s and doctoral degrees. In addition to having oversight responsibility for the Academy’s administrative and financial functions, he also chairs the Technology Committee and is responsible for international programs. Dr. Rosandich rejoined the staff at the Academy’s main campus in 1994 after spending nine years in Saudi Arabia, where he was general manager of Saudi American Sports.

*Information Technology and Sports: Looking Toward Web 3.0* | Published May 3rd, 2011 | Contemporary Sports Issues, Sports Facilities, Sports Management

Raising Awareness of the Severity of Concussions

### Abstract

Concussions have always been part of physical contact sports, but with athletes becoming bigger and stronger, more must be done to raise awareness of the severity of concussions and of what can happen later if athletes are not given adequate time to recover. The National Football League has already regulated how long a player must sit out after sustaining a concussion and has begun fining athletes who deliberately deliver helmet-to-helmet hits to opposing players. The National Collegiate Athletic Association has introduced neurological testing to track a concussed athlete's progress and has revised its guidelines to prohibit same-day return to play and to require mandatory check-ups. High schools, however, have very few regulations to follow. A concussion is the same whether it happens to a professional player or a high school player, so why do professional players take precedence over high school athletes? Changes need to be made so that all athletes are cared for.

**Key Words:** concussions, helmet-to-helmet contact, National Football League, National Collegiate Athletic Association, neurological testing

### Introduction

Owen Thomas, a junior lineman for the University of Pennsylvania; Andre Waters, a former Philadelphia Eagles safety; Chris Henry, a Cincinnati Bengals wide receiver; and Chris Benoit, a professional wrestler: all were successful athletes, but that changed after countless blows to the head. They, along with many others, were diagnosed with Chronic Traumatic Encephalopathy (CTE), which, according to the Center for the Study of Traumatic Encephalopathy, is a progressive degenerative disease of the brain found in athletes and others with a history of repetitive concussions. The brain degeneration is associated with memory loss, confusion, impaired judgment, paranoia, impulse control problems, aggression, depression, and, eventually, progressive dementia (2). After death, brain tissue from each of these four athletes was examined, and each showed evidence of CTE.

Helmet-to-helmet hits are becoming more aggressive; consider the hit Kevin Everett sustained in 2007, the hit Josh Cribbs received from James Harrison, and the memorable hit involving Eric LeGrand that left him paralyzed from the neck down. In response, the National Football League (NFL) and the National Collegiate Athletic Association (NCAA) have recently implemented rules to protect players from injuries caused by such hits, but what about high school athletes? The University Interscholastic League (UIL), the governing body of high school athletics in Texas, has started to change the policies and guidelines currently being followed, but that is not enough.

#### National Football League

The NFL's new guidelines provide more specificity in making return-to-play decisions. They advise that a player who suffers a concussion should not return to play or practice on the same day if he shows any signs or symptoms of a concussion outlined in the return-to-play statement. They further state that the player should not return until he has completed neurological and neuropsychological testing and has been cleared by both the team physician and an independent neurological consultant (1). The guidelines also specify that an athlete who experiences loss of consciousness, confusion, gaps in memory, persistent dizziness, headache, nausea, vomiting, or any other persistent signs or symptoms of concussion should be removed from all activities (1).

#### National Collegiate Athletic Association

According to the NCAA, a concussion is a brain injury that may be caused by a blow to the head, face, neck, or elsewhere on the body with an “impulsive” force transmitted to the head (9). An athlete does not have to lose consciousness for a concussion to occur, but there are two things a coach and athlete need to watch for: a forceful blow to the head or body that results in rapid movement of the head, and any change in the student-athlete's behavior, thinking, or physical functioning. Signs and symptoms observed by coaching staff and student-athletes include the student-athlete appearing dazed and confused, forgetting plays, being confused about assignments, having a headache, feeling nauseated, and being sensitive to light and noise (9).

The NCAA committee responsible for recommending rules and policies has revised the concussion-management guidelines in the NCAA Medicine Handbook that all sports follow. The revisions emphasize that a student-athlete with significant symptoms should not return to play the same day, and that if symptoms persist, the athlete should not participate until cleared by a physician (3).

The NCAA wants all coaching staff and student-athletes to be fully aware of the severity of concussions. To that end, it has produced fact sheets for both groups, which recommend that athletes not hide a concussion but instead tell the athletic trainer or coach so they can receive proper treatment and take time to recover. Like every other injury, a concussion needs time to heal; repeated concussions can cause permanent brain damage and even death (9).

At Tarleton State University, located in Stephenville, Texas, neuropsychological testing is conducted using ImPACT, which provides computerized neurocognitive assessment tools and services used by coaches, athletic trainers, doctors, and other health professionals to help determine whether an athlete is able to return to play after suffering a concussion (6). Athletes first take the test to establish a baseline: they provide demographic information, health history, and current symptoms, then complete the neuropsychological test itself, which measures attention span, working memory, sustained and selective attention time, response variability, non-verbal problem solving, and reaction time across six modules labeled Word Memory, Design Memory, X's and O's, Symbol Matching, Color Matching, and Three Letter Memory. An injury report and the ImPACT test scores follow (6). ImPACT is used by the U.S. Army, professional teams, sports medicine centers, neuropsychology clinics, doctors, colleges, high schools, and club teams across the United States, in Canada, and internationally. Tarleton State University also requires full participation of its athletes by informing them about concussions and having them sign an injury acknowledgement form stating that they will be active participants in their own healthcare. Tarleton has likewise made the academic department aware of the severity of concussions by producing information sheets describing the signs and symptoms, how a person recovers, and what a person with a concussion should and should not do.

#### High School

According to USA Today, only Texas, Oregon, and Washington have enacted laws, all since 2007, that meaningfully tackle the issue. Oregon and Texas require athletes to be removed from play on the day of the injury, while Washington gives coaches responsibility for removal (12). Even so, the UIL still allows an athlete to return to play the same day if the athlete has not lost consciousness and concussion symptoms resolve within 15 minutes; and, like its heat guidelines, its concussion protocol is merely a set of recommendations and is not enforced. According to The Dallas Morning News, 53 percent of public schools in Texas and about 93 percent of private schools do not have a full-time certified trainer on staff, and 33 percent of public schools and 87 percent of private schools do not have weekly access to a certified trainer (4).

### Conclusion

Awareness of concussions has started to make its way to the top. According to the Fort Worth Star-Telegram, the UIL and state education commissioners are currently working to approve a rule under which “Texas public high school athletes who get a concussion wouldn't return to play until the next day, at the earliest, and a licensed healthcare professional would have to approve any return to play” (7).

Given the number of athletes in public and private schools in Texas, and across the United States, why has the issue of concussions not been dealt with before now? For fear of losing playing time, fewer occurrences are reported, but the long-term effects need to be stressed to all student-athletes. Not only athletes, but also coaches, athletic trainers, and parents need to be informed of the side effects that can occur if a concussion goes unreported. Making testing through concussion-based programs such as ImPACT mandatory could be the first step in raising awareness and helping injured athletes get adequate time to recover.

### Applications in Sports

Everyone involved in contact sports, including coaches, athletic trainers, athletes, and parents, needs to understand the severity of concussions. Many studies have shown what can happen when athletes are not given adequate time to heal after a concussion, yet compared with the professional ranks, little is being done at the high school level to support these recovery periods. Parents want to know their child is being cared for, and coaches need guidelines to ensure their athletes make a complete recovery; following the professionals' lead and updating concussion guidelines would help ensure that everyone takes the appropriate steps when a high school athlete sustains a concussion.

### Acknowledgements

I would like to thank Dr. Kayla Peak, the Director of the Graduate Program at Tarleton State University, for assisting in the development of this article.

### References

1. (2010). NFL issues stricter guidelines for returning to play following concussion. E-Journal of The Sports Digest. Retrieved from http://www.thesportdigest.com/

2. Center for the Study of Traumatic Encephalopathy, About CTE. (n.d.) What is CTE. Retrieved from http://www.bu.edu/cste/

3. Copeland, Jack. (2009). Safeguard committee acts on concussion-management measures. Retrieved from National Collegiate Athletic Association website: http://www.ncaa.org

4. George, Brandon. (2010, August 1). Hidden dangers: concussions in high school sports. The Dallas Morning News. Retrieved from http://www.dallasnews.com/sharedcontent/dws/spt/stories

5. George, Brandon. (2010, August 2). Texas’ UIL falls behind on concussion policy. The Dallas Morning News. Retrieved from http://www.dallasnews.com/sharedcontent/dws/spt/stories

6. ImPACT-Testing and Computerized Neurocognitive Assessment Tools, About ImPACT. (n.d.) Overview and Features of the ImPACT Test. Retrieved from http://impacttest.com/

7. McCrea, Michael, Hammeke, Thomas, Olsen, Gary, Leo, Peter, & Guskiewicz, Kevin. (2004). Unreported concussions in high school football players. Clinical Journal of Sport Medicine, 14(1), 13-17. Retrieved from http://journals.lww.com/cjsportsmed

8. NCAA, Student-Athlete Experience, Student-Athlete Well-being, Concussions. (n.d.). 23 Sports Specific Poster. Retrieved from http://www.ncaa.org

9. NCAA, Student-Athlete Experience, Student-Athlete Well-being, Concussions. (n.d.). Fact Sheet for Coaches. Retrieved from http://www.ncaa.org

10. NCAA. Student-Athlete Experience, Student-Athlete Well-being, Concussions. (n.d.). Fact Sheet for Student-Athletes. Retrieved from http://www.ncaa.org

11. Schwarz, Alan. (2010, September 13). Suicide reveals signs of a disease seen in the N.F.L. The New York Times. Retrieved from http://nytimes.com

12. Tumulty, Brian. (2010, May 20). Study highlights frequency of concussions in high school athletes. Retrieved from http://www.usatoday.com

### Corresponding Author
Lindsey Neumann
445 Oak Springs Drive
Seguin, Texas 78155

<lindseyneumann@hotmail.com> 830-305-4312

### Author Bio
Lindsey Neumann is a graduate student studying Kinesiology at Tarleton State University in Stephenville, Texas.

Published April 19th, 2011 | Contemporary Sports Issues, Sports Coaching, Sports Management, Sports Studies and Sports Psychology

The Importance of Driving Distance and Driving Accuracy on the PGA and Champions Tours

### Abstract

Whether driving distance or driving accuracy is more important to a golfer's overall level of performance has long been debated. No conclusive answer has been found despite the efforts of numerous researchers who have investigated the relative importance of these two shot-making measures alongside others such as greens-in-regulation and putting average. There are various reasons, many methodological in nature, why this question has gone unanswered for so many years. However, the results in this paper, based on data from the 2006-2009 seasons of the PGA and Champions Tours and a new methodological approach, indicate that the relative importance of driving distance and driving accuracy depends upon both the type of hole (Par 4 versus Par 5) and the age of the golfer. For younger PGA Tour members, driving accuracy was more important than driving distance on Par 4 holes, but the opposite was true on Par 5 holes. For older Champions Tour members, driving distance was more important than driving accuracy on both Par 4 and Par 5 holes. Additional analyses revealed that the quality of the drive, in terms of both distance and accuracy, was relatively more important to a golfer's performance on the Champions Tour than on the PGA Tour.

**Key Words:** golf, driving distance, driving accuracy, importance, performance

### Introduction

Which is more important to a golfer's success: how far they drive the ball, or how accurate they are with their drive? Past attempts to answer this age-old question have been unsuccessful for a variety of reasons, including flawed methodological procedures and the failure of researchers to consider that the relative importance of driving distance and driving accuracy might depend upon a combination of factors. The literature contains numerous studies examining the extent to which driving distance and driving accuracy, along with other shot-making skills such as greens-in-regulation, putting average, and sand saves, were correlated with a golfer's overall level of performance. Consistently, greens-in-regulation and putting average were found to be more highly correlated with scoring average and total earnings than either driving distance or driving accuracy (3,5,10). Further, in many instances, neither driving distance nor driving accuracy was statistically significant. These past analyses were typically based on the performance of PGA Tour members, although members of other professional tours and amateur golfers have also been analyzed (2,6,7,8,11).

There are a number of methodological issues that need to be examined when attempting to evaluate the relative importance of driving distance and driving accuracy, especially when these two measures are considered in conjunction with other predictor measures. Failure to do so can result in faulty conclusions being made. In this paper, the distance versus accuracy question is examined by conducting separate analyses for members of the PGA Tour and the Champions Tour.

### Methods

#### Populations

The populations of interest in this study are members of the PGA Tour and the Champions Tour over the last four tour seasons, 2006-2009. The latter tour is for golfers who are at least 50 years of age. Data for both tours came from the PGA Tour website (www.pgatour.com).

#### Dependent Variables

Scoring average has frequently been used as an overall performance measure in analyses examining the effects of various shot-making skills. However, in the present study, which compares the relative importance of driving distance and driving accuracy, scoring average should not be used as the dependent variable. The reason is that scoring average is based on all 18 holes in a round, and golfers typically use a driver only on Par 4 and Par 5 holes, not on Par 3 holes. The fact that there may be as many as five or six Par 3 holes in a round makes overall scoring average an inappropriate performance measure for the purposes of this study.

The total earnings of a professional golfer on a particular tour is another measure that has been used as the dependent variable. Like scoring average, total earnings has problems in the present context. The first is that tournaments on the various professional golf tours do not offer the same amount of prize money; as a result, total earnings is weighted more heavily toward performance in the tournaments with the largest purses than toward performance across all tournaments entered. The second is that total earnings does not account for the number of tournaments played in a season, so low total earnings may reflect either poor performances or a small number of tournaments played.

Due to the problems associated with both scoring average and total earnings, it was decided to use two different dependent variable measures for determining the relative importance of driving distance and driving accuracy. These two measures are (i) scoring average obtained only on Par 4 holes and (ii) scoring average obtained only on Par 5 holes. By having these two distinct measures, it is possible to determine whether the relative importance of driving distance and driving accuracy varies by type of hole. Further, the use of these two measures also eliminates the previously discussed problems associated with both scoring average based on 18 holes and with total earnings.
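To make the two dependent measures concrete, here is a minimal Python sketch of computing a scoring average restricted to holes of a given par. The hole-level data is hypothetical; the study itself used season averages taken from www.pgatour.com.

```python
def scoring_average_by_par(holes, par):
    """Average strokes over all holes of the given par.

    holes: list of (par, strokes) tuples for every hole played.
    """
    scores = [strokes for p, strokes in holes if p == par]
    return sum(scores) / len(scores)

# A hypothetical handful of holes as (par, strokes):
round_holes = [(4, 4), (4, 5), (5, 4), (3, 3), (4, 4), (5, 5)]
par4_avg = scoring_average_by_par(round_holes, 4)  # (4 + 5 + 4) / 3
par5_avg = scoring_average_by_par(round_holes, 5)  # (4 + 5) / 2
```

Note that the Par 3 hole is excluded from both measures, which is exactly why these restricted averages avoid the contamination problem described above.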

#### Independent Variables

Besides driving distance and driving accuracy, there are other variables or shot-making skills that have been commonly used in analyses that sought to determine the key factors that are related to a golfer’s overall performance. Three of the most frequently used measures will be used in this study. They are:

– **Greens-in-regulation:** The percentage of times that a golfer is able to land his or her ball on the green in two strokes on a Par 4 hole and in three or fewer strokes on a Par 5 hole.

– **Putting average:** The average number of putts per greens-in-regulation.

– **Sand saves:** The percentage of times a golfer takes two or fewer shots to put their ball in the hole from a greenside sand bunker.

#### Analysis

Descriptive statistics were obtained and regression analyses were conducted to determine the relative importance of driving distance and driving accuracy. However, a potential problem exists when highly correlated predictor variables are used in a regression analysis: multicollinearity, a problem often present in studies that seek to determine the relative importance of various shot-making skills. For example, Heiny (5) did not explicitly consider the effects of multicollinearity when he concluded, using data from the 1992-2003 PGA Tour seasons, that the two driving measures were far less important to a golfer's overall performance than either greens-in-regulation or putting average. Multicollinearity arose because driving distance and driving accuracy were both highly correlated with greens-in-regulation and because all three measures were used in the regression model; as a result, the relative importance of the two driving measures could not be accurately determined. Since the focus of this study is on driving distance and driving accuracy, primary attention will be placed on these two measures.
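The multicollinearity concern can be checked directly with variance inflation factors, where VIF_j = 1/(1 - R2_j) and R2_j comes from regressing predictor j on the remaining predictors. A minimal Python sketch on synthetic data (the variable construction and the 5-10 warning threshold are illustrative assumptions, not the study's):

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of predictor matrix X.

    VIF_j = 1 / (1 - R2_j), where R2_j is from regressing column j on
    the remaining columns. A VIF near 1 means little collinearity;
    values above roughly 5-10 are a common warning sign.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    factors = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        factors.append(1.0 / (1.0 - r2))
    return factors

# Illustrative data shaped like Table 1's PGA figures, with a
# greens-in-regulation proxy built largely from the two driving
# measures, so its VIF is inflated, as in the Heiny example.
rng = np.random.default_rng(7)
dist = rng.normal(289.0, 8.6, 200)
acc = 63.0 - 0.3 * (dist - 289.0) + rng.normal(0, 4, 200)
gir = 0.05 * dist + 0.4 * acc + rng.normal(0, 0.5, 200)
vifs = vif(np.column_stack([dist, acc, gir]))
```

Dropping the nearly redundant predictor, or interpreting only the two driving measures as this study does, is the standard way around the inflated standard errors that such VIF values signal.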

### Results

#### Descriptive Statistics

Descriptive statistics for scoring average, driving distance, and driving accuracy for members of each tour during the 2006-2009 seasons are given in Table 1. The scoring averages on both Par 4 and Par 5 holes for each tour remained fairly constant over this period. On the shorter Par 4 holes, the average score on both tours was virtually identical and slightly over par. On Par 5 holes, the average score was under par on both tours, but Champions Tour golfers had a slightly higher stroke average than their PGA Tour counterparts.

**Table 1**
Means and Standard Deviations for Scoring Average, Average Driving Distance and Driving Accuracy Percentage for Golfers on the PGA and Champions Tours: 2006-2009

| Tour / variable | 2006 Mean (SD) | 2007 Mean (SD) | 2008 Mean (SD) | 2009 Mean (SD) |
|---|---|---|---|---|
| **PGA Tour** | | | | |
| Scoring average on Par 4 holes | 4.06 (0.04) | 4.07 (0.04) | 4.07 (0.03) | 4.06 (0.04) |
| Scoring average on Par 5 holes | 4.68 (0.07) | 4.69 (0.06) | 4.70 (0.07) | 4.69 (0.07) |
| Average driving distance (yards) | 289.5 (8.7) | 289.1 (8.6) | 287.6 (8.6) | 288.1 (8.6) |
| (n) | (196) | (196) | (197) | (202) |
| Driving accuracy (%) | 63.4 (5.4) | 63.5 (5.2) | 63.4 (5.5) | 62.3 (5.5) |
| **Champions Tour** | | | | |
| Scoring average on Par 4 holes | 4.06 (0.06) | 4.05 (0.06) | 4.05 (0.07) | 4.06 (0.07) |
| Scoring average on Par 5 holes | 4.73 (0.10) | 4.71 (0.09) | 4.73 (0.08) | 4.72 (0.11) |
| Average driving distance (yards) | 270.2 (9.4) | 273.7 (9.3) | 272.6 (8.9) | 277.0 (10.5) |
| Driving accuracy (%) | 71.4 (5.0) | 69.2 (5.3) | 69.1 (5.8) | 68.6 (5.4) |
| (n) | (80) | (77) | (75) | (81) |

During the four-year period, the average driving distance on the PGA Tour was between 287.6 and 289.5 yards. The big jump in average driving distance on the PGA Tour came between 1995 and 2003, when a spring-like effect in drivers was permitted. This development, together with a new multi-layered ball, allowed golfers to launch balls higher and with less spin, creating optimum launch conditions and longer drives; average driving distance has since leveled off on the PGA Tour. On the Champions Tour, however, the average drive increased from 270.2 yards in 2006 to 277.0 yards in 2009, due in part to a number of older tour members retiring and being replaced by longer-hitting younger golfers. In 2009, the differential between the PGA Tour and the Champions Tour in average driving distance was just 11.1 yards, compared to 19.3 yards in 2006.

The driving accuracy percentages were in a narrower range on the PGA Tour than on the Champions Tour. In addition, the Champions Tour accuracy percentages exhibited a steady decline over the four-year period and, on each tour, the percentage was at its lowest level in 2009. In terms of the variability of the two scoring averages as measured by the standard deviation, there was considerably more variability in the average scores on both Par 4 and Par 5 holes for members of the Champions Tour than for members of the PGA Tour. Variability was also greater on the Champions Tour for both driving distance and driving accuracy, but those differentials were not as large as they were for the two scoring average measures.

A moderately strong negative correlation existed between driving distance and driving accuracy for golfers on both tours during the 2006-2009 seasons. These correlations, which were all significant at the .01 level, are given in Table 2. The relationship found in this study was similar to that obtained by Wiseman et al. (12) for members of the PGA Tour during the 1990-2004 seasons. The results also indicate that over the last two years, the relationship weakened for members of the Champions Tour.

**Table 2**
Correlation between Driving Distance and Driving Accuracy on the PGA and Champions Tours: 2006-2009

| Tour | 2006 | 2007 | 2008 | 2009 |
|---|---|---|---|---|
| PGA Tour | -.59* | -.64* | -.61* | -.57* |
| Champions Tour | -.53* | -.52* | -.47* | -.37* |

∗ Correlation is significantly different from zero (p < .01) in that year.
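The significance flags in Table 2 can be reproduced with the standard t-test for a Pearson correlation. A small Python sketch, using the 2006 PGA Tour values r = -.59 and n = 196 from Tables 1 and 2:

```python
import math

def t_for_r(r, n):
    """t statistic for testing a Pearson correlation against zero.

    t = r * sqrt(n - 2) / sqrt(1 - r^2), with n - 2 degrees of freedom.
    """
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

# 2006 PGA Tour: r = -.59 with n = 196 golfers.
t = t_for_r(-0.59, 196)
# |t| is roughly 10, far beyond the two-tailed .01 critical value
# (about 2.6 at 194 degrees of freedom), so the correlation is
# significant at the .01 level, matching the table footnote.
```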

For each tour, a golfer’s average driving distance and driving accuracy percentages were correlated with their scoring average on Par 4 and Par 5 holes. The obtained correlations are presented in Table 3. Most signs are negative, as expected, since long drives and a high driving accuracy percentage are associated with good performance and low scores. However, there were distinct differences in the correlations depending upon the tour and the type of hole.

**Table 3**
Driving Distance and Driving Accuracy Correlations with Scoring Average on Par 4 and Par 5 Holes for the PGA and Champions Tours: 2006-2009

| Tour / type of hole | 2006 Distance | 2006 Accuracy | 2007 Distance | 2007 Accuracy | 2008 Distance | 2008 Accuracy | 2009 Distance | 2009 Accuracy |
|---|---|---|---|---|---|---|---|---|
| **PGA** | | | | | | | | |
| Par 4 | -.06 | -.36* | -.00 | -.32* | -.06 | -.33* | -.12 | -.37* |
| Par 5 | -.37* | .03 | -.36* | -.17** | -.39* | .14 | -.43 | .12 |
| **Champions** | | | | | | | | |
| Par 4 | -.49* | -.14 | -.49* | -.12 | -.38* | -.29* | -.40* | -.30* |
| Par 5 | -.62* | .13 | -.60* | .02 | -.46* | .01 | -.54* | -.08 |

∗ Correlation was significantly different from zero for that year (p < .01).
∗∗ Correlation was significantly different from zero for that year (p < .05).

On Par 4 holes, the correlation between driving distance and scoring average for golfers on the Champions Tour was much stronger than for golfers on the PGA Tour. These correlations were between r = -.38 and r = -.49 for Champions Tour members, but only between r = -.00 and r = -.12 for PGA Tour members. These latter correlations indicated that there was virtually no relationship between driving distance and scoring average on Par 4 holes for PGA Tour golfers. The opposite was true for driving accuracy: the correlation between driving accuracy and scoring average was stronger on the PGA Tour than on the Champions Tour. Correlations between driving accuracy and scoring average were between r = -.32 and r = -.37 for golfers on the PGA Tour and between r = -.12 and r = -.30 for golfers on the Champions Tour. Over the last two years, the relationship between driving accuracy and scoring average on the Champions Tour strengthened. These results suggest that on Par 4 holes, driving distance was far more important than driving accuracy for Champions Tour golfers, while driving accuracy was far more important than driving distance for PGA Tour golfers.

On Par 5 holes, driving distance was more highly correlated with scoring average than was driving accuracy on both tours. The correlations were stronger, however, on the Champions Tour, ranging between r = -.46 and r = -.62; on the PGA Tour, they were between r = -.36 and r = -.43. For driving accuracy, the correlations were weak on both tours. These results suggest that on Par 5 holes, driving distance was more important than driving accuracy for players on both the PGA Tour and the Champions Tour.

#### Regression Analyses

Regression analyses were conducted to determine the extent to which driving distance and driving accuracy taken together could explain the variability in scoring average on Par 4 and Par 5 holes. A large R2 value would indicate the drive was a key factor in terms of explaining overall performance, while a small R2 value would indicate the opposite. Results are shown in Table 4.

**Table 4**
Estimated Linear Regression Equation Coefficients and R2 Values when Driving Distance and Driving Accuracy were used to Predict Scoring Average

| Tour / type of hole / year | b0 (constant) | b1 (driving distance) | b2 (driving accuracy) | R2 |
|---|---|---|---|---|
| **PGA, Par 4** | | | | |
| 2009 | 5.076 | -.0024* | -.0050* | .30 |
| 2008 | 4.469 | -.0008** | -.0026* | .14 |
| 2007 | 4.716 | -.0014* | -.0037* | .18 |
| 2006 | 4.826 | -.0017* | -.0041* | .24 |
| **PGA, Par 5** | | | | |
| 2009 | 6.176 | -.0046* | -.0026** | .21 |
| 2008 | 5.932 | -.0038* | -.0019 | .16 |
| 2007 | 5.665 | -.0031* | -.0011 | .14 |
| 2006 | 6.081 | -.0041* | -.0035* | .19 |
| **Champions, Par 4** | | | | |
| 2009 | 5.710 | -.0042* | -.0071* | .39 |
| 2008 | 5.904 | -.0050* | -.0071 | .42 |
| 2007 | 5.925 | -.0052* | -.0063* | .44 |
| 2006 | 5.998 | -.0053* | -.0070* | .45 |
| **Champions, Par 5** | | | | |
| 2009 | 7.172 | -.0072* | -.0068* | .38 |
| 2008 | 6.493 | -.0055* | -.0038** | .26 |
| 2007 | 7.372 | -.0080* | -.0069* | .47 |
| 2006 | 7.238 | -.0080* | -.0054** | .44 |

∗ Estimated regression coefficient is significantly different from zero (p < .01).
∗∗ Estimated regression coefficient is significantly different from zero (p < .05).

On the Champions Tour, the value of R2 ranged between .38 and .47 during the 2006-2009 seasons for each type of hole, except for Par 5 holes in 2008, when R2 = .26. The regression coefficients for driving distance and driving accuracy were all significant at the .01 level, except in 2006 and 2008, when the coefficient associated with driving accuracy was significant at the .05 level. Results on the PGA Tour differed in that far less of the variability in scoring average could be explained by the drive alone. R2 values ranged between .14 and .24 across the four years and both types of hole, except on Par 4 holes in 2009, when R2 = .30. The regression coefficient for driving distance was significant at the .01 level in each year, except in 2008, when the significance level was .05. The regression coefficient for driving accuracy on Par 4 holes was significant at the .01 level in each year, but on Par 5 holes there were two years in which the coefficient was not statistically significant.
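The model in Table 4 is ordinary least squares with two predictors. A minimal Python sketch on synthetic data; the generating coefficients are chosen near the 2009 Champions Tour Par 4 row of Table 4 purely for illustration, and the predictors are drawn independently here, unlike the correlated real data:

```python
import numpy as np

def fit_drive_model(distance, accuracy, scoring_avg):
    """OLS fit of scoring_avg = b0 + b1*distance + b2*accuracy.

    Returns the coefficient vector (b0, b1, b2) and the R2 of the fit.
    """
    X = np.column_stack([np.ones(len(distance)), distance, accuracy])
    beta, *_ = np.linalg.lstsq(X, scoring_avg, rcond=None)
    resid = scoring_avg - X @ beta
    r2 = 1.0 - resid.var() / np.var(scoring_avg)
    return beta, r2

# Synthetic golfers whose Par 4 scores follow b0 = 5.710,
# b1 = -.0042, b2 = -.0071 plus noise; means and spreads are
# shaped like the 2009 Champions Tour rows of Table 1.
rng = np.random.default_rng(1)
distance = rng.normal(277.0, 10.5, 81)
accuracy = rng.normal(68.6, 5.4, 81)
scoring = 5.710 - 0.0042 * distance - 0.0071 * accuracy + rng.normal(0, 0.05, 81)
(b0, b1, b2), r2 = fit_drive_model(distance, accuracy, scoring)
```

With enough golfers and modest noise, the fitted b1 and b2 land close to the generating values, which is the sense in which the Table 4 coefficients estimate the per-yard and per-percentage-point effects of the drive on scoring average.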

Additional regression analyses were conducted to determine the extent to which three other variables (greens-in-regulation, putting average, and sand saves) could explain the variability in scoring average that could not be explained by driving distance and driving accuracy. The R2 values presented in Table 5 indicate that the five measures used together could explain more of the variability in scoring average on the Champions Tour than on the PGA Tour: R2 values ranged from .69 to .89 on the Champions Tour and from .41 to .75 on the PGA Tour.

**Table 5**
R² Values when Five Skill Measures were used to Predict Scoring Average on Par 4 and Par 5 Holes for the PGA and Champions Tours: 2006-2009∗

| Tour / type of hole | 2006 | 2007 | 2008 | 2009 |
|---------------------|------|------|------|------|
| PGA, Par 4          | .67  | .66  | .61  | .75  |
| PGA, Par 5          | .53  | .41  | .48  | .56  |
| Champions, Par 4    | .88  | .88  | .89  | .78  |
| Champions, Par 5    | .78  | .80  | .69  | .78  |

∗ The five measures were Driving Distance, Driving Accuracy, Greens-in-Regulation, Putting Average, and Sand Saves.

**Table 6**
Proportion of Total Explained Variability in Scoring Average Directly Attributable to Driving Distance and Driving Accuracy on Par 4 and Par 5 Holes for the PGA and Champions Tours: 2006-2009

| Tour / type of hole | 2006             | 2007             | 2008             | 2009             |
|---------------------|------------------|------------------|------------------|------------------|
| PGA, Par 4          | (.24/.67) = .36∗ | (.18/.66) = .27  | (.14/.61) = .23  | (.30/.75) = .40  |
| PGA, Par 5          | (.19/.53) = .36  | (.14/.41) = .32  | (.16/.48) = .33  | (.21/.56) = .38  |
| Champions, Par 4    | (.45/.88) = .51  | (.44/.88) = .50  | (.42/.89) = .47  | (.39/.78) = .50  |
| Champions, Par 5    | (.44/.78) = .56  | (.47/.80) = .59  | (.26/.69) = .38  | (.38/.78) = .51  |

∗ Values obtained by dividing the R² values given in Table 4 by the corresponding R² values given in Table 5.

The ratios of the corresponding R² values in Tables 4 and 5 are given in Table 6. These ratios indicate the relative importance of the drive compared to the other three predictor measures: the higher the ratio, the greater the share of the variability in scoring average that could be explained by the two driving measures rather than by the three other predictors. As the table shows, the ratios are higher in every case for the Champions’ Tour than for the PGA Tour, indicating that the drive, relative to the other measures used, was more important for golfers on the Champions’ Tour than for golfers on the PGA Tour.
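The two-phase ratio calculation behind Table 6 can be reproduced directly from the reported R² values. A minimal sketch using the PGA Tour Par 4 figures from Tables 4 and 5:

```python
# R^2 from the drive-only regressions (Table 4) and from the five-measure
# regressions (Table 5), PGA Tour Par 4 holes, 2006-2009, as reported above.
r2_drive_only = {2006: .24, 2007: .18, 2008: .14, 2009: .30}
r2_all_five   = {2006: .67, 2007: .66, 2008: .61, 2009: .75}

# Share of the total explained variability attributable to the drive alone
drive_share = {yr: round(r2_drive_only[yr] / r2_all_five[yr], 2)
               for yr in r2_drive_only}
# drive_share -> {2006: 0.36, 2007: 0.27, 2008: 0.23, 2009: 0.4}
```

The same division applied to the other tour/hole combinations yields the remaining entries of Table 6.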

### Discussion

This study examined the relative importance of driving distance and driving accuracy on two professional golf tours from 2006-2009. Based upon independent analyses on Par 4 and Par 5 holes for each tour, the findings indicated that the relative importance of driving distance and driving accuracy varied by both tour and type of hole.
Other researchers have recently investigated the physical (1,9) and mental (4) effects of aging on the ability of professional golfers to compete at a high level. These studies described the declines that take place with aging as well as compensating offsets, for example, shorter but more accurate drives. In the present study, one possible explanation for the changing relative importance of driving distance relates to the physical changes that occur with age. Individuals lose strength and agility over time, which in golf frequently shows up as shorter but more accurate drives. For Champions’ Tour golfers, however, this improvement in driving accuracy is not enough to offset the loss in driving distance, which in turn results in higher scoring averages. On long Par 4 holes, a short drive for these players means fewer birdie opportunities because it is more difficult to reach the green in regulation. For PGA Tour golfers, a relatively short drive on a lengthy Par 4 hole is not necessarily an impediment to reaching the green in regulation, even if the tee shot does not come to rest on the fairway.
This study also demonstrated that the drive was relatively more important to a golfer’s overall performance than was previously thought based upon a number of similar studies. This increased level of relative importance could be attributed, in part, to the fact that in the present analysis, separate scoring averages on Par 4 and Par 5 holes were used rather than a single scoring average based upon all 18 holes. Additionally, by conducting the analysis in two phases, it was shown that approximately half of the total explained variability in scoring average on both Par 4 and Par 5 holes on the Champions’ Tour, and approximately one-third of the total explained variability in scoring average on the PGA Tour, could be directly attributed to the drive alone. These results highlight the need for careful attention to the performance measures that are used in future studies.

### Conclusion

This paper investigated whether driving distance or driving accuracy was more important to a golfer’s performance. The results indicated that the answer depended not only on the type of hole (Par 4 or Par 5), but also on the age of the golfer. For golfers aged 50 and over playing on the Champions’ Tour, driving distance was clearly the more important factor regardless of the type of hole. For golfers under 50 on the PGA Tour, however, driving accuracy was more important on Par 4 holes, while driving distance was more important on Par 5 holes. In addition, the investigation revealed that the quality of the drive, in terms of the combined effects of driving distance and driving accuracy, was more important to a golfer’s success on the Champions’ Tour than on the PGA Tour.

### Applications in Sport

This study is relevant to golf teaching professionals because instructors debate how much time golfers should spend practicing their driving technique. Traditionally, golfers have been told to spend less time on driving and more on other facets of the game. This study has shown that, except for younger professional golfers, the drive is very important in achieving lower scores.

### References

1. Baker, J., Deakin, J., Horton, S., & Pearce, W. (2007). Maintenance of Skilled Performance with Age: A Descriptive Examination of Professional Golfers. Journal of Aging and Physical Activity, 15, 299-316.

2. Callan, S. & Thomas, J. (2006). Performance in Amateur Golf: An Examination of NCAA Division I Golfers. The Sport Journal, 9, 3. Available online at: <http://www.thesportjournal.org/article/gender-skill-and-performance-amateur-golf-examination-ncaa-division-i-golfers/>.

3. Engelhardt, G.M. (1995). It’s not how you drive, it’s how you arrive: the myth. Perceptual and Motor Skills, 80, 1135-1138.

4. Fried, H.O. & Tauer, L.W. (2009). The Impact of Age on the Ability to Perform under Pressure: Golfers on the PGA Tour. Journal of Productivity Analysis. Available online at: <http://www.springerlink.com/content/337g8rv212w45423/?p=7d7abc1e32d744f3906e83014cf31f51&pi=4>.

5. Heiny, E. (2008). Today’s PGA Tour Pro: Long but Not so Straight. Chance, 21, 1, 10-21.

6. Moy, R.L. & Liaw, T. (1998). Determinants of golf tournament earnings. The American Economist, 42, 65-70.

7. Rishe, P. (2001). Differing Rates of Return to Performance. Journal of Sports Economics, 2, 285-296.

8. Shmanske, S. (2000). Gender, Skill and Earnings in Professional Golf. Journal of Sports Economics, 1, 385-400.

9. Tiruneh, G. (2010). Age and Winning Professional Golf Tournaments. Journal of Quantitative Analysis in Sports, 6, 1. Available online at: <http://www.bepress.com/jqas/vol6/iss1/5/>.

10. Wiseman, F. & Chatterjee, S. (2006). Comprehensive Analysis of Golf Performance on the PGA Tour: 1990-2004. Perceptual and Motor Skills, 102, 109-117.

11. Wiseman, F., Chatterjee, S., Wiseman, D., & Chatterjee, N. (1994). An Analysis of 1992 Performance Statistics for Players on the US PGA Tour, Senior PGA and LPGA Tours, in A. Cochran & M.R. Farrally (Eds.) Science and Golf II. Proceedings of the World Scientific Congress of Golf. London: E & FN Spon. Pp. 199-204.

12. Wiseman, F., Habibullah, M., & Yilmaz, M. (2007). A New Method for Ranking Total Driving Performance on the PGA Tour. The Sport Journal, 10, 1. Available online at: <http://www.thesportjournal.org/article/new-method-ranking-total-driving-performance-pga-tour>.

### Corresponding Author

**Frederick Wiseman, Ph.D**
202 Hayden Hall
College of Business Administration
Northeastern University
Boston, MA 02115
<f.wiseman@neu.edu>
(617) 373-4562

### Author Bios

#### Frederick Wiseman
Frederick Wiseman is Professor of Statistics at the Northeastern University College of Business Administration

#### Mohamed Habibullah
Mohamed Habibullah is a Lecturer in Statistics at the Northeastern University College of Business Administration

#### John Friar
John Friar is Executive Professor in Entrepreneurship and Innovation at the Northeastern University College of Business Administration

Published March 16, 2011, in Contemporary Sports Issues, Sports Exercise Science, Sports Management, Sports Studies and Sports Psychology.