The Role Of Evaluation In Prevention Of College Student Drinking Problems

II. Why Is It So Difficult?

There is a certain mystique surrounding evaluation, deriving perhaps from its use of technical analytic tools and statistical models to estimate program effects. In truth, however, the greatest barriers to successful evaluation are more mundane, if often subtle.

A. Poorly defined goals and objectives

Some may assume that when the problem is “obvious” (e.g., “student drinking”), the prevention goals and objectives are likewise obvious. In practice, of course, things are not so easy. There is a world of difference between an intervention designed to eliminate college student drinking and one designed to limit excessive consumption of alcohol. These differences are often obscured, however, especially when pre-packaged programs are adopted and the original designer’s intent may be unknown.

An assessment of an intervention’s effect, though, requires matching the outcome measures with the objectives of the program. Where the objectives are vague or contradictory, the evaluation is bound to be of little use. While it is possible in such situations to measure some dimensions of alcohol use and problems (e.g., prevalence of “binge drinking” or of hangovers), the evaluation is unlikely to inform program personnel of how best to improve the program’s effectiveness.

B. Poorly articulated program

Even where the intervention’s goals and objectives are clear, there may not be a well-described mechanism for achieving those goals. A well-designed program or intervention should be able to articulate the processes that lead from the program activities to the desired end point. Not only does this tell program staff what is central versus peripheral to the intervention; it informs the evaluation plan as well. To the extent that the evaluation design permits one to follow each link in the hypothesized chain of events from activity to outcome, the evaluation can show the program where the hypotheses break down.
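To make the idea of a hypothesized chain concrete, here is a minimal sketch in which every link is paired with a measurable indicator. The intervention, effects, and indicators are invented for illustration, not drawn from any particular program:

```python
# A hypothetical chain of effects ("logic model") for an invented
# server-training intervention. Each link carries an indicator an
# evaluation could measure, so that if the outcome does not move,
# the data show where the chain broke.

logic_model = [
    # (starting point, hypothesized effect, measurable indicator)
    ("server training at campus-area bars",
     "fewer sales to intoxicated patrons",
     "pseudo-patron purchase-attempt success rate"),
    ("fewer sales to intoxicated patrons",
     "lower peak blood alcohol levels",
     "self-reported drinks per drinking occasion"),
    ("lower peak blood alcohol levels",
     "fewer alcohol-related emergencies",
     "alcohol-involved emergency-care contacts"),
]

# The evaluation plan can be read straight off the model: one measure per link.
for cause, effect, indicator in logic_model:
    print(f"{cause} -> {effect}  [measure: {indicator}]")
```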

C. Fear and anxiety

For some program administrators and front-line personnel, evaluation poses a threat. This is especially likely where budgets are under close scrutiny or there have been well-publicized problems on campus that the programs are supposed to prevent. Such anxiety is understandable and needs to be addressed head on. The field of evaluation is fraught with examples of evaluation being used as a means (or excuse) for justifying unpopular actions. Likewise, evaluators have too often taken on the role of “hired gun,” working at a distance from program personnel with little appreciation for the program’s goals or means of achieving them. Finally, if all an evaluation can provide is a simple answer to whether the program has had an impact, it will not have served the program well, because it cannot suggest any ways in which the intervention could be improved. As discussed above, a good evaluation should be able to trace the sequence of events or effects that leads from the intervention to the desired outcomes and identify where the hypothesized linkages begin to break down (or where they can be strengthened). Seen this way, so-called “negative results” take on less pejorative connotations and instead contribute systematically to a communal body of experience that can serve all college campuses in their efforts to mount effective programs.

To the extent that evaluation activities can be built into program management with the visible support of upper-level management, fear and anxiety are likely to be minimal. At the risk of overstating the case, one might say that one-shot evaluations conducted by distant or disconnected researchers are unlikely to produce either enthusiasm among program personnel or useful guidance for prevention strategy.

D. Insufficient data

Too many people assume that evaluation data and student survey data are one and the same. As a result, interest in evaluation is crushed when there is insufficient funding to mount a survey or, perhaps worse, a poorly administered and under-funded student survey is conducted in the hope that it will provide useful data. Sometimes the best that can be hoped for is a one- or two-shot survey that is done competently but affords only a snapshot of student drinking at the time it was conducted.

To the extent that a prevention intervention is well defined, the student survey may well provide useful information. If the objective is to reduce the prevalence of “binge” drinking, a survey can include items to measure that prevalence. But one of the greatest limitations of survey data is that many of the serious negative consequences of drinking are too rare at the individual level to be caught in a typical survey. Yet at a sizable university these rare events will occur, and they produce considerable cause for concern.
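A back-of-the-envelope calculation shows why. The incidence rate, sample size, and enrollment below are hypothetical figures chosen only to illustrate the scale of the problem:

```python
# Why rare, serious events are effectively invisible in a typical student
# survey yet common enough campus-wide to matter. All figures hypothetical.

annual_incidence = 1 / 2000   # assume 1 serious event per 2,000 students per year
survey_sample = 500           # a typical single-campus survey
campus_enrollment = 20000     # a sizable university

expected_in_survey = annual_incidence * survey_sample
expected_on_campus = annual_incidence * campus_enrollment

print(f"Expected events captured by the survey: {expected_in_survey:.2f}")  # 0.25
print(f"Expected events on campus in a year:    {expected_on_campus:.0f}")  # 10
```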

Ideally, then, a college or university would have data collected at the time those events occurred. For example, it would be of great value to have a record of each instance in which a student is taken for urgent or emergency care, noting specifically whether alcohol (or other drugs) was involved. The same kind of data could be collected whenever campus police come into contact with students. If a direct reading by breathalyzer or similar means is not feasible, officers can make a judgment of alcohol involvement, as is currently practiced for non-fatal vehicle crashes.
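One way to capture such events as they occur is a small structured record. The field names and categories in this sketch are assumptions for illustration, not a prescribed reporting standard:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# A minimal sketch of an event-level incident record. Fields and category
# labels are hypothetical choices, not an established schema.

@dataclass
class AlcoholIncident:
    timestamp: datetime
    source: str                  # e.g., "emergency care", "campus police"
    alcohol_involved: str        # "yes", "no", or "judged" when no reading taken
    bac: Optional[float] = None  # breathalyzer reading, if one was feasible
    other_drugs: bool = False
    notes: str = ""

record = AlcoholIncident(
    timestamp=datetime(2005, 4, 16, 1, 30),
    source="campus police",
    alcohol_involved="judged",   # officer judgment, as in non-fatal crash reports
)
```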

In some cases the problem should probably be reframed as “unobtainable data”: useful information of some kind likely exists in any of several sub-units of a college or university but is kept in hard-copy records or buried among other information, and is thus practically unavailable for evaluation or monitoring purposes. The situation is improving as more offices automate their record entry and data management, but it remains important to ensure that alcohol involvement is recorded routinely and is available for aggregate reports.
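Once records of this kind are automated, an aggregate report reduces to a few lines of code. This sketch assumes a list of incident records shaped like the hypothetical one above:

```python
from collections import Counter

# A minimal sketch of an aggregate trend report over automated incident
# records, assuming objects with the (hypothetical) fields sketched above.

def monthly_alcohol_involvement(incidents):
    """Count alcohol-involved incidents by calendar month."""
    counts = Counter()
    for inc in incidents:
        if inc.alcohol_involved in ("yes", "judged"):
            counts[(inc.timestamp.year, inc.timestamp.month)] += 1
    return dict(sorted(counts.items()))

# Example usage, given a list named `records`:
# for (year, month), n in monthly_alcohol_involvement(records).items():
#     print(f"{year}-{month:02d}: {n} alcohol-involved incidents")
```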

E. Difficult to rule out other influences

The primary challenge for most evaluators is to minimize the possibility that any observed changes are due to some influence other than the intervention itself. As a simple example, having a comparison campus (where the intervention was not implemented) guards against the possibility that a change in the prevalence of binge drinking reflects a general trend among students rather than a specific program. Of course, things are never that simple, and a well-trained and experienced evaluator can readily list many threats to the validity of a hypothetical evaluation design (cf. Campbell and Stanley, 1966, for the classic summary of such threats). For such a specialist, the challenge in designing an evaluation is to strike an optimal balance between the priorities and resources of the “client” on the one hand and the anticipated confounding influences on the other.
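The comparison-campus logic can be expressed as a simple difference-in-differences calculation. The prevalence figures below are invented for illustration, and the estimate rests on the usual assumption that, absent the program, both campuses would have followed parallel trends:

```python
# A minimal difference-in-differences sketch for the comparison-campus
# design. All prevalence figures are hypothetical.

# Binge-drinking prevalence before and after the intervention period.
intervention_pre, intervention_post = 0.44, 0.38   # campus with the program
comparison_pre, comparison_post = 0.45, 0.43       # campus without it

# The comparison campus estimates the general trend among students;
# subtracting it isolates the change attributable to the program.
general_trend = comparison_post - comparison_pre    # -0.02
raw_change = intervention_post - intervention_pre   # -0.06
program_effect = raw_change - general_trend         # -0.04

print(f"Estimated program effect: {program_effect:+.2f} "
      "change in binge-drinking prevalence")
```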

 
