ERIC Identifier: ED284522
Publication Date: 1986
Author: Conrad, Clifton F.; Wilson, Richard W.
Source: Association for the Study of Higher Education; ERIC Clearinghouse on Higher Education, Washington, DC.

Academic Program Reviews. ERIC Digest.

Within the last few years, academic program review has emerged as one of the most salient issues in American higher education. Nestled within a context of accountability, program reviews have become a dominant and controversial activity. Although their authority varies greatly, higher education agencies in all 50 states now conduct state-level reviews; 28 of those agencies have the authority to discontinue programs. Moreover, a majority of multicampus systems have introduced program reviews, and over three-fourths of the nation's colleges and universities employ some type of program review. The heightened interest in program review can be traced to a widespread desire to improve program quality and to the need to respond creatively to severe financial constraints and to external constituencies' expectations for accountability.

The literature contains a generous amount of controversy regarding the purposes, processes, and outcomes of program review. The intent of this digest is to illuminate that terrain: to capture the diverse institutional approaches to review, to examine the central issues, and to reflect on ways in which program review might be improved. Toward that end, this digest draws on a review of the literature and an analysis of program review practices at 30 representative institutions.

WHAT DISTINGUISHES CURRENT PROGRAM REVIEWS?

Colleges and universities have a longstanding tradition of program evaluation, a tradition that can be traced from colonial and antebellum colleges to modern American universities. Until well into this century, program reviews were viewed largely as internal matters, initiated most often to reform and revitalize the curriculum. The idea that program reviews should be conducted to demonstrate accountability to external constituencies is a phenomenon of the twentieth century. The gradual development of regional and professional accrediting associations and the creation of statewide governing and coordinating boards are at least partly the result of a belief that programs must be responsive to the needs and expectations of external as well as internal audiences.

Especially in the last several years, program reviews have been designed to achieve another major objective: aiding those making decisions about the reallocation of resources and program discontinuance. Thus, a broad range of expectations now exists for program review in higher education. Program improvement, accountability to external constituencies, and resource reallocation are the purposes cited most often. Despite this growth in expectations, there is little evidence to suggest that an evaluation system can be designed to address multiple purposes simultaneously. It is especially difficult to pursue both program improvement and resource reallocation at the same time, and an institution's interests are served best if reviews focused on program improvement are conducted separately from those concerned with reallocating resources.

WHAT DO FORMAL EVALUATION MODELS CONTRIBUTE?

Program reviews at most institutions draw heavily on one or more of several models: goal-based, responsive, decision-making, or connoisseurship. Although these models are seldom explicitly identified in descriptions of institutional review processes, they can be inferred from the procedures used.

The goal-based model has had the most influence, offering two advantages: systematic attention to how a program has performed relative to what was intended, and concern for the factors contributing to success or failure. The characteristic of the responsive model that has influenced program reviews in higher education is the attention given to program activities and effects, regardless of what the program's goals might be. According to a proponent of responsive evaluation, the central concern of an evaluation ought to be the issues and concerns of those who have an interest in the program, not how the program has performed relative to its formal goal statements.

The major contribution of the decision-making model to program review in higher education is its explicit attempt to link evaluations with decision making, thus focusing the evaluation and increasing the likelihood that results will be used. The connoisseurship model has a long tradition in higher education. It relies heavily on the perspectives and judgments of experts, whose opinions are valued because of their assumed superior knowledge and expertise and a commonly shared value system.

HOW SHOULD QUALITY BE ASSESSED?

The assessment of quality has generated more confusion and debate than any other issue for those engaged in program review. Pressure to define what quality means and what types of information should be collected has always existed, but interest has been heightened by the relatively recent emphasis on program review for resource reallocation and retrenchment.

Four different perspectives have been offered on how quality should be defined: the reputational view, the resources view, the outcomes view, and the value-added view. The reputational view assumes that quality cannot be measured directly and is best inferred through the judgments of experts in the field. The resources view emphasizes the human, financial, and physical assets available to a program. It assumes that high quality exists when resources like excellent students, productive and highly qualified faculty, and modern facilities and equipment are prevalent.

The outcomes view of quality shifts attention from resources to the quality of the product. Indicators used include, for example, faculty publications, students' accomplishments following graduation, and employers' satisfaction with program graduates. The problem with the outcomes view is that the program's contribution to graduates' success is not isolated; it is simply assumed that if the graduate is a success, the program is a success.

The value-added view directs attention to what the institution has contributed to a student's education (Astin, 1980). The focus of the value-added view is on what a student has learned while enrolled. In turn, programs are judged on how much they add to a student's knowledge and personal development. The difficulty with this view of quality is how to isolate that contribution.

Most institutions assess quality by adopting aspects of all four views. The assumption is that quality has multiple dimensions and, in turn, that multiple indicators should be used for its assessment. A large number of quantitative and qualitative indicators have been suggested for making such assessments.

DO PROGRAM REVIEWS MAKE A DIFFERENCE?

Perhaps the most significant issue relating to program review is whether the considerable activity at all levels of higher education actually makes a difference. Assessing impact requires attention to the longer-term effects of the decisions that are made, that is, whether a program becomes stronger, more efficient, or of higher quality. The major criterion for assessing impact is whether an evaluation makes a system function better.

Only a few studies have analyzed impact systematically. Studies at the University of California and the University of Iowa found that program reviews produced benefits, including a stimulus for change and improved knowledge about programs among decision makers. Not all analyses of impact are as positive, however. A small number of studies have focused on cost savings and have found that little money is saved and that, in fact, reviews frequently require an increased commitment of resources. Program reviews can also have negative effects: unwarranted anxiety, diversion of time from teaching and research, and unfulfilled promises and expectations.

The continued existence and growth of program review processes suggest that such efforts are supported and that the results can be beneficial. Given the plethora of program reviews at all levels of higher education, the need to study the effects of such reviews more systematically is urgent.

FOR MORE INFORMATION

Astin, Alexander W. "When Does a College Deserve to Be Called 'High Quality'?" In IMPROVING TEACHING AND INSTITUTIONAL QUALITY, Current Issues in Higher Education 1. Washington, DC: American Association for Higher Education, 1980. ED 194 004.

Barak, Robert J. PROGRAM REVIEW IN HIGHER EDUCATION: WITHIN AND WITHOUT. Boulder, CO: National Center for Higher Education Management Systems, 1982. ED 246 829.

Clark, Mary Jo, Rodney T. Hartnett, and Leonard L. Baird. ASSESSING DIMENSIONS OF QUALITY IN DOCTORAL EDUCATION: A TECHNICAL REPORT OF A NATIONAL STUDY OF THREE FIELDS. Princeton, NJ: Educational Testing Service, 1976. ED 173 144.

Conrad, Clifton F., and Robert T. Blackburn. "Research on Program Quality: A Review and Critique of the Literature." In HIGHER EDUCATION: HANDBOOK OF THEORY AND RESEARCH, vol. 1, edited by John C. Smart. New York: Agathon, 1985.

Cronbach, Lee J. "Remarks to the New Society." EVALUATION RESEARCH SOCIETY NEWSLETTER 1 (1977): 1-3.

Gardner, Don E. "Five Evaluation Frameworks: Implications for Decision Making in Higher Education." JOURNAL OF HIGHER EDUCATION 48 (1977): 571-593.

George, Melvin D. "Assessing Program Quality." In DESIGNING ACADEMIC PROGRAM REVIEWS, edited by Richard F. Wilson. New Directions for Higher Education No. 37. San Francisco: Jossey-Bass, 1982.

Seeley, John. "Program Review and Evaluation." In EVALUATION OF MANAGEMENT AND PLANNING SYSTEMS, edited by Nick L. Poulton. New Directions for Institutional Research No. 31. San Francisco: Jossey-Bass, 1981.

Skubal, Jacqueline M. "State-Level Review of Existing Academic Programs: Have Resources Been Saved?" RESEARCH IN HIGHER EDUCATION 11 (1979): 223-232.

Smith, S. "Program Review: How Much Can We Expect?" Unpublished report. Berkeley, CA: University of California-Berkeley, 1979.

Wilson, Richard F., ed. DESIGNING ACADEMIC PROGRAM REVIEWS. New Directions for Higher Education No. 37. San Francisco: Jossey-Bass, 1982.
