ERIC Identifier: ED383278
Publication Date: 1995-06-00
Author: Gaither, Gerald - And Others
ERIC Clearinghouse on Higher Education, Washington, DC; George Washington Univ., Washington, DC. Graduate School of Education and Human Development.
Measuring Up: The Promises and Pitfalls of Performance
Indicators in Higher Education. ERIC Digest.
"When Adam walked with Eve in the Garden of Eden, he was overheard to say
(presumably by the angel just arrived with the flaming sword), 'You must
understand, my dear, that we are living through a period of transition' (Gray
1951, p. 213)."
"Measuring Up: The Promises and Pitfalls of Performance Indicators in Higher
Education" is also about transitions and issues unfolding during the
implementation of performance indicators. By the beginning of the 1980s, the era
concerned primarily with growth in enrollments and access was largely over,
while another awaited further definition and recognition in such emerging issues
as public accountability, quality, productivity, and undergraduate education.
The 1980s were also distinguished by the growth of the movement toward
assessment and accountability. While higher education in the United States was
affected by several phenomena during this decade, surely none created more
fundamental change than the movement toward assessment. A 1990 study by the
Education Commission of the States, for example, revealed that 40 states
actively promoted assessment. Along with this movement was a rising interest in
the quality of undergraduate education, and a litany of studies published in the
1980s lamented the poor condition of undergraduate education, pointing to
inadequacies that needed to be corrected. By 1986, all 50 states and the
District of Columbia had developed initiatives to improve undergraduate
education.
Accompanying this movement was a subtle shift from growth in funding,
principally through formula funding, toward funding "outcomes," "results," and
"performance." This focus on performance, using funding incentives as
motivators, helped encourage policy makers and the academic community to explore
the use of a system of indicators to raise warning signs about the efficiency
and effectiveness of higher education.
These domestic efforts paralleled developments in higher education in a
number of countries, particularly in Europe and Australia. Since the late 1970s,
the concepts of performance indicators and quality assessment have clearly
become international issues (Kells 1993). Indeed, they are becoming an integral
part of an emerging international approach to managing higher education, with
indicators serving as signals or guides for making national or international
comparisons in educational quality, effectiveness, and efficiency. Further, the
main advantage of such performance indicator systems is their usefulness as
points of reference for comparing quality or performance against peers over
time, or achievement against a desired objective.
The 1990s emerged as part of another era awaiting further definition. The
development of performance indicators in the 1990s differs from that of the
1980s: policy makers are generally less inclined toward the voluntary
institutional improvement of the 1980s and more focused on a system of mandated
public accountability. And by 1994 some 18 states had developed indicator
systems, most of them in the first three years of the decade. A heightened tempo
in the use of performance indicators, accompanied by a tendency to copy other
states' systems, resulted in a common core of state indicators to address common
problems. Concomitant with this movement was greater centralization of
authority, with the intent of bringing about more public accountability and
better management--which will likely underlie much future funding of higher
education in the United States.
The air is full of questions. Will the federal government assume greater
centralized control of higher education through such areas as accreditation and
financial aid and by using a set of national goals and performance standards?
Will international education continue to be reformed through the mechanisms of
performance indicators and incentive funding? How can such mechanisms best be
used to motivate and bring about desired reforms on campus and at state,
regional, or national levels? While scholars and legislators debate these
questions, the public's investment in and concern about quality and performance
in higher education continue unabated, and institutional resistance to
fundamental reform remains ingrained. It remains unclear whether performance
indicators and incentive funding will result in any widespread, lasting
innovations or whether the concept will pass quickly through higher education in
this
country, leaving only a modest residue.
Perhaps, however, a hint about any lasting contribution and the future role
for performance indicators can be found in Europe, where early pioneering
efforts on quality assessment are maturing. At the national level, the role of
performance indicators is declining, and growing doubts about the ability to
"measure the
unmeasurable," particularly about the validity of such measures to evaluate and
be used to reward quality, have led to retrenchment in such countries as the
Netherlands and the United Kingdom. At the same time, national and institutional
experiments with such assessment techniques as peer reviews and quality audits
are gaining prominence, relegating performance indicators to the role of
supporting tools in such efforts.
This emerging approach offers the collective faculty a more palatable, more
dynamic vision of academic quality, ostensibly more worthy of their commitment
and pursuit than any externally imposed system of performance indicators.
Faculty resolutely insist they know academic quality when they see it and should
retain the primary responsibility for assessing and rewarding it. But such
autonomy is always purchased by providing measures of accountability for results
and resources to the public and to policy makers. It remains to be seen whether
faculty will assume the collective mantle of responsibility and professional
obligation to develop processes that foster a sense of common purpose and
shared accountability with the various publics. If this pattern gains
prominence, performance indicators will likely be relegated to a minor role as a
supporting tool; if the academy does not respond, the public appetite for
results will expand and crystallize around the use of external performance
indicators to measure desired results. And the jury is still out on the results.

REFERENCES
Borden, Victor M.H., and Trudy Banta,
eds. 1994. Using Performance Indicators to Guide Strategic Decision Making. New
Directions for Institutional Research No. 82. San Francisco: Jossey-Bass.
Cave, M., S. Hanney, and M. Kogan. 1991. The Use of Performance Indicators in
Higher Education: A Critical Analysis of Developing Practice. 2d ed. London:
Jessica Kingsley Publishers.
Gray, James. 1951. The University of Minnesota at Minneapolis, 1851-1951.
Minneapolis: Univ. of Minnesota Press.
Kells, H.R. 1992. Performance Indicators for Higher Education: A Critical
Review with Policy Recommendations. PHREE Background Paper Series No.
PHREE/92/56. Washington, D.C.: World Bank.
---, ed. 1993. The Development of Performance Indicators for Higher
Education: A Compendium for Eleven Countries. 2d ed. Paris: Organization for
Economic Cooperation and Development. ED 331 355. 134 pp.
Ruppert, Sandra S., ed. 1994. Charting Higher Education Accountability: A
Sourcebook on State-Level Performance Indicators. Denver: Education Commission
of the States. ED 375 789. 177 pp.
This ERIC digest is based on a new full-length report in the ASHE-ERIC Higher
Education Report series, prepared by the ERIC Clearinghouse on Higher Education
in cooperation with the Association for the Study of Higher Education, and
published by the School of Education at the George Washington University.