ERIC Identifier: ED468727
Publication Date: 2002-00-00
Author: Gates, Susan M. - Augustine, Catherine H. - Benjamin, Roger -
Bikson, Tora K. - Kaganoff, Tessa - Levy, Dina G. - Moini, Joy S. - Zimmer, Ron
Source: ERIC Clearinghouse on Higher Education Washington DC.
Ensuring Quality and Productivity in Higher Education: An
Analysis of Assessment Practices. ERIC Digest.
Those responsible for education and professional development within systems
such as corporations, state governments, and government agencies are concerned
about the quality of the learning opportunities those systems provide. As a
result, they increasingly assign
responsibility for ensuring the quality and productivity of education within the
system to one particular office or agency. Often, such agencies receive little
guidance about how to approach their task. A RAND research team conducted a
broad review of the general literature on the assessment of quality and
productivity in education and professional development. The team also reviewed
the documentation of organizations engaged in such assessment, interviewed
experts, attended conferences, and conducted site visits to exemplary
organizations. The ASHE-ERIC Report Vol. 29, No.1, Ensuring Quality and
Productivity in Higher Education: An Analysis of Assessment Practices,
synthesizes the RAND study's findings and provides suggestions for approaches that
might be useful for agencies given the task of ensuring the quality and
productivity of education and professional development activities in a specific
system. This ERIC Digest is based upon ASHE-ERIC Report Vol. 29, No.1, and
briefly summarizes its highlights.
WHY IS SYSTEM-LEVEL ASSESSMENT NEEDED?
Although the main
task of assessment focuses on the quality and productivity of specific providers
of education and professional development, the study found that a higher-level
assessment of the system as a whole is also crucial. Such an assessment has two
main purposes: (1) to determine whether the stakeholder and system-level needs
are being addressed, and (2) to identify opportunities to improve efficiency in
existing programs. In the first case, system-level assessment compares the needs
of the population served with the programs offered in the system. In a corporate
setting, for example, such an assessment might find that certain corporate-level
goals are not being addressed by education and training programs run by
individual business units. In higher education, a system-level assessment might
find that certain geographical regions are not being well served by existing
institutions in a state.
To achieve the second aim, the assessment examines whether the system's
resources are being allocated efficiently. A number of organizations are
improving their productivity through this process. For example, the Texas Higher
Education Coordinating Board conducts regular program reviews to assess whether
a proposed program is based on established needs, whether it duplicates other
programs in the same area, and whether it falls within an institution's mission.
A clear trend in all the systems considered in the study is the development
of a learning organization of some sort that is responsible for more than just
the assessment of existing providers. These organizations promote communications
among stakeholders and develop a clear link between education and professional
development on the one hand and the basic mission of the system on the other.
Corporate learning organizations describe this relationship as "becoming a
strategic partner" in the corporation. Such an organization facilitates dialogue
among key stakeholders, assembles information on workforce needs and existing
programs, and serves as an interface between customers and providers.
WHAT APPROACHES ARE USED TO ASSESS PROVIDERS AND CERTIFY STUDENT COMPETENCIES?
In reviewing a wide variety of assessment approaches, the RAND
study identified key similarities and differences among the approaches and
classified them into four basic models. The first model involves the use of an
intermediary organization that is responsible for reviewing the process used by
individual providers to assess their own quality and productivity. In the second
model, an intermediary organization conducts the actual assessment of providers.
In the third model, providers conduct their own assessment with no involvement
of an intermediary. The fourth model differs from the other three in that it
focuses on the learner rather than the provider and involves the certification
of student competencies. Each approach has strengths and weaknesses that make it
more appropriate for some circumstances than for others. For that reason, no one
approach can be considered a best practice. The best approach depends on the
context of the assessment.
HOW DOES ONE CHOOSE A MODEL?
Many organizations whose job
is to ensure the quality and productivity of education and professional
development activities can be described as intermediary organizations. An
intermediary is neither a provider of education and professional development nor
a direct consumer of the services of such providers; it is an entity that
promotes communication between the two. Models One, Two, and Four allow a role
for an intermediary and are therefore the most relevant to such entities.
Intermediaries might also wish to learn about the best practices under Model
Three, however, to serve as a clearinghouse of information useful to provider
institutions and to remain abreast of new assessment techniques initiated by
providers.
The study identified six factors as the most important to consider in
choosing an approach to assessing the quality and productivity of providers: (1)
purpose of the assessment (accountability versus improvement), (2) level of
authority, (3) level of resources, (4) centralization of operations, (5) system
heterogeneity, and (6) system complexity.
The key advantage of Model One is that it delegates to provider organizations
the task of defining goals, measuring outcomes, and evaluating those outcomes against the goals. As a
result, this approach can accommodate a system with many diverse providers.
Because they have such control over their own assessment, providers are less
likely to resist the process and are more likely to use it to promote
improvements. The primary disadvantage of Model One relative to Model Two is
that it emphasizes improvement over accountability. Model Two is better suited
than Model One for accountability purposes, provided that the intermediary has
the authority to ensure compliance. The main drawback to Model Two is that any
approach imposed from an external organization runs the risk of focusing on
inappropriate measures and failing to reflect institutional goals. Although
Model Three is better suited for improvement, it does not include a role for an
intermediary, though it can evolve into a process with a role for
intermediaries. Model Four represents a completely different approach to
assessment, one that focuses attention on the learner rather than the provider.
Although Model Four focuses on student competencies, it indirectly holds
institutions accountable by withholding competency status from students who have
not received the requisite education from specific providers.
WHAT IS THE THREE-STEP PROCESS OF ASSESSMENT?
Regardless of the model selected, the study found that three key steps must be
included in any provider or student assessment: (1) identifying the goals of
the education activities under consideration; (2) measuring the outcomes
related to those goals; and (3) evaluating whether the outcomes meet those
goals.
The RAND team's literature review revealed several broad lessons concerning
these steps. First, each step should be linked to the others, and the process as
a whole should be driven by the goals. It is especially important to avoid
selecting measures before or without defining goals. Second, developing measures
that relate to goals is a crucial but challenging step, because an adequate
measure of achievement for a particular goal is often hard to find. Even so,
it is usually better to use an imperfect measure of the intended goal than a
perfect measure of something else. Third, the trend in assessment
is to focus less on input measures and more on process and outcome measures.
Measuring outcomes alone may not result in improvement, but considering the
intervening processes that use resources to produce outcomes provides
information more useful to program improvement. Finally, except for certificate
or licensing programs, providers of professional development courses are not
likely to be able to rely on preexisting evaluation tools with known validity
and reliability characteristics. Rather, they will most likely have to develop
measures of learning outcomes on their own. The literature provides some
guidelines for developing such measures and for avoiding major sources of
invalidity and unreliability. Intermediaries can play an important role by
applying these guidelines to their own assessment processes and acting as
clearinghouses of such information for providers engaged in assessment.
Selected references appear below. Please see
ASHE-ERIC Report, vol. 29, no.1 for a complete list of references.
Cole, J.J.K., Nettles, M.T., and Sharp, S. (1997). Assessment of teaching and
learning for improvement and accountability: State governing, coordinating board
and regional accreditation association policies and practices. Ann Arbor:
National Center for Postsecondary Improvement, University of Michigan.
Ewell, P.T. (1999). Assessment of higher education and quality: Promise and
politics. In S.J. Messick (Ed.), Assessment in higher education: Issues of
access, quality, student development, and public policy. Mahwah, NJ: Erlbaum.
Gates, S.M., Augustine, C.H., Benjamin, R., Bikson, T.K., Kaganoff, T., Levy,
D.G., Moini, J.S., and Zimmer, R.W. (2002). Ensuring quality and productivity
in higher education: An analysis of assessment practices. ASHE-ERIC Higher
Education Report (vol. 29, no. 1). San Francisco: Jossey-Bass.
Palomba, C.A., and Banta, T.W. (1999). Assessment essentials: Planning,
implementing, and improving assessment in higher education. San Francisco:
Jossey-Bass.
Schilling, K.M., and Schilling, K.K. (1998). Proclaiming and sustaining
excellence: Assessment as a faculty role. ASHE-ERIC Higher Education Report
(vol. 26, no. 3). Washington, DC: The George Washington University.