ERIC Identifier: ED376998
Publication Date: 1994-11-00
Author: Hendricks, Bruce
Source: ERIC Clearinghouse on Rural Education and Small Schools, Charleston, WV.
Improving Evaluation in Experiential Education. ERIC Digest.
Although experiential education is really the oldest approach to learning,
its practitioners have not had an easy time justifying its relevance in the
educational world of the twentieth century. Experiential educators promote
learning through participation, reflection, and application to situations of
consequence (Hunt, 1990, pp. 119-128). Although its practitioners are convinced
of the effectiveness of this approach, skepticism persists outside the field.
In current usage, "assessment" is perhaps the general term, referring most
often to an examination of the processes and contexts that influence learning
(Eisner, 1993). "Evaluation" (a word that indicates estimation of value or
worth) is increasingly used to estimate the worth of the results of a program or
Recent changes in the methodologies of evaluation, however, have provided
useful tools for experiential educators. Such tools can be used to refine
programming, enhance student learning, and perhaps improve the credibility of
the field--important when organizations compete for limited funding (Bennet,
1988; Flor, 1991).
NEW DEVELOPMENTS IN ASSESSMENT AND EVALUATION
Although new developments in evaluation are critical for the future of
experiential education, almost 40 years of assessment and evaluation have shown
that many experiential and outdoor education programs are effective in
producing positive outcomes for individuals and society. Demonstrated effects
include enhanced self-concept, reduced rates of recidivism, and effectiveness
in treating chemical dependency (Ewert, 1989).
During this time, however, much has changed in educational assessment and
evaluation, including both the questions asked and the methods used to answer
them. Researchers have, in recent years, finally begun to appreciate the
complexity of the educational process. Learning is no longer considered largely
a matter of organizing appropriate sets of stimuli and responses, nor is the
mind still viewed as an impenetrable "black box." Instead, much greater
appreciation prevails for the role that learners take in actively constructing
their own learning.
Educators (from practitioners to theorists) are giving up the idea that they
can dissect, predict, and control learning with technological precision. As a
result, qualitative approaches to assessment and evaluation are becoming more
common, often in addition to--and sometimes in place of--quantitative approaches.
Robottom (1989, p. 430) argues that "researchers tend to study only what they
can measure: educational research is the art of the measurable." The use of
multiple methods helps address this problem by providing an assortment of tools,
not all of which need to be measurement focused.
While evaluation methods in the past did an adequate job of providing
evidence of the effectiveness of experiential learning techniques (e.g., Cason
& Gillis, 1994), the current challenge is to develop methods that will help
answer questions about how experiential education works, including the transfer
of experiential learning to other contexts. Future evaluation efforts should
build on what is already known rather than limit themselves to replicating
well-established methods and findings (Ewert, 1989).
Eisner (1993, pp. 226-232) presents one influential new framework for
evaluation, consisting of "eight criteria in search of practice." These criteria
are, in fact, consistent with the premises of experiential education programs.
According to Eisner, evaluation tasks should:
reflect real-world needs by increasing students' problem-solving abilities and
their ability to construe meaning;
reveal how students solve problems, not just the final answer, since reasoning
determines students' ability to transfer learning;
reflect the values of the intellectual community from which the tasks are
derived, thus providing a context for learning and enhancing retention and
meaning;
not be limited to solo performances, since much of life requires an ability to
work in cooperation with others;
allow more than one way to do things or more than one answer to a question,
since real-life situations rarely have only one correct alternative;
promote transference by presenting tasks that require students to intelligently
adapt modifiable learning tools;
require students to display an understanding of the whole, not just the parts;
allow students to choose a form of response with which they are comfortable.
ONE SIZE DOES NOT FIT ALL
As with Eisner's criteria,
evaluation methods must stress both appropriateness and versatility so that they
are consistent with the needs and context of the evaluation (Eisner, 1993;
Robottom, 1989). Moreover, the reliability, clarity, and usefulness of findings
are usually improved if the evaluator engages in "methodological
pluralism"--the use of more than one evaluation method (Eisner, 1993; Ewert,
1987). For example, a naturalistic (qualitative) inquiry method may be used to
find out what participants see as significant in their experiences, whereas a
rationalistic (quantitative) approach might be more appropriate for assessing
the relationship between demographics and enrollment within the same
evaluation.
INCREASED COLLABORATION BETWEEN EVALUATORS AND PRACTITIONERS
Good evaluation also depends on improving relationships
between practitioners and evaluators. The two groups have not always understood
each other's needs.
Planning. For example, researchers working in an evaluative role have often
produced findings related more to theory testing than to decision making. Such
results hold little value for most practitioners. Practitioners, in turn, have
frustrated evaluators by focusing on establishing the value of individual
programs rather than recognizing the need for evaluations with a broader focus,
which might benefit the larger field of experiential education.
If practitioners and evaluators work collaboratively during the planning
stages of an evaluation, the quality of evaluation design and the applicability
of findings will almost certainly improve. It is at this foundational level
that teams can develop data-gathering methods and terms of reference
(objectives) that recognize the particular needs and interests of both groups
(Hendricks & Cooney, 1992).
Communicating findings. In addition to planning, it is important for
evaluators and practitioners to collaborate in communicating and applying
evaluation findings. According to Stahl (1991, p. 293), "One frequent criticism
of educational research [and evaluation] is its remoteness from educational
practice." Several reasons have been suggested for the poor exchange of
information between the two groups:
Practitioners rarely read educational research and evaluation journals because
jargon, technical language, and use of statistics render this literature
inaccessible to them (Ewert, 1987; Stahl, 1991).
Some evaluators are interested only in results that can be measured quickly and
easily. Such an approach ensures more rapid publication, an important concern of
those pursuing academic careers. But such an approach sidesteps many questions
of educational relevance. Studies conducted on this basis are therefore apt to
yield little in the way of relevant, applicable knowledge for
practitioners (Stahl, 1991).
Many evaluations are designed and the results reported in ways that either
confuse or intimidate practitioners or make it difficult for them to modify
programs (Stahl, 1991).
Some evaluators have addressed this issue by designing delivery formats
specifically to increase understanding and application by practitioners. Stahl
(1991) found that packaging his evaluation findings in the form of an
illustrated and annotated report, combined with a training manual,
allowed practitioners and researchers alike to benefit from its content. Stahl
identified several major impediments that keep evaluation findings from being
read. For each of these difficulties, he developed a response intended to
increase readers' interest and ease of use:
An introduction to the findings expressly tells readers that they do not need
to read any of the figures and statistical tables; instead, they can read the
main findings, in words, at the bottom of each table.
Simple, straightforward language avoids jargon and technical terms; in places
where special terms must be used, they are explained.
Findings are illustrated with concrete examples, such as passages taken from
interviews.
Cartoons, jokes, proverbs, and quotations provide illustrations of findings
and how they might be applied.
CONCLUSION
Evaluators and practitioners are striving to improve the
future of experiential education through innovative and practical approaches to
educational evaluation. There is much to do and much room for improvement. New
evaluation methods, increased collaboration, and creative methods of
disseminating findings are some of the important processes underway.
In "Experience and Education," published in 1938, John Dewey commented that, in
most cases, schooling stood in the way of learning. In order to make
intellectual progress, he noted, we mostly have to unlearn what we learned in
school.
Our goal as experiential educators is not just to help people learn
differently but to help them learn better. Continuing improvements in evaluation
can help us toward that goal.
REFERENCES
Bennett, D. B. (1988). Four steps to evaluating
environmental education learning experiences. Journal of Environmental
Education, 20(2), 14-21.
Cason, D., & Gillis, H. (1994). A meta-analysis of outdoor adventure
programming with adolescents. Journal of Experiential Education, 17(1), 40-47.
Dewey, J. (1938). Experience and Education. New York: Collier Books.
Eisner, E. W. (1993). Reshaping assessment in education: Some criteria in
search of practice. Journal of Curriculum Studies, 25(3), 219-233.
Ewert, A. (1987). Research in experiential education: An overview. Journal of
Experiential Education, 10(2), 4-7.
Ewert, A. W. (1989). Outdoor Adventure Pursuits: Foundations, Models and
Theories. Columbus, OH: Publishing Horizons.
Flor, R. (1991). An introduction to research and evaluation in practice.
Journal of Experiential Education, 14(1), 36-39.
Hendricks, B., & Cooney, D. (1992). Charting the future: Program review
and evaluation as tools for growth. In G. Hanna (Ed.), Celebrating our
tradition, charting our future: Proceedings of the 1992 Association for
Experiential Education Conference (pp. 249-259). Boulder, CO: Association for
Experiential Education. (ED 353 122)
Hunt, J. S. (1990). Philosophy of adventure education. In J. C. Miles & S.
Priest (Eds.), Adventure Education (pp. 119-128). State College, PA: Venture
Publishing.
Robottom, I. (1989). Social critique or social control: Some problems for
evaluation in environmental education. Journal of Research in Science Teaching,
Stahl, A. (1991). Bridging the gap between research and teacher education: An
Israeli innovation. Journal of Education for Teaching, 17(3), 293-299.