ERIC Identifier: ED435711
Publication Date: 1999-09-00
Author: Scriven, Michael
Source: ERIC Clearinghouse on Assessment and Evaluation, Washington, DC.
The Nature of Evaluation. Part II: Training. ERIC/AE Digest.
An earlier article addressed the role of evaluation, its basic logic, and how the field is structured. This article describes some of the basic logic-of-evaluation skills and some of the basic methodological skills that need to be mastered in order to practice the art and science of evaluation.
Much work in the Big Six evaluation fields (program, personnel, performance, policy, proposal, and product evaluation) falls within the area of applied social psychology, and much of that, e.g., the evaluation of large social interventions, would be impossible without training in the methods and mathematics that foundational requirements in graduate psychology now cover. But there is at least one other, completely different kind of reason for thinking the connection between psychology and evaluation is an intimate one, namely the highly specific phenomena of reactions to evaluation by those being evaluated and those for whom the evaluation is done. Dealing with these is an important part of developing applied skills in evaluation. However, the standard training provided in standard psychology programs will not put the graduate in a position where s/he can deal competently with common phenomena in evaluation. Nor should this be regarded as a matter for clinical training, although it is related, and although there are times when the phenomenology comes very close to the clinically relevant level.
The following list indicates some of the topics from the logic of evaluation that must also be dealt with in training:
1. Understanding the differences and connections between evaluation and other
kinds of research and investigation, especially: description,
classification/diagnosis, generalization, prediction, explanation,
justification, and recommendation. Hence, understanding the different types of
research design and data inputs required for each of these.
2. Understanding the difference between (i) grading, ranking, scoring, and apportioning (the basic evaluative procedures); and (ii) merit (or quality), worth (or value), and significance (or importance) (the basic evaluative predicates). Hence, understanding the differences between investigative designs aimed at establishing conclusions of these (theoretically 12, but actually about 6) different types. Specific case: understanding the function of 'significance levels' in statistics by contrast with significance determination in scientific or social research. (A brief illustration follows this list.)
3. Understanding the arguments that purported to establish the impossibility
of scientific demonstrations of evaluative conclusions, and the reasons they
failed. (The 'Science is only descriptive' argument; the 'Values are always
subjective' argument; the 'Naturalistic fallacy' argument.) Understanding why
the usual arguments against value-free science also fail (the 'Scientists show their values in choosing their field/research problems' argument; the 'Science is used for good or bad purposes' argument.) Understanding why these arguments
are not just philosophical exercises but reflections of common client/audience
confusions that need to be dealt with.
4. Understanding the difference between (i) holistic (black box) evaluation and (ii) analytic evaluation, and among the three kinds of analytic evaluation (dimensional, component, and theory-driven); and how to choose among them in approaching a particular evaluation problem.
5. Understanding the formative/summative distinction, and some of the
arguments for thinking that a third category should be included to make up a
complete classification of all evaluations.
6. Understanding the nature of needs assessment and its difference from market
research; and how to design a valid needs assessment.
7. Understanding the logic of checklists, especially the difference between checklists of (i) desiderata and (ii) necessitata, and the logical requirements for validity of each kind. (A sketch of the two logics follows this list.)
8. Understanding the differences and connections between objectivity and (i) bias; (ii) preference/valuing/valencing; (iii) commitment; and (iv) expertise. The fallacy of irrelevant expertise in selecting evaluators. The views of realists and constructivists about objectivity.
9. Understanding the range of evaluation approaches on the scale from fully
distanced to highly interactive, and the 'off-scale' entries of description and
evaluation training; all with their attendant advantages and disadvantages.
10. Understanding the difference between the kind of evidence required to
establish causation and that required to demonstrate culpability.
11. Understanding how and why evaluation developed from (i) a practice, to (ii) a highly skilled/professional practice, to (iii) a field-specific discipline, and finally (iv) to a transdiscipline.
12. Understanding how evaluation theory developed from the primitive
identification of evaluation with monitoring to its present complex form,
including goal-free evaluation; and understanding some of the leading positions
taken by influential theorists along the way and today.
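As a brief illustration of the 'specific case' in item 2 above: statistical significance answers only whether an observed effect is unlikely under chance, not whether it is important. The following Python sketch (a hypothetical example; the sample sizes and effect size are invented for illustration) shows a treatment effect that is statistically significant yet evaluatively negligible:

```python
import numpy as np
from scipy import stats

# Hypothetical data: a tiny treatment effect (about 0.02 SD) measured
# on a very large sample. All numbers are invented for illustration.
rng = np.random.default_rng(seed=0)
control = rng.normal(loc=0.00, scale=1.0, size=500_000)
treated = rng.normal(loc=0.02, scale=1.0, size=500_000)

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / control.std()

# With samples this large, p falls far below .05, so the effect is
# 'statistically significant'; yet a difference of roughly 0.02 SD is
# negligible by almost any evaluative standard of importance.
print(f"p-value: {p_value:.4g}")
print(f"effect size (Cohen's d): {cohens_d:.3f}")
```

The evaluative question of significance determination, i.e., whether an effect of this size matters to consumers or stakeholders, cannot be read off the p-value at all.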
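Similarly, for item 7, here is a minimal Python sketch of the two checklist logics (the criteria, bars, and weights are invented purely for illustration): necessitata support a non-compensatory pass/fail inference, while desiderata support a compensatory weighted score.

```python
def passes_necessitata(scores: dict[str, float], bars: dict[str, float]) -> bool:
    """Necessitata are non-compensatory: every criterion must clear its
    bar, and no surplus elsewhere can offset a single failure."""
    return all(scores[c] >= bar for c, bar in bars.items())

def desiderata_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Desiderata are compensatory: weighted performances are summed,
    so strength on one criterion can offset weakness on another."""
    return sum(w * scores[c] for c, w in weights.items())

# Hypothetical product evaluation: two necessitata, three desiderata.
candidate = {"safety": 1.0, "legality": 1.0, "usability": 6.0,
             "durability": 9.0, "aesthetics": 4.0}
bars = {"safety": 1.0, "legality": 1.0}           # must-pass minimums
weights = {"usability": 0.5, "durability": 0.3, "aesthetics": 0.2}

if passes_necessitata(candidate, bars):
    print("overall merit:", desiderata_score(candidate, weights))
else:
    print("rejected: failed a necessary condition")
```

One validity requirement this makes visible: folding the two kinds into a single weighted sum would let surplus merit 'buy off' a failed necessary condition, which is exactly the inference a necessitata checklist must block.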
The following is a list of some methodological skills of great importance in evaluation that are rarely, if ever, covered in the core curriculum of graduate psychology programs:
1. The Key Evaluation Checklist approach, including details of how to determine the five mainline checkpoints (Outcomes, Process, Costs, Comparisons, and Generalizability).
2. Meta-evaluation procedures; the four approaches (recheck, redo, do
differently, special checklists).
3. Cost analysis, especially of non-money costs.
4. Skills from qualitative research, notably the determination of causality in
non-experimental research, e.g., in medicine (the lung cancer case and the
paresis case), and in history (the causes of unpreparedness at Pearl Harbor).
5. Some intradisciplinary skills, especially theory evaluation.
6. How to identify relevant values for a particular evaluation and deal with highly controversial values and issues, e.g., in evaluating family planning programs or in dismissal procedures.
7. How to report to non-peer clients, stakeholders, and audiences, especially
using non-text media.
8. The psychology of evaluation, especially managing evaluation anxiety.
9. Some field-specific skills, e.g., in technology assessment, personnel evaluation, business evaluation, non-profit management, developmental evaluation, proposal evaluation, and evaluative questionnaire design.
References
Chelimsky, E., & Shadish, W.R. (Eds.). (1997). Evaluation for the 21st Century: A Handbook. Sage Publications.
Joint Committee on Standards for Educational Evaluation (1998). Program Evaluation Standards: How to Assess Evaluations of Educational Programs. Corwin Press.
Scriven, M. (1991). Evaluation Thesaurus (4th ed.). Sage Publications.
Shadish, W.R. (Chair) (1998). Guiding Principles for Evaluators: A Report from the American Evaluation Association Task Force on Guiding Principles for Evaluators. [Available online.]
Shadish, W.R. (1998). Some Evaluation Questions. ERIC/AE Digest TM-98-05.