Assessment Skills of Counselors, Principals, and
Teachers. ERIC Digest.
by James C. Impara
There are several methods one might use to determine the level of skills
and knowledge of educational practitioners in the area of student assessment.
One method is to survey various groups of education professionals and ask
them to self-report on the extent of their knowledge (or their confidence)
in skills associated with student assessment. This is the approach typically
taken by researchers who have investigated the topic among counselors (Elmore,
Ekstrom, & Diamond, 1993), principals, and teachers (Fennessey, 1982;
Infantino, 1976). A second way to undertake research in this area is to
develop a test of assessment skills and knowledge and administer it to
groups of counselors, principals, and teachers. This approach was used
by Impara, Divine, Bruce, Liverman & Gay (1991) and by Impara and Plake
(in press). A third method, particularly suitable for teachers, is to examine
the tests they develop and infer their knowledge of principles of test
construction (Gullickson & Ellwein, 1985); this method provides only
limited information about their knowledge of assessment skills.
A precursor to measuring the assessment skills of educational professionals
is identifying the skills to be measured. This might be done by undertaking
a job analysis, e.g., asking counselors, principals, and teachers what
assessment skills and knowledge they need to perform their job. Another
way is to seek appropriate professional standards that might define the
scope and level of assessment skills and knowledge needed.
STANDARDS FOR ASSESSMENT
The major, and most general, standards are the Standards for Educational
and Psychological Testing (American Educational Research Association (AERA),
American Psychological Association (APA), & National Council on Measurement
in Education (NCME), 1985). More directly relevant to assessment skills
are the standards that have been (or are being) developed by professional
organizations responsible for certifying or otherwise imposing some degree
of control or direction over the profession. Among the standards developed
for counselors that are relevant to assessment are: Responsibilities of
Users of Standardized Tests (American Association for Counseling and Development
(AACD)/Association for Measurement and Evaluation in Counseling and Development
(AMECD), 1989); Ethical Standards (AACD, 1988) (currently under revision);
and the CACREP Accreditation Standards (Council for Accreditation of Counseling
and Related Educational Programs, 1994).
In a joint endeavor the American Federation of Teachers (AFT), NCME,
and the National Education Association (NEA) produced the Standards for
Teacher Competence in Educational Assessment of Students (1990). In a follow-up
to that effort, the American Association of School Administrators (AASA),
National Association of Elementary School Principals (NAESP), National
Association of Secondary School Principals (NASSP), & NCME have drafted
the Competency Standards in Student Assessment for Educational Administrators.
(These standards should be available from the participating organizations
by mid-1995.)
THE RESEARCH FINDINGS ON SKILLS AND KNOWLEDGE
OF EDUCATIONAL PROFESSIONALS
Elmore et al. (1993) surveyed counselors, in part to collect information
related to the measurement dimensions of the Ethical Standards (AACD, 1988).
The questionnaire asked counselors about their level of confidence associated
with undertaking various assessment activities. The results indicated that
many counselors feel highly confident about using test results (69%), selecting
tests (67%), administering tests (90%), and interpreting test scores (72%).
Counselors also reported high levels of confidence in using test norms
(72%), using statistics like the mean, standard deviation, and correlation
(67%), using test reliability and validity information (59%), and using
the standard error of measurement (58%) (Elmore et al., 1993, p.118).
Impara et al. (1991) investigated the extent to which elementary and secondary
teachers' interpretation of a standardized test score report from a state
testing program was aided by the interpretative information provided by
the scoring service. They found that teachers who had the interpretive
information made fewer errors responding to test questions based on the
score report than did teachers who did not have the benefit of interpretive
information (14 of 17 correct vs. 12 of 17 correct). The most difficult
items for all the teachers related to interpreting percentile bands. Some
teachers, especially those at the secondary level, commented that they
did not have to know how to interpret test scores because they could rely
on the school counselors to interpret and explain test scores to students.
In a later study, Impara and Plake (in press) obtained responses from
over 900 Virginia educators (balanced about equally among counselors, principals,
and teachers at both elementary and secondary levels) on a test developed
using as test specifications the Standards for Teacher Competence in Educational
Assessment of Students (AFT, NCME, & NEA, 1990). Counselors' strengths
were associated with items relating to test selection, validity, communication
of assessment results, and ethical practices. Unlike both principals and
teachers, counselors showed particular strength in their basic understanding
of the concept of reliability and measurement error, and their ability
to interpret scores from standardized tests. In contrast to counselors,
both principals and teachers more often confused reliability and validity.
Principals showed strength in understanding the bases for selecting
an assessment strategy and the methods for determining validity. Most principals
also answered correctly items addressing communication of test results,
but (like teachers and counselors) were less proficient in the interpretation
of standardized test results. Finally, principals' scores were very high
on the items measuring the recognition of ethical practices.
Although teachers' strengths were similar to those identified for principals
and counselors, many teachers (about 37%) did not understand the correct
interpretation of grade equivalent scores. All respondents had problems
understanding how to combine scores from individual assessments, e.g.,
several tests, into a single summary grade. As in Impara et al. (1991),
many teachers, especially those in secondary schools, indicated they rely
on counselors to provide interpretations of standardized tests.
In terms of overall performance in this study, the counselors at both
elementary and secondary levels and the elementary principals received
higher scores than did either the teachers or the secondary principals.
It is clear that teachers rely on counselors and
that this group of professionals is expected to serve in a consulting role
to other professionals within the school in many matters of testing and
assessment, especially when dealing with formal testing programs. In elementary
schools, where counselors are least likely to be available, principals may
need to serve in the same consultative capacity as counselors do in high
schools, so they, too, must be adequately prepared to assist teachers in
matters related to formal testing programs. As a group, however, none of
the professionals surveyed are well prepared in the development and use
of assessments at the classroom level.
SUMMARY AND CONCLUSIONS
The findings from Elmore et al. (1993), Impara et al. (1991), and Impara
and Plake (in press) parallel each other and those from the self-report
studies reported by other researchers in that many educational professionals
have some knowledge of assessment practices, ranging from principles of
test development and use to the practices associated with the use and interpretation
of standardized and teacher-made tests. The skill levels associated with
many important student assessment principles are, however, not consistent
with the Standards adopted by professional organizations.
The various standards that have been developed and endorsed by the
professional associations in education are important documents and they
provide excellent guides for the professional development of educators
who work with assessment information on a regular basis. Clearly, the assessment
skills and knowledge of counselors, principals, and teachers are lacking
in some important areas, while in others these educational professionals
are highly skilled and knowledgeable.
REFERENCES
American Association for Counseling and Development. (1988). Ethical
Standards. Alexandria, VA: Author.
American Association for Counseling and Development/Association for
Measurement and Evaluation in Counseling and Development. (1989, May).
Responsibilities of users of standardized tests. Guidepost, 12, 16, 18.
American Federation of Teachers, National Council on Measurement in
Education and National Education Association. (1990). Standards for Teacher
Competence in Educational Assessment of Students. Washington, DC: American
Federation of Teachers.
American Psychological Association, American Educational Research Association,
and National Council on Measurement in Education. (1985). Standards for
Educational and Psychological Testing. Washington, DC: American Psychological
Association.
Council for Accreditation of Counseling and Related Educational Programs.
(1994, January). The CACREP Accreditation Standards and Procedures Manual.
Alexandria, VA: Author.
Elmore, P. B., Ekstrom, R., & Diamond, E. E. (1993). Counselors'
test use practices: Indicators of the adequacy of measurement training.
Measurement and Evaluation in Counseling and Development, 26(2), 116-124.
Fennessey, D. (1982, July). Primary teachers' assessment practices:
Some implications for teacher training. Paper presented at the annual meeting
of the South Pacific Association for Teacher Education, Frankston, Victoria,
Australia. (ED 229 346).
Gullickson, A. R. & Ellwein, M. C. (1985). Post hoc analysis of
teacher-made tests: The goodness-of-fit between prescription and practice.
Educational Measurement: Issues and Practice, 4(1), 15-18.
Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay,
A. (1991). Teachers' ability to interpret standardized test scores. Educational
Measurement: Issues and Practice, 10(4), 16-18.
Impara, J. C., & Plake, B. S. (in press). Comparing counselors',
school administrators', and teachers' knowledge in student assessment.
Measurement and Evaluation in Counseling and Development.
Infantino, R. L. (1976). Testing and accountability: A survey of the
knowledge and attitudes of New York State secondary school English teachers.
Doctoral dissertation, State University of New York at Buffalo. (University
Microfilms No. 77-614).
James C. Impara is Professional Associate, Buros Institute of Mental
Measurements, University of Nebraska-Lincoln, Lincoln, NE.