ERIC Identifier: ED481715
Publication Date: 2003-06-00
Author: Moskal, Barbara M
Source: ERIC Clearinghouse on Assessment and Evaluation
Developing Classroom Performance Assessments and Scoring
Rubrics - Part II. ERIC Digest.
A difficulty faced in the use of performance assessments is determining how students' responses will be scored. Scoring rubrics
provide one mechanism for scoring student responses to a variety of different
types of performance assessments. This two-part Digest draws from the current
literature and the author's experience to identify suggestions for developing
performance assessments and their accompanying scoring rubrics.
This Digest addresses 1) Developing Scoring Rubrics, 2) Administering
Performance Assessments and 3) Scoring, Interpreting and Using Results.
Another Digest addresses Writing Goals and Objectives, and Developing Performance
Assessments. These categories guide the reader through the four phases of the classroom assessment process: planning, gathering, interpreting and using (Moskal, 2000a). The current article assumes that the reader has a basic knowledge of both performance assessments and scoring rubrics.
DEVELOPING SCORING RUBRICS
Scoring rubrics are one method that may be used to evaluate students' performance assessments. Two types of scoring rubrics are frequently discussed in the literature: analytic and holistic. Analytic scoring rubrics divide a performance into separate facets, and each facet is evaluated using a separate scale. Holistic scoring rubrics use a single scale to evaluate the larger process. In holistic scoring rubrics, all of the facets that make up the task are evaluated in combination. The recommendations that follow are appropriate to both analytic and holistic scoring rubrics.
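To make the distinction concrete, the short sketch below records one hypothetical student's results both ways; the facet names and the four-point scale are assumptions invented for illustration, not drawn from this Digest.

```python
# Hypothetical example: the same student performance recorded two ways.
# The facet names and the 1-4 scale are illustrative assumptions only.

# Analytic scoring: each facet of the performance receives its own score.
analytic_scores = {
    "mathematical reasoning": 3,
    "use of evidence": 4,
    "clarity of explanation": 2,
}

# Holistic scoring: a single score summarizes the performance as a whole.
holistic_score = 3

print("Analytic scores by facet:", analytic_scores)
print("Holistic score:", holistic_score)
```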
Recommendations for developing scoring rubrics:
1. The criteria set forth within a scoring rubric should be clearly aligned with the requirements of the task and the stated goals and objectives. As was discussed earlier, a list can be compiled that describes how the elements of the task map into the goals and objectives. This list can be extended to include how the criteria of the scoring rubric address those same elements, goals and objectives.
A detailed discussion of how to develop scoring rubrics, both analytic and holistic, is immediately available through this journal. Mertler (2001) and
Moskal (2000b) have both described the differences between analytic
and holistic scoring rubrics and how to develop each type of rubric. Books
have also been written or compiled (e.g., Arter & McTighe, 2001; Boston,
2002) that provide detailed examinations of the rubric development process
and the different types of scoring rubrics.
ADMINISTERING PERFORMANCE ASSESSMENTS
Once a performance assessment and its accompanying scoring rubric are
developed, it is time to administer the assessment to students. The recommendations
that follow are specifically developed to guide the administration process.
Recommendations for administering performance assessments:
1. Both written and oral explanations of tasks should be clear, concise and presented in language that the students understand. If the task is presented in written form, then the reading level of the students should be given careful consideration. Students should be given the opportunity to ask clarification questions before completing the task.
2. Appropriate tools need to be available to support the completion
of the assessment activity. Depending on the activity, students may need
access to library resources, computer programs, laboratories, calculators,
or other tools. Before the task is administered, the teacher should determine
what tools will be needed and ensure that these tools are available during
the task administration.
3. Scoring rubrics should be discussed with the students before they begin the
assessment activity. This allows the students to adjust their efforts
in a manner that
maximizes their performance. Teachers are often concerned that by giving
the students the criteria in advance, all of the students will perform
at the top level. In practice, this rarely (if ever) occurs.
The first two recommendations provided above are appropriate well beyond
the use of performance assessments and scoring rubrics. These recommendations
are consistent with the Standards of the American Educational Research
Association, American Psychological Association & National Council
on Measurement in Education (1999) with respect to assessment and evaluation.
The final recommendation is consistent with prior articles that concern
the development of scoring rubrics (Brualdi, 1998; Moskal & Leydens, 2000).
SCORING, INTERPRETING AND USING RESULTS
As was discussed earlier, a scoring rubric may be used to score student
responses to performance assessments. This section provides recommendations
for scoring, interpreting and using the results of performance assessments.
Recommendations for scoring, interpreting and using results of performance assessments:
1. Two independent raters should be able to acquire consistent scores using the
categories described in the scoring rubric. If the categories of the
scoring rubric are
written clearly and concisely, then two raters should be able to score
the same set of
papers and acquire similar results.
2. A given rater should be able to acquire consistent scores across
time using the
scoring rubric. Knowledge of who a student is or the mood of a rater
on a given day may impact the scoring process. Raters should frequently
refer to the scoring rubric to ensure that they are not informally changing
the criteria over time.
3. A set of anchor papers should be used to assist raters in the scoring
process. Anchor papers are student papers that have been selected as examples
of performances at the different levels of the scoring rubric. These papers
provide a comparison set for raters as they score the student responses.
Raters should frequently refer to these papers to ensure the consistency
of scoring over time.
4. A set of anchor papers with students' names removed can be used to illustrate to both students and parents the different levels of the scoring rubric. Ambiguities within the rubric can often be clarified through the use of such examples, making clear to both students and parents the expectations set forth through the scoring rubric.
5. The connection between the score or grade and the scoring rubric should be immediately apparent. If an analytic rubric is used, then the report should contain the scores for each analytic level. If a summary score or grade is provided, an explanation should be included as to how the summary score or grade was determined (one possible approach is sketched after this list). Both students and parents should be able to understand how the final grade or score is linked to the scoring criteria.
6. The results of the performance assessment should be used to improve instruction and the assessment process. What did the teacher learn from the student responses? How can this be used to improve future classroom instruction? What did the teacher learn about the performance assessment or the scoring rubric? How can these instruments be improved for future instruction? The information that
is acquired through classroom assessment should be actively used to improve
future instruction and assessment.
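As a concrete illustration of Recommendation 5, the sketch below shows one possible way to report analytic facet scores alongside the summary grade computed from them; the facet names, weights and grade cut-offs are assumptions made for this example, not values prescribed by the Digest or by the Northwest Regional Educational Laboratory chapter cited below.

```python
# Hypothetical analytic scores on a 0-4 scale; the facets, weights and
# cut-offs below are assumptions for illustration only.
facet_scores = {"reasoning": 3, "evidence": 4, "communication": 2}
facet_weights = {"reasoning": 0.5, "evidence": 0.3, "communication": 0.2}
MAX_POINTS = 4  # top level of the rubric scale

# Weighted percentage, so the report can show the facet scores and
# exactly how the summary score was determined.
percentage = 100 * sum(
    facet_weights[f] * facet_scores[f] for f in facet_scores
) / MAX_POINTS

def letter_grade(pct: float) -> str:
    """Map a percentage to a letter grade using example cut-offs."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return grade
    return "F"

print("Facet scores:", facet_scores)
print(f"Summary score: {percentage:.1f}% -> grade {letter_grade(percentage)}")
```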
The first three recommendations concern the important concept of "rater
reliability" or the consistency between scores. Moskal and Leydens (2000)
examine the concept of rater reliability in an article that was previously
published in this journal. A more comprehensive source that addresses both
validity and reliability of scoring rubrics is a book by Arter and McTighe
(2001), Scoring Rubrics in the Classroom: Using Performance Criteria for
Assessing and Improving Student Performance. The American Educational Research
Association, American Psychological Association and National Council on
Measurement in Education (1999) also address these issues in their Standards
document. For information concerning methods for converting rubric scores
to grades, see "Converting Rubric Scores to Letter Grades" (Northwest Regional
Educational Laboratory, 2002).
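To make the idea of rater reliability concrete, here is a minimal sketch of one common consistency check, exact percent agreement between two raters scoring the same set of papers; the scores are invented for the example, and percent agreement is only one of several indices treated in the sources cited above.

```python
# Hypothetical rubric scores (1-4) assigned by two raters to the same eight papers.
rater_a = [4, 3, 2, 4, 1, 3, 3, 2]
rater_b = [4, 3, 3, 4, 1, 3, 2, 2]

# Exact percent agreement: the share of papers receiving identical scores.
matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)

print(f"Exact agreement: {agreement:.0%} ({matches} of {len(rater_a)} papers)")
# A low value suggests the rubric categories need clearer wording or that
# the raters should revisit the anchor papers before continuing.
```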
The purpose of this article is to provide a set of recommendations for
the development of performance assessments and scoring rubrics. These recommendations
can be used to guide a teacher through the four phases of classroom assessment:
planning, gathering, interpreting and using. Extensive literature is available
on each phase of the assessment process and this article addresses only
a small sample of that work. The reader is encouraged to use the previously
cited work as a starting place to better understand the use of performance
assessments and scoring rubrics in the classroom.
This article was originally developed as part of a National Science
Foundation (NSF) grant (EEC 0230702), Engineering Our World. The opinions
and ideas expressed in this article are those of the author and not of the National Science Foundation.
REFERENCES
Boston, C. (Ed.). (2002). Understanding Scoring Rubrics. University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Brualdi, A. (1998). "Implementing performance assessment in the classroom."
Practical Assessment, Research & Evaluation, 6(2).
Mertler, C. A. (2001). "Designing scoring rubrics for your classroom."
Practical Assessment, Research & Evaluation, 7(25).
Moskal, B. (2000a). "An Assessment Model for the Mathematics Classroom."
Mathematics Teaching in the Middle School, 6 (3), 192-194.
Moskal, B. (2000b). "Scoring Rubrics: What, When and How?" Practical
Assessment, Research & Evaluation, 7(3).
Northwest Regional Educational Laboratory (2002). "Converting Rubric
Scores to Letter Grades." In C. Boston (Ed.), Understanding Scoring
Rubrics (pp. 34-40). University of Maryland, MD: ERIC Clearinghouse on
Assessment and Evaluation.
Perlman, C. (2002). "An Introduction to Performance Assessment Scoring
Rubrics". In C. Boston's (Eds.), Understanding Scoring Rubrics (pp. 5-13).
University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Rogers, G. & Sando, J. (1996). Stepping Ahead: An Assessment Plan
Development Guide. Terre Haute, Indiana: Rose-Hulman Institute of Technology.
Wiggins, G. (1990). "The case for authentic assessment." Practical Assessment,
Research & Evaluation, 2(2).
Wiggins, G. (1993). Assessing Student Performances. San Francisco: Jossey-Bass.