Developing Classroom Performance Assessments and Scoring Rubrics - Part I. ERIC Digest.
One difficulty in using performance assessments is determining how students' responses will be scored. Scoring rubrics provide one mechanism for scoring student responses to many different types of performance assessments. This two-part Digest draws from the current literature and the author's experience to offer suggestions for developing performance assessments and their accompanying scoring rubrics.
The suggestions are divided into five categories:
1) Writing Goals and Objectives,
2) Developing Performance Assessments,
3) Developing Scoring Rubrics,
4) Administering Performance Assessments and
5) Scoring, Interpreting and Using Results.
This Digest addresses the first two categories; a companion Digest addresses the last three. These categories guide the reader through the four phases of the classroom assessment process: planning, gathering, interpreting and using (Moskal, 2000a). The suggestions provided throughout this paper are specific to formal assessment activities as opposed to informal assessment activities (Stiggins, 1994). Formal assessment activities are those in which students are aware that they are being evaluated; informal assessment activities are those in which students are not aware that they are being evaluated (Stiggins, 1994). Although some of these suggestions are appropriate for informal assessments, the primary focus of this paper is upon formal assessment activities.
The current article assumes that the reader has a basic knowledge of both performance assessments and scoring rubrics. If this is not the case, the reader may wish to review prior articles on performance assessments and scoring rubrics before reading this article. Brualdi's article (1998), "Implementing performance assessment in the classroom," provides an introduction to performance assessments and how they may be used in the classroom. Moskal (2000b) discusses the basics of scoring rubric development in her article, "Scoring Rubrics: What, When and How?" In the article "Designing scoring rubrics for your classroom," Mertler (2001) outlines how to develop and implement scoring rubrics in the classroom.
WRITING GOALS AND OBJECTIVES
Before a performance assessment or a scoring rubric is written or selected, the teacher should clearly identify the purpose of the activity. As is the case with any assessment, a clear statement of goals and objectives should be written to guide the development of both the performance assessment and the scoring rubric. "Goals" are broad statements of expected student outcomes and "objectives" divide the goals into observable behaviors (Rogers & Sando, 1996). For example, a goal such as "Students will understand fractions" might be divided into observable objectives such as "Students will correctly add fractions with unlike denominators." Questions such as, "What do I hope to learn about my students' knowledge or skills?," "What content, skills and knowledge should the activity be designed to assess?," and "What evidence do I need to evaluate the appropriate skills and knowledge?" can help in the identification of specific goals and objectives.
Recommendations for writing goals and objectives:
1. The statement of goals and accompanying objectives should provide a clear focus for both instruction and assessment. Stated another way, the goals and objectives for the performance assessment should be clearly aligned with the goals and objectives of instruction.
Ideally, a statement of goals and objectives is developed prior to the instructional
activity and is used to guide both instruction and assessment.
2. Both goals and objectives should reflect knowledge and information that is worthwhile for students to learn. Both the instruction and the assessment of student learning are intentional acts and should be guided through planning. Goals and objectives provide a framework for the development of this plan. Given the critical role that goals and objectives play in guiding both instruction and assessment, they should reflect important learning outcomes.
3. The relationship between a given goal and the objectives that describe that goal
should be apparent. Objectives lay the framework upon which a given goal is evaluated.
Therefore, there should be a clear link between the statement of the goal and the
objectives that define that goal.
4. All of the important aspects of the given goal should be reflected through the objectives. Once again, objectives provide the framework for evaluating the attainment of a given goal. Therefore, the accompanying set of objectives should reflect the important aspects of the goal.
5. Objectives should describe measurable student outcomes. Since objectives provide the framework for evaluation, they need to be phrased in a manner that specifies the student behavior that will demonstrate the attainment of the larger goal.
6. Goals and objectives should be used to guide the selection of an appropriate
assessment activity. When the goals and objectives are focused upon the recall of
factual knowledge, a multiple choice or short response assessment may be more
appropriate and efficient than a performance assessment. When the goals and
objectives are focused upon complex learning outcomes, such as reasoning,
communication, teamwork, etc., a performance assessment is likely to be appropriate (Perlman, 2002).
Writing goals and objectives at first appears to be simple. After all, this process primarily requires clearly defining the desired student outcomes. In practice, however, many teachers initially have difficulty creating goals and objectives that can be used to guide instruction and that can be measured. An excellent resource that specifically focuses upon the "how to" of writing measurable objectives is a book by Gronlund (2000). Other authors have also addressed these issues in subsections of larger works (e.g., Airasian, 2000; 2001; Oosterhoff, 1999).
DEVELOPING PERFORMANCE ASSESSMENTS
As the term suggests, performance assessments require a demonstration of students'
skills or knowledge (Airasian, 2000; 2001; Brualdi, 1998; Perlman, 2002).
Performance assessments can take on many different forms, which include written and oral demonstrations and activities that can be completed by either a group or an individual.
A factor that distinguishes performance assessments from other extended response
activities is that they require students to demonstrate the application of knowledge to a particular context (Brualdi, 1998; Wiggins, 1993). Through observation or analysis of a student's response, the teacher can determine what the student knows, what the student does not know and what misconceptions the student holds with respect to the purpose of the assessment.
Recommendations for developing performance assessments:
1. The selected performance should reflect a valued activity. According to Wiggins
(1990), "The best tests always teach students and teachers alike the kind of work that most matters; they are enabling and forward-looking, not just reflective of prior teaching." He suggests the use of tasks that resemble the type of activities that are known to take place in the workforce (e.g., project reports and presentations, writing legal briefs, collecting, analyzing and using data to make and justify decisions). In other words, performance assessments allow students the opportunity to display their skills and knowledge in response to "real" situations (Airasian, 2000; 2001; Wiggins, 1993).
2. The completion of performance assessments should provide a valuable learning
experience. Performance assessments require more time to administer than do other
forms of assessment. The investment of this classroom time should result in a higher
payoff. This payoff should include both an increase in the teacher's understanding of
what students know and can do and an increase in the students' knowledge of the
intended content and constructs.
3. The statement of goals and objectives should be clearly aligned with the measurable outcomes of the performance activity. Once the task has been selected, a list can be made of how the elements of the task map into the desired goals and objectives. If it is not apparent how the students' performance will map into the desired goals and objectives, then adjustments may need to be made to the task or a new task may need to be selected.
4. The task should not examine extraneous or unintended variables. Examine the task and think about whether there are elements of the task that do not map directly into the goals and objectives. Is knowledge required in the completion of the task that is
inconsistent with the purpose? Will a lack of this knowledge interfere with or prevent the students from completing the task for reasons that are not consistent with the task's purpose? If such factors exist, changes may need to be made to the task or a new task may need to be selected.
5. Performance assessments should be fair and free from bias. The phrasing of the task should be carefully constructed in a manner that eliminates gender and ethnic
stereotypes. Additionally, the task should not give an unfair advantage to a particular
subset of students. For example, a task that is heavily weighted with baseball statistics may give an unfair advantage to students who are baseball enthusiasts.
The recommendations provided above have been drawn from the broader literary base concerning the construction of performance assessments. The interested reader can acquire further details concerning the development process by consulting other articles that are available through this journal (i.e., Brualdi, 1998; Roeber, 1996; Wiggins, 1990) or books (e.g., Wiggins, 1993; 1998) that address this subject.
REFERENCES
Boston, C. (Ed.). (2002). Understanding Scoring Rubrics. University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Brualdi, A. (1998). "Implementing performance assessment in the classroom." Practical Assessment, Research & Evaluation, 6(2).
Mertler, C. A. (2001). "Designing scoring rubrics for your classroom." Practical Assessment, Research & Evaluation, 7(25).
Moskal, B. (2000a). "An Assessment Model for the Mathematics Classroom." Mathematics Teaching in the Middle School, 6(3), 192-194.
Moskal, B. (2000b). "Scoring Rubrics: What, When and How?" Practical Assessment, Research & Evaluation, 7(3).
Northwest Regional Educational Laboratory (2002). "Converting Rubric Scores to Letter Grades." In C. Boston (Ed.), Understanding Scoring Rubrics (pp. 34-40). University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Perlman, C. (2002). "An Introduction to Performance Assessment Scoring Rubrics." In C. Boston (Ed.), Understanding Scoring Rubrics (pp. 5-13). University of Maryland, MD: ERIC Clearinghouse on Assessment and Evaluation.
Rogers, G. & Sando, J. (1996). Stepping Ahead: An Assessment Plan Development Guide. Terre Haute, Indiana: Rose-Hulman Institute of Technology.
Rudner, L.M. & Schafer, W.D. (Eds.). (2002). What Teachers Need to Know about Assessment. Washington, DC: National Education Association.
Stiggins, R. (1994). Student-Centered Classroom Assessment. New York: Macmillan Publishing Company.
Wiggins, G. (1990). "The case for authentic assessment." Practical Assessment, Research & Evaluation, 2(2).
Wiggins, G. (1993). Assessing Student Performances. San Francisco: Jossey-Bass.