Scoring Rubrics Part I: What and When. ERIC Digest.

by Moskal, Barbara M.

Scoring rubrics have become a common method for evaluating student work in both K-12 and college classrooms. The purpose of this Digest is to describe the different types of scoring rubrics and explain why scoring rubrics are useful. A companion Digest provides a process for developing scoring rubrics and describes resources that contain examples of the different types of scoring rubrics and offer further guidance in the development process. 


Scoring rubrics are descriptive scoring schemes that are developed by teachers or other evaluators to guide the analysis of the products or processes of students' efforts (Brookhart, 1999). Scoring rubrics are typically employed when a judgment of quality is required and may be used to evaluate a broad range of subjects and activities. One common use of scoring rubrics is to guide the evaluation of writing samples. Judgments concerning the quality of a given writing sample may vary depending upon the criteria established by the individual evaluator. One evaluator may weight the evaluation heavily toward linguistic structure, while another may be more interested in the persuasiveness of the argument. A high-quality essay is likely to reflect a combination of these and other factors. Developing a predefined scheme for the evaluation process reduces the subjectivity involved in evaluating an essay. 

Figure 1 displays a scoring rubric that was developed to guide the evaluation of student writing samples in a college classroom (based loosely on Leydens & Thompson, 1997). This is an example of a holistic scoring rubric with four score levels. Holistic rubrics will be discussed in detail later in this document. As the example illustrates, each score category describes the characteristics of a response that would receive the respective score. Describing the characteristics of responses within each score category increases the likelihood that two independent evaluators will assign the same score to a given response. This concept of examining the extent to which two independent evaluators assign the same score to a given response is referred to as "rater reliability." 


Writing samples are just one example of performances that may be evaluated using scoring rubrics. Scoring rubrics have also been used to evaluate group activities, extended projects, and oral presentations (e.g., Chicago Public Schools, 1999; Danielson, 1997a, 1997b; Schrock, 2000; Moskal, 2000). They are equally appropriate to the English, mathematics, and science classrooms (e.g., Chicago Public Schools, 1999; State of Colorado, 1999; Danielson, 1997a, 1997b; Danielson & Marquez, 1998; Schrock, 2000). Both pre-college and college instructors use scoring rubrics for classroom evaluation purposes (e.g., State of Colorado, 1999; Schrock, 2000; Moskal, 2000; Knecht, Moskal & Pavelich, 2000). Where and when a scoring rubric is used does not depend on the grade level or subject, but rather on the purpose of the assessment. 

Figure 1. Example of a holistic scoring rubric with four score levels for evaluating writing samples.

Meets Expectations for a First Draft of a Professional Report 

* The document can be easily followed. A combination of the following is apparent in the document: effective transitions are used throughout, a professional format is used, and the graphics are descriptive and clearly support the document's purpose. 

* The document is clear and concise, and appropriate grammar is used throughout. 


Adequate 

* The document can be easily followed. A combination of the following is apparent in the document: basic transitions are used, a structured format is used, and some supporting graphics are provided but are not clearly explained. 

* The document contains minimal distractions that appear in a combination of the following forms: flow in thought, graphical presentations, grammar/mechanics. 

Needs Improvement 

* Organization of document is difficult to follow due to a combination of the following: inadequate transitions, rambling format, insufficient or irrelevant information, ambiguous graphics. 

* The document contains numerous distractions that appear in a combination of the following forms: flow in thought, graphical presentations, grammar/mechanics. 


Inadequate 

* There appears to be no organization of the document's contents. 

* Sentences are difficult to read and understand. 

Scoring rubrics are one of many alternatives available for evaluating student work. For example, checklists may be used rather than scoring rubrics in the evaluation of writing samples. Checklists are an appropriate choice for evaluation when the information that is sought is limited to the determination of whether specific criteria have been met. Scoring rubrics are based on descriptive scales and support the evaluation of the extent to which criteria have been met. 

The assignment of numerical weights to sub-skills within a process is another evaluation technique that may be used to determine the extent to which given criteria have been met. Numerical values, however, do not provide students with an indication as to how to improve their performance. A student who receives a "70" out of "100" may not know how to improve his or her performance on the next assignment. Scoring rubrics respond to this concern by providing descriptions at each level as to what is expected. These descriptions help students understand the basis for their scores and what they need to do to improve their future performances. 

Whether a scoring rubric is an appropriate evaluation technique depends upon the purpose of the assessment. Scoring rubrics provide at least two benefits in the evaluation process. First, they support the examination of the extent to which the specified criteria have been reached. Second, they provide feedback to students concerning how to improve their performances. If these benefits are consistent with the purpose of the assessment, then a scoring rubric is likely to be an appropriate evaluation technique. 


Several different types of scoring rubrics are available. Which variation of the scoring rubric should be used in a given evaluation is also dependent upon the purpose of the evaluation. This section describes the differences between analytic and holistic scoring rubrics and between task-specific and general scoring rubrics. 
Analytic versus Holistic 

In the initial phases of developing a scoring rubric, the evaluator needs to determine the evaluation criteria. For example, two factors that may be considered in the evaluation of a writing sample are whether appropriate grammar is used and the extent to which the given argument is persuasive. An analytic scoring rubric, much like a checklist, allows for the separate evaluation of each of these factors; each criterion is scored on a different descriptive scale (Brookhart, 1999). 

The rubric in Figure 1 could be extended to include a separate set of criteria for the evaluation of the persuasiveness of the argument. This extension would result in an analytic scoring rubric with two factors, quality of written expression and persuasiveness of the argument, each of which would receive a separate score. Occasionally, numerical weights are assigned to the evaluation of each criterion. As discussed earlier, the benefit of using a scoring rubric rather than weighted scores is that scoring rubrics provide a description of what is expected at each score level. Students may use this information to improve their future performance. 

Occasionally, it is not possible to separate an evaluation into independent factors. When there is an overlap between the criteria set for the evaluation of the different factors, a holistic scoring rubric may be preferable to an analytic scoring rubric. In a holistic scoring rubric, the criteria are considered together on a single descriptive scale (Brookhart, 1999). Holistic scoring rubrics support broader judgments concerning the quality of the process or product. 

Choosing an analytic scoring rubric does not eliminate the possibility of a holistic factor. A holistic judgment may be built into an analytic scoring rubric as one of the score categories. One difficulty with this approach is that overlap between the criteria set for the holistic judgment and the other evaluated factors cannot be avoided. When one of the purposes of the evaluation is to assign a grade, this overlap should be carefully considered and controlled. The evaluator should determine whether the overlap results in certain criteria being weighted more than was originally intended. In other words, the evaluator needs to be careful that the student is not unintentionally penalized severely for a given mistake. 

General versus Task-Specific 

Scoring rubrics may be designed for the evaluation of a specific task or the evaluation of a broader category of tasks. If the purpose of a given course is to develop a student's oral communication skills, a general scoring rubric may be developed and used to evaluate each of the oral presentations given by that student. This approach would allow the students to use the feedback they acquired from the last presentation to improve their performance on the next presentation. 

If each oral presentation focuses upon a different historical event and the purpose of the assessment is to evaluate the students' knowledge of the given event, a general scoring rubric for evaluating a sequence of presentations may not be adequate. Historical events differ in both influencing factors and outcomes. In order to evaluate the students' factual and conceptual knowledge of these events, it may be necessary to develop separate scoring rubrics for each presentation. A "task-specific" scoring rubric is designed to evaluate student performances on a single assessment event. 

Scoring rubrics may be designed to contain both general and task-specific components. If the purpose of a presentation is to evaluate both students' oral presentation skills and their knowledge of the historical event that is being discussed, an analytic rubric could be used that contains both a general component and a task-specific component. The oral component of the rubric may consist of a general set of criteria developed for the evaluation of oral presentations; the task-specific component of the rubric may contain a set of criteria developed with the specific historical event in mind. 


References 

Brookhart, S. M. (1999). The Art and Science of Classroom Assessment: The Missing Part of Pedagogy. ASHE-ERIC Higher Education Report (Vol. 27, No.1). Washington, DC: The George Washington University, Graduate School of Education and Human Development. 

Chicago Public Schools (1999). Rubric Bank. [Available online at: Rubric Bank/rubric_bank.html]. 

Danielson, C. (1997a). A Collection of Performance Tasks and Rubrics: Middle School Mathematics. Larchmont, NY: Eye on Education Inc. 

Danielson, C. (1997b). A Collection of Performance Tasks and Rubrics: Upper Elementary School Mathematics. Larchmont, NY: Eye on Education, Inc. 

Danielson, C. & Marquez, E. (1998). A Collection of Performance Tasks and Rubrics: High School Mathematics. Larchmont, NY: Eye on Education, Inc. 

ERIC/AE (2000a). Search ERIC/AE draft abstracts. [Available online at:]. 

ERIC/AE (2000b). Scoring Rubrics - Definitions & Construction [Available online at:]. 

Knecht, R., Moskal, B. & Pavelich, M. (2000). The Design Report Rubric: Measuring and Tracking Growth through Success. Paper to be presented at the annual meeting of the American Society for Engineering Education. 

Leydens, J. & Thompson, D. (1997, August). Writing Rubrics, Design (EPICS) I. Internal Communication, Design (EPICS) Program, Colorado School of Mines. 

Moskal, B. (2000). Assessment Resource Page. [Available online at:]. 

Schrock, K. (2000). Kathy Schrock's Guide for Educators. [Available online at:]. 

State of Colorado (1998). The Rubric. [Available online at: #writing]. 
