Scoring Rubrics Part II: How? ERIC Digest.

by Moskal, Barbara M. 

An earlier Digest, "Scoring Rubrics Part I: What and When?", described the different types of scoring rubrics and explained why scoring rubrics are useful. The purpose of this Digest is to provide a process for developing scoring rubrics. This Digest concludes with a discussion of additional resources that provide examples of scoring rubrics and further guidance in the rubric development process. 


The first step in developing a scoring rubric is to clearly identify the qualities that need to be displayed in a student's work to demonstrate proficient performance (Brookhart, 1999). The identified qualities will form the top level or levels of scoring criteria for the scoring rubric. The decision can then be made as to whether the information that is desired from the evaluation can best be acquired through the use of an analytic or holistic scoring rubric. If an analytic scoring rubric is created, then each criterion is considered separately as the descriptions of the different score levels are developed. This process results in separate descriptive scoring schemes for each evaluation factor. For holistic scoring rubrics, the collection of criteria is considered throughout the construction of each level of the scoring rubric, and the result is a single descriptive scoring scheme. 
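The difference between the two schemes can be made concrete with a small sketch. The following is an illustrative example only (not taken from the Digest): an analytic rubric is represented as separate descriptive scoring schemes, one per criterion, and each criterion is scored independently. The criterion names and level descriptions are hypothetical.

```python
# Hypothetical analytic scoring rubric: each evaluation factor gets its
# own descriptive scoring scheme, and each factor is scored separately.
ANALYTIC_RUBRIC = {
    "mathematical_accuracy": {
        3: "Calculations contain no errors.",
        2: "Calculations contain minor errors that do not affect the result.",
        1: "Calculations contain errors that lead to an incorrect result.",
    },
    "explanation": {
        3: "Reasoning is complete and clearly stated.",
        2: "Reasoning is present but incomplete.",
        1: "Little or no reasoning is given.",
    },
}

def analytic_score(ratings):
    """Return the per-criterion scores and their total.

    `ratings` maps each criterion name to the score level assigned
    for that criterion (scored independently of the others).
    """
    for criterion in ratings:
        if criterion not in ANALYTIC_RUBRIC:
            raise KeyError(f"Unknown criterion: {criterion}")
    total = sum(ratings.values())
    return ratings, total

scores, total = analytic_score(
    {"mathematical_accuracy": 3, "explanation": 2}
)
```

A holistic rubric, by contrast, would collapse these factors into a single descriptive scheme and assign one overall score per response rather than one score per criterion.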

After defining the criteria for the top level of performance, the evaluator may choose to define the criteria for the lowest level of performance. What type of performance would suggest a very limited understanding of the concepts that are being assessed? The contrast between the criteria for top-level performance and bottom-level performance is likely to suggest appropriate criteria for a middle level of performance. This approach would result in three score levels. 

If greater distinctions are desired, then comparisons can be made between the criteria for each existing score level. The contrast between levels is likely to suggest criteria that may be used to create score levels that fall between the existing score levels. This comparison process can be used until the desired number of score levels is reached or until no further distinctions can be made. If meaningful distinctions between the score categories cannot be made, then additional score categories should not be created (Brookhart, 1999). It is better to have a few meaningful score categories than to have many score categories that are difficult or impossible to distinguish. 

Each score category should be defined using descriptions of the work rather than judgments about the work (Brookhart, 1999). For example, "Student's mathematical calculations contain no errors," is preferable to, "Student's calculations are good." The phrase "are good" requires the evaluator to make a judgment, whereas the phrase "no errors" is quantifiable. In order to determine whether a rubric provides adequate descriptions, another teacher may be asked to use the scoring rubric to evaluate a subset of student responses. Differences between the scores assigned by the original rubric developer and the second scorer will suggest how the rubric may be further clarified. 
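The two-scorer check described above can be quantified. The following is an illustrative sketch (not part of the Digest, and the scores are hypothetical): it computes the exact-agreement rate, the fraction of responses on which the rubric developer and a second scorer assigned the same score. A low rate signals that the category descriptions need clarification.

```python
def exact_agreement(scores_a, scores_b):
    """Fraction of responses on which two raters assigned the same score."""
    if len(scores_a) != len(scores_b):
        raise ValueError("Both raters must score the same set of responses.")
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

# Hypothetical scores assigned to eight student responses by the
# rubric developer and by a second teacher using the same rubric.
developer = [3, 2, 3, 1, 2, 3, 2, 1]
colleague = [3, 2, 2, 1, 2, 3, 1, 1]

rate = exact_agreement(developer, colleague)  # 6 of 8 responses match: 0.75
```

Examining the specific responses where the scorers disagreed (here, the third and seventh) points to the score-level descriptions that are ambiguous and need rewording.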


Currently, there is a broad range of resources available to teachers who wish to use scoring rubrics in their classrooms. These resources differ both in the subject that they cover and the level that they are designed to assess. The examples provided below are only a small sample of the information that is available. 

For K-12 teachers, the state of Colorado (1998) has developed an online set of general, holistic scoring rubrics that are designed for the evaluation of various writing assessments. The Chicago Public Schools (1999) maintain an extensive electronic list of analytic and holistic scoring rubrics that span the broad array of subjects represented throughout K-12 education. For mathematics teachers, Danielson has developed a collection of reference books that contain scoring rubrics appropriate to elementary, middle school, and high school mathematics classrooms (1997a, 1997b; Danielson & Marquez, 1998). 

Resources are also available to assist college instructors who are interested in developing and using scoring rubrics in their classrooms. Kathy Schrock's Guide for Educators (2000) contains electronic materials for both the pre-college and the college classroom. In The Art and Science of Classroom Assessment: The Missing Part of Pedagogy, Brookhart (1999) provides a brief but comprehensive review of the literature on assessment in the college classroom, including a description of scoring rubrics and why their use is increasing in the college classroom. Moskal (2000) has developed a web site that contains links to a variety of college assessment resources, including scoring rubrics. 

The resources described above represent only a fraction of those that are available. The ERIC Clearinghouse on Assessment and Evaluation [ERIC/AE] provides several additional useful web sites. One of these, Scoring Rubrics - Definitions & Construction (2000b), specifically addresses questions that are frequently asked with regard to scoring rubrics. This site also provides electronic links to web resources and bibliographic references to books and articles that discuss scoring rubrics. For more recent developments within assessment and evaluation, a search can be completed on the abstracts of papers that will soon be available through ERIC/AE (2000a). This site also contains a direct link to ERIC/AE abstracts that are specific to scoring rubrics. 

Search engines that are available on the web may be used to locate additional electronic resources. When using this approach, the search criteria should be as specific as possible. Generic searches that use the terms "rubrics" or "scoring rubrics" will yield a large volume of references. When seeking information on scoring rubrics from the web, it is advisable to use an advanced search and specify the grade level, subject area and topic of interest. If more resources are desired than result from this conservative approach, the search criteria can be expanded. 


Brookhart, S. M. (1999). The Art and Science of Classroom Assessment: The Missing Part of Pedagogy. ASHE-ERIC Higher Education Report (Vol. 27, No.1). Washington, DC: The George Washington University, Graduate School of Education and Human Development. 

Chicago Public Schools (1999). Rubric Bank. [Available online at: and_Rubrics/Rubric_Bank/rubric_bank.html]. 

Danielson, C. (1997a). A Collection of Performance Tasks and Rubrics: Middle School Mathematics. Larchmont, NY: Eye on Education Inc. 

Danielson, C. (1997b). A Collection of Performance Tasks and Rubrics: Upper Elementary School Mathematics. Larchmont, NY: Eye on Education Inc. 

Danielson, C. & Marquez, E. (1998). A Collection of Performance Tasks and Rubrics: High School Mathematics. Larchmont, NY: Eye on Education Inc. 

ERIC/AE (2000a). Search ERIC/AE draft abstracts. [Available online at:]. 

ERIC/AE (2000b). Scoring Rubrics - Definitions & Construction [Available online at:]. 

Knecht, R., Moskal, B. & Pavelich, M. (2000). The Design Report Rubric: Measuring and Tracking Growth through Success. Paper to be presented at the annual meeting of the American Society for Engineering Education. 

Leydens, J. & Thompson, D. (1997, August). Writing Rubrics Design (EPICS) I. Internal Communication, Design (EPICS) Program, Colorado School of Mines. 

Moskal, B. (2000). Assessment Resource Page. [Available online at: Academic/assess/Resource.htm]. 

Schrock, K. (2000). Kathy Schrock's Guide for Educators. [Available online at:]. 

State of Colorado (1998). The Rubric. [Available online at: asrubric.htm #writing]. 

This Digest originally appeared as part of Moskal, Barbara M.(2000). Scoring Rubrics: What, When and How? Practical Assessment, Research & Evaluation, 7(3). [Available online:]. 
