ERIC Identifier: ED480994
Publication Date: 2003/08/00
Author: Wenning, Richard; Herdman, Paul A.; Smith, Nelson; McMahon, Neal; Washington, Kadesha
Source: ERIC Clearinghouse on Urban Education, Institute for Urban and Minority Education
No Child Left Behind: Testing, Reporting, and Accountability. ERIC Digest.
In a major expansion of the federal role in education, the No Child Left Behind Act of 2001 (NCLB) requires annual testing, specifies a method for judging school effectiveness, sets a timeline for progress, and establishes specific consequences in the case of failure. As the use of standardized testing to measure school accountability has expanded, so has the list of arguments for excusing the low achievement of whole categories of students. While special education law provides for testing with “accommodations,” in practice it has pushed educators to focus more on procedural compliance. The achievement of language-minority students has often been overlooked or mismeasured as school districts lacked the skill or will to administer appropriate assessments.
This digest reviews how testing and reporting requirements will operate with respect to different groups of students and examines factors that could delay or dilute the guarantee of educational accountability in the academic achievement of all children.
Different States, Different Tests
Although the Act mandates annual testing for all states by 2005-2006, it does not provide federal standards for testing practices. Left to their own discretion, states have created a broad array of approaches. Some states test reading and math every year; others test those subjects at three- or four-year intervals; still others test a variety of subjects in a variety of grades.
One critical difference in testing practices is whether states use norm-referenced or criterion-referenced tests. Norm-referenced tests assess a student’s broad knowledge, measuring performance against a relevant comparison group. Criterion-referenced tests measure specific skills in relation to pre-established standards of academic performance. Advocates of standards-based reform prefer criterion-referenced tests because they can be directly aligned to a given state’s standards. However, because they are generally individually designed for each state, they are far more expensive to create and produce results that are more difficult to compare.
Evolving Testing Patterns. While the Act mandates annual testing by 2005-2006, it does not explicitly require states to administer the same test from year to year. Thus, states like Louisiana and Maryland, which test students in grades three through eight with a mix of norm- and criterion-referenced tests, may technically be in compliance, yet produce results that lack consistency over time.
States have some flexibility as to what subjects are tested and when. Prior to 2005-2006, they must measure proficiency in mathematics and reading or language arts, and do this at least once during grades three through five, six through nine, and 10 through 12. By 2005-2006, states must measure student achievement annually against state academic and achievement standards in grades three through eight in mathematics and reading or language arts. Beginning in 2007-2008, states must also include science assessments at least once during each of these three grade spans. So, by 2007, students will be tested annually from grades 3 to 8 in reading and math, tested twice in the elementary grades in science, and then in reading, math, and science at least once in grades 10-12.
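The timetable above can be restated compactly as data. The sketch below is our summary for illustration, not statutory text; the grade spans follow the digest's description.

```python
# Grade spans named in the digest: 3-5, 6-9, and 10-12.
SPANS = [(3, 5), (6, 9), (10, 12)]

def tests_through_grade_8():
    """Count the assessments a student sees from grade 3 through grade 8 once
    the 2007-08 rules apply: annual reading and math in grades 3-8, plus
    science once in each grade span that overlaps grades 3-8."""
    reading = math = 8 - 3 + 1  # six annual administrations of each subject
    science = sum(1 for lo, hi in SPANS if lo <= 8)  # spans 3-5 and 6-9
    return {"reading": reading, "math": math, "science": science}

print(tests_through_grade_8())
```

This matches the digest's summary: annual reading and math in grades 3-8, science twice by the end of grade 8, with one more round of all three subjects due in grades 10-12.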
Definitions of “proficiency” can vary from state to state. Beginning in the 2002-2003 school year, every state must participate in biennial assessments of fourth- and eighth-grade reading and mathematics under the National Assessment of Educational Progress (NAEP). Further, NAEP data will be used to compare results on state tests with performance on NAEP assessments (U.S. Department of Education, 2003).
Testing All Student Groups
NCLB extends federally mandated testing to a wider population by reaching all student groups, not just those served by Title I. Testing requirements cover all K-12 public school students, including those attending charter schools. Further, state assessments must be disaggregated within each state, local education agency (LEA), and school by student demographic subgroups, including:
• economically disadvantaged students;
• students from major racial and ethnic groups;
• students with disabilities; and
• students with limited English proficiency (LEP).
This provision attempts to rectify distortions and variations masked by the widespread reliance on schoolwide averages. In the past, when states were given the discretion to make exemption decisions, the result was widespread exclusion of students with disabilities from large-scale state and national assessments.
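The difference between a schoolwide average and disaggregated results can be sketched in a few lines of Python. The subgroup labels and per-student outcomes below are invented for illustration only.

```python
from collections import defaultdict

# Invented records for one school: (subgroup, proficient?) per student.
results = [
    ("economically_disadvantaged", True),
    ("economically_disadvantaged", False),
    ("economically_disadvantaged", False),
    ("limited_english_proficient", False),
    ("limited_english_proficient", False),
    ("all_other", True),
    ("all_other", True),
    ("all_other", True),
]

def percent_proficient(rows):
    """Percent of students marked proficient in a set of records."""
    return 100.0 * sum(ok for _, ok in rows) / len(rows)

def disaggregate(rows):
    """Percent proficient per subgroup -- the breakdown NCLB report cards require."""
    by_group = defaultdict(list)
    for group, ok in rows:
        by_group[group].append((group, ok))
    return {g: percent_proficient(r) for g, r in by_group.items()}

print(percent_proficient(results))  # schoolwide: 50.0
print(disaggregate(results))        # reveals 0.0 for limited_english_proficient
```

The schoolwide figure of 50 percent hides the fact that no LEP student in this toy dataset reached proficiency, which is exactly the distortion the disaggregation requirement targets.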
Reasons for such exemptions ranged from a desire to protect students with disabilities from the stresses of testing, to an aversion to the difficulties of specialized test administration, to the desire to raise a school's average scores (Heubert and Hauser, 1998).
Districts fearing misdiagnoses because of language barriers may allow such students to remain in English as a Second Language (ESL) programs for the maximum three years allowed under most state laws before they are assessed. Of the nation’s 2.9 million students enrolled in programs for English language learners, an estimated 184,000 have disabilities, according to the U.S. Department of Education (DOE) (Zehr, 2001). NCLB’s provisions clarifying the time frame for participation in ESL tracks, coupled with the expectation for 95 percent participation within student subgroups, should mitigate this problem.
NCLB unmistakably includes students with disabilities and LEP students under its testing and accountability provisions and reinforces prior federal requirements for reasonable accommodations needed to achieve that end.
Whose Scores Count
While all students must participate in state testing programs, not all students’ scores will count equally in the alignment of incentives for improving school performance.
Adequate Yearly Progress. The key question is whether scores are included in measuring “Adequate Yearly Progress,” or AYP. NCLB provides a new federal definition of AYP that is more specific than the 1994 reauthorization, while still preserving some state latitude:
• Each state, using data from the 2001-2002 school year, must establish a baseline for measuring the percentage of students meeting or exceeding the state’s proficiency level of academic achievement. The state must use the higher of either the proficiency level of the state’s lowest-achieving group or the proficiency level of the students at the 20th percentile in the state, based on enrollment.
A New Way of Reporting Scores
Reporting results. Beginning in the 2002-2003 school year, states must provide parents and the public with annual report cards, which include information on student achievement disaggregated by subgroups, as described above. Taken together, the AYP and reporting provisions provide a new level of transparency about school performance, enabling parents and educators to make accountability more than a slogan. Yet a closer look reveals two potentially significant concerns.
First, grade-level-specific performance does not need to be monitored; thus, schools can provide schoolwide averages across grades rather than reports for all student subgroups in each grade. Yet without such reporting, schools can focus their energies on grades with higher achieving students -- while ignoring grades with lower achieving students -- and still increase their school average.
A second and perhaps more serious concern is NCLB’s use of the schoolwide average of student proficiency as the yardstick of progress. Although results will be disaggregated by student groups, reliance on this measure may discourage use of “value-added” analytical methods, which measure the impact of a school on the progress of individual students over time. States have latitude in this area and there is reason for hope that such analytical methods will be used.
Nevertheless, because the new federal definition of AYP encourages the analysis of average proficiency levels across student groups, the progress of individual students could be lost. A problem for state and national policymakers, this weakness in NCLB may undermine its utility most seriously at the school and district level. When there is no annual measurement of individual student performance over time, educators lack important data needed to evaluate their own work - to understand the “value added” by their efforts. Comparisons of schoolwide averages can be misleading and uninformative when the composition of classes changes from one year to the next.
Arguably, the measurement of progress required by NCLB mistakes the school building for the students. Without a focus on student progress over time, superintendents and state boards of education will be measuring the percentage of students at the proficient level and calculating the change from year to year, but the numbers will refer to the apples who were in the building last year versus the oranges there now.
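A toy numerical example makes the apples-and-oranges problem concrete. All names and scores below are invented: the schoolwide percent proficient falls from one year to the next even though every student who stayed improved, simply because the student population changed.

```python
# Scores for one school, proficiency cutoff 65. Cal leaves after 2002;
# Dee and Eli arrive in 2003.
year_2002 = {"Ana": 60, "Ben": 70, "Cal": 80}
year_2003 = {"Ana": 68, "Ben": 75, "Dee": 50, "Eli": 55}

def pct_proficient(scores, cutoff=65):
    """Schoolwide view: percent of enrolled students at or above the cutoff."""
    return 100.0 * sum(s >= cutoff for s in scores.values()) / len(scores)

def mean_gain(before, after):
    """Value-added view: average score gain for students present in both years."""
    common = before.keys() & after.keys()
    return sum(after[k] - before[k] for k in common) / len(common)

print(pct_proficient(year_2002))          # 2 of 3 proficient
print(pct_proficient(year_2003))          # 2 of 4 proficient: apparent decline
print(mean_gain(year_2002, year_2003))    # yet continuing students gained 6.5 points
```

The schoolwide average suggests the school got worse; the value-added measure shows that every continuing student improved. The averages compare different sets of students.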
Implementation and Enforcement. The state and federal record on this issue is not encouraging. A DOE study of Title I, released seven years after the passage of the Improving America's Schools Act (IASA), found that, of the 34 states reviewed, 13 did not have adequate testing and accountability provisions for limited English proficient students; 10 had similar difficulties with disabled students; and 16 had difficulty in disaggregating the data as required (U.S. Department of Education, 2001). Moreover, while few states have met the requirements of IASA even now, no state education agencies have been financially penalized for not complying with the Elementary and Secondary Education Act (Robelen, 2001).
Congress and the Administration should be lauded for enacting legislation that is focused on standards of achievement. However, if no child is to be left behind, states will struggle to implement NCLB, causing tension over the federal enforcement role. Additionally, the DOE should move to expand and strengthen the quality of data collected for accountability purposes. By mandating annual testing of entire school populations, NCLB creates an opportunity, but not an obligation, to measure the progress made by cohorts of students over time. It is Congress’s obligation to back up this opportunity with enough funds so that states may develop longitudinal data systems.
Heubert, J.P. & Hauser, R.M., Eds. (1998). High stakes: Testing for tracking, promotion and graduation. Washington, DC: National Research Council. (ED 467 572)
High standards for all students: A report from the National Assessment of Title I on progress and challenges since the 1994 reauthorization. (2001). Washington, DC: U.S. Department of Education, Office of the Under Secretary, Planning and Evaluation Service. (ED 457 280)
No Child Left Behind: A parent's guide. (2003). Washington, DC: U.S. Department of Education. Available: http://www.nclb.gov/next/faqs/testing.html
Robelen, E.W. (2001, November 28). States sluggish on execution of 1994 ESEA. Education Week, pp. 1, 26, 27. Available: http://www.edweek.com/ew/newstory.cfm?slug=13comply.h21
Zehr, M.A. (2001, November 7). Bilingual students with disabilities get special help. Education Week, pp. 1, 22, 23. Available: http://www.edweek.org/ew/ewstory.cfm?slug=10clark.h21&keywords=bilingual
This article is adapted from No Child Left Behind: Who Is Included in New Federal Accountability Requirements? (ED 469 962), which was prepared for “Will No Child Be Left Behind? The Challenges of Making This Law Work,” a conference sponsored by the Thomas B. Fordham Foundation.