ERIC Identifier: ED446345
Publication Date: 2000-09-00
Author: Hertling, Elizabeth
Source: ERIC Clearinghouse on Educational Management, Eugene, OR.
Evaluating the Results of Whole-School Reform. ERIC Digest
Whole-school reform (also known as comprehensive school reform) is a process
that seeks to simultaneously change all elements of a school's operating
environment so those elements align with a central, guiding vision (Keltner
1998). The ultimate goal, of course, is to improve student performance.
Frustrated by unsuccessful piecemeal reforms and spurred by the financial
incentives of the federal Comprehensive School Reform Demonstration (CSRD)
program, which makes $50,000 annual grants available to qualifying schools,
educators are increasingly turning to whole-school reform to improve the
performance of their schools.
Does whole-school reform really work? So far, the results are modestly
positive. A 1999 study by the American Institutes for Research found that only
3 of the 24 whole-school reform models studied presented strong evidence that
they raised student achievement (AIR 1999). New American Schools (NAS) claims that
every one of its designs, when fully implemented, has improved schools'
attendance rates, parental involvement, and student performance. NAS adds, "Some
schools have not achieved the results they expected, and a few have not
experienced any improvement after adopting a design" (NAS 1999).
To determine whether its reform program is achieving the intended results, a
school must be able to conduct an effective evaluation of the reform practices.
"Schools that lack the ability to analyze their own results will always be at a
disadvantage," says NAS President John Anderson (1999).
Schools that do not evaluate their results face another distinct
disadvantage: To renew CSRD program funding, schools are required to evaluate
their whole-school reform model. This Digest examines ways that schools can
evaluate the results of their comprehensive reform program to determine what is
working and what needs to be changed.
WHAT IS REQUIRED BY CSRD TO RENEW FUNDING?
Flexibility and realism in design are two primary goals of CSRD
evaluation (Clark and Dean 2000). State and local education agencies (SEAs and
LEAs) must evaluate the implementation of CSRD programs as well as measure the
results. The U.S. Department of Education (1999) advises SEAs and LEAs to
consider two main data sources in their evaluation of CSRD programs: student
performance and program implementation.
Performance measures should be aligned with the intended outcomes of
comprehensive reform programs implemented in the state and should produce data
that are both quantitative and qualitative. Evaluation should rely on the same
assessments used to measure all students against state standards and can
be supplemented by local or school-developed assessments of student performance.
Schools may also wish to examine other aspects of school performance such as
student attendance and parental involvement.
The need for program implementation data stems from the substantial body of
research demonstrating the important role of implementation in comprehensive school
reform's success. The U.S. Department of Education requires schools to track
stakeholder support, parental participation, continuous staff development, and
performance monitoring for implementation. Additional data should include the
use of external technical assistance in implementing the program, the sources of
the technical assistance, and the effectiveness of that assistance (U.S.
Department of Education 1999).
HOW CAN SCHOOLS PLAN FOR A COMPREHENSIVE EVALUATION?
The key to this process is addressing important questions early in the program, so that
the evaluation process will reflect the needs, interests, issues, and resources
unique to the school. "Effective evaluations that produce useful information for
decisionmakers are not afterthoughts; they are integral to the program planning
and implementation processes from the outset," advise Cicchinelli and colleagues (1999).
Yap and colleagues (1999) suggest that schools ask themselves several
questions while planning their whole-school reform. What does the school want to
accomplish overall? What must be done to achieve these goals? How will they
gauge progress toward their goals? How will evidence be gathered to demonstrate
progress toward the school's objectives? How will the evaluation results be used?
Key stakeholders can gather as a group to agree on the answer to each
question, or they can answer the questions separately before meeting to tabulate
the results. It is important that any differences of opinion be expressed and
considered, advises Hassel (1998).
According to Cicchinelli and colleagues, the evaluation process should remain
flexible and realistic in scope. Standards for evaluation, such as the Program
Evaluation Standards established by the U.S. Joint Committee on Standards for
Educational Evaluation or the Guiding Principles for Evaluators developed by the
American Evaluation Association, may be helpful.
Involvement of key stakeholders in the evaluation process is crucial.
Stakeholders may include parents, community members, teachers, administrators,
boards of education, students, and others (Colorado Department of Education 1998).
While developing this preliminary list of evaluation tasks, administrators
should also estimate staff time and expertise needed, as well as other necessary
resources. These expenses should be weighed against the amount of available
resources to determine if schools will need to collaborate with other agencies
to obtain the needed staff time and/or resources (Cicchinelli and colleagues).
HOW SHOULD THE EVALUATION BE DESIGNED?
The manner in which
whole-school reform is implemented determines in large part its eventual
results. Therefore, the evaluation design must address these two components: how
well the program's implementation is working, and what concrete results it has
achieved (Yap and colleagues). Effective evaluation does not rely on a single
tool to collect data.
"No single survey or all-purpose data collection tool meets the school's
total information needs," cautions Policy Studies Associates (1998). Schools
should plan on combining standardized tests and surveys with qualitative methods
such as personal interviews and focus groups.
To assess program implementation, schools can review archival materials such
as student records, program plans, and implementation logs. Educators may also
want to conduct surveys or interviews with key stakeholders, as well as conduct
classroom observations to monitor changes in instructional practices
(Cicchinelli and colleagues).
To evaluate the concrete results of the program, schools often concentrate on
student achievement and performance. Comparability plays an important role here.
The Colorado Department of Education suggests including assessments that have a
common scoring system and allow for comparisons across schools, districts, and
states, such as the Colorado Student Assessment Program (CSAP), the National Assessment of Educational Progress, the
Iowa Test of Basic Skills, and others. Policy Studies Associates reports that
many schools link their goals to broader state goals that are measured
periodically by their state's assessment programs. In this way, schools can
examine results for several purposes.
Schools should not rely solely on standardized tests to evaluate student
achievement, the Colorado Department of Education warns. Schools should also
include classroom assessments that provide additional information, such as
writing samples, projects, experiments, speeches, and demonstrations.
WHAT ARE THE BARRIERS TO A SUCCESSFUL EVALUATION?
Several elements can derail an evaluation plan. The most common problem, according to
Yap and colleagues, is a lack of time: "Many teachers already feel overwhelmed,
and the thought of one more thing to do can be daunting." Districts that lack
resources for the evaluation may want to seek help from the program's developer
or funding source.
Key stakeholders in the process also may not have the skills or experience
needed to work cooperatively. Or, they may be able to work together,
but may not have any training in practical program evaluation, leading to a lack
of understanding of how to use data to guide decisions (Yap and colleagues).
Lack of knowledge is not the only barrier to a successful evaluation. Many
educators fear evaluation, thinking that the data will be used against their
schools to expose inadequacies and jeopardize funding.
HOW SHOULD THE EVALUATION DATA BE USED?
"Once the hard work
of gathering data is done, the really hard work begins," say Cicchinelli and
colleagues. Obviously, the evaluation findings must be reported. But to create a
useful report, evaluators must tailor the data to the audience, select the
appropriate media to report the results, and deliver the findings in a timely manner.
Cicchinelli advises administrators to format the evaluation results for ease
of use by all stakeholders. Principals might benefit from a computer-generated
summary of assessments disaggregated by student groups receiving different types
of instruction. School boards or state officials, however, might be more
interested in statistical progress reports with charts and graphs comparing
student performance data over the years.
Sharing results is not necessarily a one-time event. Yap and colleagues
suggest schools establish an ongoing process to communicate results of
evaluation to keep the school's community informed about the progress and
quality of the program. Policy Studies Associates recommends assessment at least
four times a year.
Educators should not forget the most important use of the data, however: to
improve the program. It may be helpful to break the data into categories such as
gender, ethnicity, student type, and grade level, so that schools can focus on
their strengths and weaknesses (Yap and colleagues). Based on the evaluation
results, educators can determine if changes in the program are necessary, if
results-based goals and benchmarks need to be refined, or if action strategies
need to be redesigned, replaced, or continued (Colorado Department of Education 1998).
RESOURCES
American Institutes for Research. An Educator's
Guide to Schoolwide Reform. Arlington, Virginia: Educational Research Service,
1999. 141 pages.
Anderson, John. "Is It Working Yet?" Education Week (June 2, 1999): 30, 33.
Cicchinelli, Louis F., and colleagues. "Evaluating Comprehensive School
Reform Initiatives." Noteworthy Perspectives on Comprehensive School Reform.
Aurora, Colorado: McREL, 1999. Pages 41-47. ED 433 588.
Clark, Gail, and Ceri Dean. Comprehensive School Reform Demonstration: A
Summary of LEA Roundtables. Aurora, Colorado: McREL, 2000.
Colorado Department of Education. Schoolwide Programs: Preparing for School
Reform. Denver: Colorado Department of Education, 1998. 195 pages.
Hassel, Bryan. Comprehensive Reform: Making Good Choices. Oak Brook,
Illinois: North Central Regional Educational Laboratory, 1998. 63 pages.
Keltner, Brent R. Funding Comprehensive School Reform. Rand Issue Paper.
Santa Monica, California: Rand Corporation, 1998. 9 pages. ED 424 669.
New American Schools. Working Toward Excellence: Results from Schools
Implementing New American Schools Designs. New American Schools, 1999. 55 pages.
ED 420 896. www.naschools.org/resource/earlyind/99Results.pdf.
Policy Studies Associates. Implementing Schoolwide Programs: Vol. 1, An Idea
Book on Planning. Washington D.C.: Policy Studies Associates, 1998. 220 pages.
ED 423 615. www.edrs.com/default.cfm
U.S. Department of Education. Guidance on the Comprehensive School Reform
Demonstration Program. Washington, D.C.: U.S. Department of Education, 1999.
Yap, Kim, and colleagues. Evaluating Whole-School Reform Efforts: A Guide For
District and School Staff. Portland, Oregon: Northwest Regional Educational
Laboratory, 1999. 178 pages.