ERIC Identifier: ED324766 
Publication Date: 1990-00-00 
Author: Beswick, Richard 
Source: ERIC Clearinghouse on Educational Management, Eugene, OR. 

Evaluating Educational Programs. ERIC Digest Series Number EA 54. 

Program evaluation has long been a useful technical tool for determining whether programs are meeting their stated goals. Specialists submit reports that help administrators decide on changes in curriculum content or direction.

In recent years, program evaluators have taken on an expanded role because their expertise can be of value at every stage of a program's development. This Digest introduces the reader to the scope of evaluation and the changing roles evaluators are asked to play in school districts.

What Is the Scope of Program Evaluation?

Every area of school curriculum is designed with certain goals in mind. A program evaluation measures the outcome of a program based on its student-attainment goals, level of implementation, and external factors such as budgetary constraints and community support. 

Bruce Wayne Tuckman (1985) describes three categories of instructional program evaluation. "Formative evaluation" is an internal function that feeds results back into the program to improve an existing educational unit; this kind of evaluation is used frequently by teachers and school administrators to compare outcomes with goals. Attainment can be measured and procedures modified over time.

"Summative evaluation" exists for the purpose of demonstration and documentation. Various ways of achieving similar goals can be compared. Summative evaluations help school districts analyze their unique characteristics and choose the program that will best achieve their pedagogical goals. An example is the evaluation of the adaptability and success in the work force of students who have emerged from a program. 

"Ex post facto evaluation" is a study over time. It attempts to determine if new programs, launched without readily predictable results, are achieving the desired goals. Here the data generated by continuous analysis are compared over time and, when available, compared with data of similar pilot programs. Both longitudinal (comparison of results over time) and cross-sectional (comparison of different student groups) results give evaluators the data to recommend improvement or termination. 

How Is Student Attainment Measured?

The first and most important issue in evaluation--how well students achieve mastery of new facts and skills--can often be measured by standardized tests. Reliability and validity are the litmus tests of these standardized evaluation tools. Reliability is the consistency of results, which is measured in several ways: by comparing test results over time (giving the same test at intervals), against grade-level expectations, and against national percentile rankings. Validity is the degree to which a test actually measures what it claims to measure, that is, students' mastery of the intended subject matter.
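
To make the notion of reliability concrete, the short sketch below (a hypothetical Python illustration, not part of the original Digest) expresses test-retest reliability as the correlation between two administrations of the same test to the same students; the scores are invented.

    # Hypothetical sketch: test-retest reliability as the Pearson correlation
    # between two administrations of the same test to the same ten students.
    # The scores below are invented for illustration only.
    from statistics import correlation  # available in Python 3.10+

    first_sitting = [72, 85, 90, 64, 78, 88, 59, 95, 70, 81]
    second_sitting = [74, 83, 92, 61, 80, 86, 62, 94, 73, 79]

    r = correlation(first_sitting, second_sitting)
    print(f"Test-retest reliability (Pearson r): {r:.2f}")  # values near 1.0 indicate consistent results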

However, standardized testing involves a plethora of statistical uncertainties that have led some program evaluators to adopt other techniques to measure student attainment. Several alternative testing methods are being used: (1) standardized interviews allow students' responses to be compared and summarized; (2) direct tests (sometimes verbal) such as reading and math demonstrations enable teachers to gauge strengths and weaknesses and determine competency beyond mere right and wrong answers; and (3) students' notes, art work, and other material can be inspected for evidence of mastery. 

Edward F. DeRoche (1987) thinks that relying on an array of achievement, literacy, and minimal competency tests overemphasizes cognitive-achievement factors while disregarding affective-aesthetic development. He suggests using a program evaluation profile that reveals less tangible values such as: (1) program description that evaluates the nature of the community and the cultural/occupational background of parents; (2) program objectives that would measure performance in American history, for example, by involvement in school political activity or community service; (3) program content that ranges from knowledge of the facts to facility with placing information in larger contexts; and (4) processes that measure listening, questioning, summarizing, solving, and creating skills, as well as social skills such as tolerance, respect, and fairness to others. It remains unclear whether such "performance-based" assessments can be usefully compared across wide-ranging student populations.

What Role Do Citizen Judgments Play?

The role of citizen judgments in program evaluation was the focus of four studies conducted by the Northwest Regional Educational Laboratory in Portland, Oregon. Nick L. Smith (1983) notes the growing pressure for citizens and their representatives (school boards) to participate in school planning and review activities. The expectation, rooted in the American tradition of local control of education, is that increased parental participation on boards developing new educational philosophies and innovative curricula will make school district programs more responsive to local ideological, economic, and cultural values.

Smith concluded that citizen judgments must be used judiciously to avoid bias, but that such judgments can be predictive of community responsiveness and receptivity to future collaboration. Program evaluators have paid more attention to political factors in recent years as evaluation has become a stronger force in program design. Hence, attention to public sentiment needs to be a high priority.

How Do Administrators Use Program Evaluations?

For principals and superintendents, the purpose of program evaluation is to provide information that helps them make decisions about programs. In general, principals feel that the benefits of evaluations are minimal, either because evaluations fail to measure the program components that matter most or because principals' own proximity to the everyday realities of the educational process gives them what they feel is a better basis for understanding needs and implementing change. Superintendents tend to be more positive about the value of program evaluation; they rate most highly evaluations that report deficiencies and discuss possible solutions, followed by personal meetings with evaluation personnel (Jean A. King and Bruce Thompson 1983).

In small schools, the missing element in evaluations seems to be the attempt to make such studies systematic, purposive, cyclical, comprehensive, and well-communicated (James R. Sanders 1988). Sanders suggests that a Program Review Committee (PRC), composed of the superintendent, principal, grade level chairperson, and an educational specialist, be established. Each year the committee should conduct a review of one or two programs, so that each program receives careful scrutiny once every five years. 

What New Roles Are Evaluators Playing?

According to Jody L. Fitzpatrick (1988), the job of the evaluator is expanding from technical roles to political and advisory roles. In innovative programs, defined as those still in a research and development phase, evaluators help identify goals and develop a strategy for accomplishing these goals. 

Another new role for the evaluator is translating policy questions developed by school boards and legislators into the more precise questions of program evaluation. In this role, the evaluator helps fashion new and innovative programs with features that are readily measurable. Once pilot programs are begun, the evaluator then has the opportunity to determine how fully the program was implemented before evaluating its effectiveness. According to Fitzpatrick, evaluation questions imply certain design decisions. Besides content, these questions can help determine the parameters of cost, time, and the availability of professional personnel. 

The program manager can monitor an innovative program through the program evaluator's oral briefings and written reports. To be effective, this communication should be ongoing, not limited to a final report at the end of the year; continuous contact also makes the reporting of evaluation findings to state-level policy makers more sensitive and precise. Thus, an evaluator who works as a program partner can contribute at every stage of program development, integrating differing levels of understanding and accommodating shifts in accountability.

Resources

DeRoche, Edward F. An Administrator's Guide for Evaluating Programs and Personnel: An Effective Schools Approach. Newton, MA: Allyn and Bacon, Inc., 1987. 319 pages. ED 283 242. 

Fitzpatrick, Jody L. "Roles of the Evaluator in Innovative Programs: A Formative Evaluation." Evaluation Review 12,4 (August 1988):449-61. EJ 381 144. 

Hansen, Joe B., and Walter E. Hathaway. "Setting the Evaluation Agenda: The Policy-Practice Cycle." Paper presented at AERA, New Orleans, LA, April 5-9, 1988. 42 pages. ED 293 862. 

King, Jean A., and Bruce Thompson. "How Principals, Superintendents View Program Evaluation." NASSP Bulletin 67,459 (January 1983):46-52. EJ 274 300.

Lazarus, Mitchell. Evaluating Educational Programs. Arlington, VA: American Association of School Administrators, 1982. 79 pages. ED 266 414. 

Sanders, James R. "Approaching Evaluation in Small Schools." ERIC Digest Series. Las Cruces, NM: ERIC Clearinghouse on Rural Education and Small Schools, 1988. 13 pages. ED 296 816. 

Smith, Nick L. "Citizen Involvement in Evaluation: Empirical Studies." Studies in Educational Evaluation 9,1 (1983):105-17. EJ 287 582. 

Tuckman, Bruce Wayne. Evaluating Instructional Programs. 2nd ed. Rockleigh, NJ: Allyn and Bacon, Inc., 1985. 292 pages. ED 261 015. 

