ERIC Identifier: ED330064
Publication Date: 1991-00-00 
Author: Peterson, David 
Source: ERIC Clearinghouse on Educational Management, Eugene, OR. 

Evaluating Principals. ERIC Digest Series Number 60. 

Principals are often in limbo. They work in schools, but they are not teachers. They are educational managers, but they often have little contact with other managers. 

Given principals' ambiguous status, it is not surprising that they often receive only perfunctory evaluations. Yet effective principals are essential to effective schools. Their development, and a district's health, depend on regular and thoughtful assessment. 

WHY SHOULD PRINCIPALS BE EVALUATED?

Principal evaluation brings many benefits. Kathy Weiss (1989) notes that it encourages communication within organizations, facilitates mutual goal setting by principals and superintendents, sensitizes evaluators to principals' needs, and motivates principals to improve. Some 97 percent of the administrators in her study agreed that the process of evaluation had encouraged communication between principals and superintendents, and 88 percent agreed that the principals had improved as a result. 

In general, principal evaluation is of two broad types: formative and summative. Formative evaluation is relatively informal and is geared toward helping principals improve. Summative evaluation is more structured. Its goal is to precisely evaluate performance, and it is often used to facilitate decisions over compensation or tenure. 

According to Stephen Peters and Naida Tushnet Bagenstos (1988), in the late 1980s about three-quarters of states had mandated or planned to mandate the practice. Fifteen years earlier, only a few states required a principal evaluation program. 

WHAT ARE SOME COMMON PROBLEMS OF PRINCIPAL EVALUATION?

In spite of its growing popularity, principal evaluation often receives short shrift, due, in part, to confusion and misperception about the purpose of evaluation and the formation and application of evaluation criteria. 

A survey by Daniel Duke and Richard Stiggins (1985) found that nearly three-quarters of supervisors and principals are either completely or reasonably satisfied with their principal evaluation systems. Yet the same survey showed that superintendents often perceive the evaluations as being more thorough than the principals do, and that only a handful of districts have clearly defined performance levels. Many schools also rely on standardized checklist ratings that are not tailored to a particular school's needs (McCurdy 1983). 

Considerable confusion often exists about the purpose of principal evaluation (Peters and Bagenstos 1988). 

In addition, confusion over evaluation criteria vitiates many evaluation projects. William Harrison and Kent Peterson (1986) note that sampling of performance is often spotty and that principals frequently are unsure how assessment criteria are weighted. Only 58 percent of the principals they surveyed said that the expectations for their performance had been made clear prior to each year's evaluation. Indeed, the same survey revealed that superintendents define principals largely as instructional leaders, whereas principals tend to believe that their superintendents perceive them largely as administrators. 

Such studies underscore the need for more effective principal evaluation, but assessment has its costs. Duke and Stiggins (1985) point out that superintendents often feel unable to use such evaluations, since they lack the money to reward good principals or the power to terminate poor ones. Ronald Lindahl (1986) found that principals who have enjoyed high reputations often resist a more systematic program of evaluation, apparently fearing that they have little to gain and much to lose by the process. Ambitious assessment programs can also cost time and money. Hence, Peters and Bagenstos (1988) suggest that school districts define precisely what they hope to gain by principal evaluation and that they resist the urge to overstep that definition. 

WHAT ARE THE FIRST STEPS IN DESIGNING A PRINCIPAL EVALUATION SYSTEM?

Principal evaluation works best when it is not simply imposed from above. Richard Manatt (1989) suggests starting with a stakeholders' meeting of no more than twenty-five people. Jerry Valentine (1987) proposes a committee of about a dozen people, half of them principals. That committee assesses other principal evaluation programs with the aid of a consultant, drafts a plan and submits it to the principals for amendment, and then sends the revised plan to the school board. An inservice session can familiarize principals with the evaluation process and defuse their anxiety over it. 

Principal evaluation does not exist in a vacuum. It relates to the statements of purpose, long-range plans, goals, and job descriptions that districts and schools may have already formulated. The urban school district described by Lindahl (1986) created precise job descriptions that "became the format for the summative evaluation instrument." The district incorporated individualized or formative goals by requiring each school to develop annual campus improvement plans and by requiring each principal to establish annual personal growth plans. Evaluation, then, is linked to both organizational and personal goals. Principals should be intimately involved in the goal-setting process, and they should certainly be fully informed of how the various goals will be weighted and assessed. This knowledge encourages principals to focus on the aspects of their job deemed most important. George Redfern (1986) describes a school district that bases its assessment of principals entirely on how well they attain mutually established goals. 

Although they cannot substitute for on-the-job evaluation, assessment centers can offer principals intensive observation and feedback. The National Association of Secondary School Principals sponsors several such centers (Anderson 1989). 

WHAT ARE THE TOOLS USED IN PRINCIPAL EVALUATION?

Valentine (1987) identifies a broad range of data that can be collected to evaluate principals: attendance and test records, committee reports, newsletters, clippings, and time logs. He particularly urges supervisors to shadow principals, taking extensive notes on their actions and conversations. Data from these notes can then be transferred to the principal's evaluation form. Surveys of teachers, support staff, students, and parents can provide quantifiable evidence for key aspects of the principal's job. 

BellSouth Corporation has developed a particularly thorough survey instrument that teachers and others use to assess principals' behavior in eighty-nine different areas (Anderson 1989). The program includes an extensive followup session to facilitate interpretation of the results. Unsolicited comments from a broad range of sources can also play a large role in documenting performance. 

Many people, then, can participate in principal evaluation. Those who are supervised by the principal should, of course, enjoy anonymity. The urban district Lindahl (1986) studied uses a mixture of survey questionnaires, self-evaluation, and evaluation teams. The teams consist of three people: the principal's supervisor, the director of secondary or primary education, and a peer selected by the principal. Team evaluations tend to be more balanced than solitary ones, though principals are often wary of peer evaluation (Anderson 1989, Duke and Stiggins 1985). 

The evaluation material can be used in several ways. Summative assessments are concerned with pay and tenure, but they can also serve as an instrument for remedial professional development. Formative and summative evaluations alike can be part of an ongoing process, not just an annual one. Anderson (1989), for example, advocates prompt postobservation feedback conferences. Valentine (1987) points out that serious deficiencies should be identified at these conferences and growth plans constructed for remedying them. Hence a principal's year-end evaluation should contain few surprises. 

WHAT ARE SOME MODELS FOR PRINCIPAL EVALUATION?

Ideal principal evaluation systems are cooperative and flexible. Principals in the Pitt County Schools of Greenville, North Carolina, work with their evaluators to establish individualized annual performance plans and goals (Redfern 1986). Those plans are accompanied by the state's assessment instrument, a standardized list of thirty-eight items that describe the principals' major responsibilities. 

In Oregon, North Clackamas School District uses two assessment systems for principals (Anderson 1989). The professional accountability program is for principals who have yet to complete three years in the district. Their evaluation instrument has eight job functions, each with several performance standards. The supervisor conducts at least three observations a year and provides narrative reports of each one. Principal-teacher conferences are also taped and reviewed. Those principals who do not meet performance standards are placed in a remedial cycle. 

North Clackamas's more experienced principals are in its professional development evaluation program. They establish personal goals for two to three years, and the district provides tuition, release time, and travel allowances to assist them. Comments one participant: "It takes you off the treadmill of being evaluated every year" (Anderson 1989). One principal designed a curricular mapping system to bring the district's testing and instructional programs into alignment. These principals receive summative evaluations every four years. Cash incentives of over $1,000 are available for those who meet their professional goals. 

North Clackamas School District uses formative, annual evaluations for its junior principals and employs surveys and frequent observations to measure performance in preselected areas. The formative evaluation for senior principals is less structured and encourages autonomous projects that will benefit both the principal and the district. 

RESOURCES

Anderson, Mark E. Evaluating Principals: Strategies To Assess Their Performance. Eugene, Oregon: Oregon School Study Council, April 1989. 53 pages. ED 306 672. 

Duke, Daniel L., and Richard J. Stiggins. "Evaluating the Performance of Principals: A Descriptive Study." Educational Administration Quarterly 21, 4 (Fall 1985): 71-98. EJ 329 615. 

Harrison, William C., and Kent D. Peterson. "Pitfalls in the Evaluation of Principals." Urban Review 18, 4 (1986): 221-35. EJ 356 378. 

Lindahl, Ronald A. "Implementing a New Evaluation System for Principals: An Experience in Planned Change." Planning & Changing 17, 4 (Winter 1986): 224-32. EJ 359 271. 

Manatt, Richard P. "Principal Evaluation Is Largely Wrongheaded and Ineffective." Executive Educator 11, 11 (November 1989): 22-23. EJ 398 899. 

McCurdy, Jack. The Role of the Principal in Effective Schools: Problems & Solutions. AASA Critical Issues Report. Arlington, Virginia: American Association of School Administrators, 1983. 97 pages. ED 254 900. 

Peters, Stephen, and Naida Tushnet Bagenstos. "State-Mandated Principal Evaluation: A Report on Current Practice." Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, Louisiana, April 5-9, 1988. 30 pages. ED 292 889. 

Redfern, George B. "Techniques of Evaluation of Principals and Assistant Principals: Four Case Studies." NASSP Bulletin 70, 487 (February 1986): 66-74. EJ 333 035. 

Valentine, Jerry W. "Performance/Outcome Based Principal Evaluation." Paper presented at the Annual Convention of the American Association of School Administrators, New Orleans, Louisiana, February 20-23, 1987. 43 pages. ED 281 317. 

Valentine, Jerry W., and Michael L. Bowman. "Audit of Principal Effectiveness: A Method for Self-Improvement." NASSP Bulletin 72, 508 (May 1988): 18-26. EJ 371 968. 

Weiss, Kathy. "Evaluation of Elementary and Secondary School Principals." Paper presented at the Annual Meeting of the American Association of School Administrators, Orlando, Florida, March 3-6, 1989. 11 pages. ED 303 904. 

