Improving School Violence Prevention Programs through Meaningful Evaluation. ERIC Digest.

by Flannery, Daniel J.

Creating a school environment that is free of violence and drugs has become a public priority. Over time, the approach taken by schools to prevent violence has evolved from quick-fix interventions to social control strategies to sophisticated, multi-faceted, and long-term programs. The evolution occurred partly out of necessity: the historical approaches have not worked very well; an increase in student diversity, coupled with overcrowding, is exacerbating tension in schools; and school violence is escalating. 

There are now a great many different types of violence prevention programs. Some focus on working with individual children identified by teachers or peers as aggressive or at risk for school failure. Others combine a focus on individual and family risk by integrating school-based programs and work with parents and families, peers, or community members. Still other programs integrate an individual risk focus with attempts to change the school environment. Most strive both to increase student social competence and to reduce aggressive behavior. 

Many prevention programs are showing signs of success, although schools have frequently developed them without evidence of their potential; empirical data on effectiveness are lacking because collecting such information has not been considered a valid use of scarce resources. Now, in order to increase the probability of program success, schools are rethinking this position. Also, as communities struggle to support their schools with decreased budgets, the need for additional monies has grown. But funders will not provide resources for programs, violence prevention included, without quality evaluation data demonstrating their effectiveness and promise. 

Determining what type of program, or combination of program components, is best for a particular school requires an assessment of the school's circumstances, student body, and resources. Assessments must continue as the program operates so that changes can be made to account for new developments and to improve outcomes. Such evaluation data can then be used to support requests for funding the program's continuation. This digest examines the role of evaluation in understanding what works in violence prevention and offers some guidelines for conducting a basic evaluation of school-based violence prevention programs. 

NEED FOR EVALUATION

Most violence prevention efforts represent thoughtful responses to the escalation of fear, violence, and disorganization in the schools. Most are also offered in the absence of any evidence of their effectiveness (Kazdin, 1993). The lack of outcome effectiveness data is one of the major reasons why Congress has restricted funding for drug and violence prevention in schools to programs that have empirically demonstrated behavior change (Modzeleski, 1996). 

It is not that there is limited interest in determining the effectiveness of efforts to reduce school violence, but that there are often limited resources for doing so. A common observation from school administrators is that there is little justification for using scarce resources on evaluation when the funds could be spent on the provision of programs and services. How does one tell a parent that the fifth graders could not receive a classroom program because funds were needed for research? The answer, in part, is that schools will soon have no choice. In the face of consistently declining Federal support for safe school initiatives, schools will need to increase their appeals to alternative funding sources, such as businesses, families, and community foundations. These potential funders have begun to demand clear evidence that programs are effective, efficient, and cost beneficial. No longer will schools and other organizations receive "entitlement" money to implement programs at their discretion, regardless of whether there is evidence of the programs' effectiveness at their own site. Even the U.S. Department of Education has recently demanded objective outcome evaluation data for the allocation of Title IV Safe Schools money. 

There is, though, limited knowledge of what works best to reduce violence at school, and why, as well as limited energy to sustain long-term efforts to effect positive change. One way to gain that knowledge, and to implement strategies known to be effective, efficient, and cost beneficial, is to implement only violence prevention strategies that have been empirically validated through thorough evaluations of program effectiveness. To do this, it is necessary to understand the role and importance of evaluation research in reducing school violence. Evaluation can inform effective implementation of a program; enable a school to demonstrate the value of the program to the community, to parents, and to potential funders; and influence the formation and implementation of social policy, both locally and nationally. This is not to say that evaluation is easy, cheap, or a panacea. 

TYPES OF EVALUATION

In any intervention program, the three most basic questions asked are: (1) What are the program's results and what does it change? (2) What program qualities make it work or be effective? and (3) Is the program cost effective? Four basic types of evaluation can be integrated into the existing structure of most schools and programs to address these questions. They are needs assessment, outcome evaluation, process or monitoring evaluation, and cost-benefit analysis. 

NEEDS ASSESSMENT

A needs assessment (or formative evaluation) helps a school determine its needs regarding violence reduction and prevention. Many schools might skip this first type of evaluation, believing that knowing they need to do something to reduce violence is sufficient. However, asking several questions first might help a school develop a more effective long-term strategy. For example: 

WHAT IS THE NATURE AND PREVALENCE OF VIOLENCE AND VICTIMIZATION AT THE SCHOOL OR IN THE NEIGHBORHOOD?

Considering school violence as behavior occurring along a continuum from aggression to violence is important because limiting the focus to serious acts of violence (those resulting in suspensions or detentions) does not fully capture the nature and extent of school crime and victimization (Hanke, 1996). While people are disturbed by increasing rates of school-based homicides, these occurrences constitute a relatively small proportion of incidents at school compared to property crimes, acts of assault or extortion, and threats of physical harm.

WHAT IS THE IMPACT OF VIOLENCE ON CHILD ADJUSTMENT AND MENTAL HEALTH AND LEARNING?

Exposure to violence is not without consequence: 50 percent of children exposed to trauma under age 10 develop psychiatric problems later in life, including increased rates of anxiety and depression. Children exposed to chronic violence are also more likely to form disorganized attachments. 

WHAT ARE THE MAJOR RISK FACTORS FOR AGGRESSION AND VIOLENCE AMONG STUDENTS?

1. Complications during pregnancy and delivery (e.g., breech delivery, preeclampsia, oxygen deprivation due to long delivery duration) when accompanied by early maternal rejection; and a child temperament characterized by impulsivity, high activity levels, inflexibility, difficulty with transitions, and easy frustration and distraction (Brier, 1995). 

2. Limited intelligence, particularly verbal intelligence; low school achievement and lack of attachment to school; poor problem-solving and social skills; and a tendency to make cognitive misattributions and to have impaired social judgment (Moffitt, 1993; Lochman & Dodge, 1994). 

3. The early onset and stability of aggressive, antisocial behavior, beginning even at the kindergarten level (Loeber & Hay, 1994). 

4. Poor parenting, including maltreatment and abuse; neglect; rejection; frequent and harsh, but inconsistent and ineffective, punishment; parental criminal behavior; and living in a climate of hostility (Patterson & Yoerger, 1993). 

5. Exposure to violence, and victimization by violence, in school, community, or home (Widom, 1991; Singer, Anglin, Song, & Lunghofer, 1995). 

6. High exposure to violence in the media, which can cause acceptance and emulation of aggression; desensitization to violence and its consequences; and development of a "mean world syndrome," which increases fear of victimization and a felt need to protect oneself and mistrust others (Centerwall, 1992). 

Additional questions to be considered include: What are the school's costs for vandalism and discipline problems related to violence? What is the extent of gang activity at school? Answering these questions will help a school choose appropriate components for its safety plan: does the plan need to include the installation of metal detectors and surveillance cameras, does it need to focus on developing prosocial competence in the youngest students, or both? Obviously, a high school will have safety concerns different from those of an elementary or middle school, so the same safety plan will not be equally effective in all schools, in all contexts, and for all children. 

OUTCOME EVALUATION

The second type of evaluation is called outcome evaluation. It answers the question "what changed because of the intervention?" Did the program reduce the children's problem behavior, aggression, delinquency, or violence? Did the program increase student attendance and improve school grades? Did it result in fewer discipline visits to the principal's office? Did it result in increased social competence or improved social skills? All of these are appropriate outcome evaluation questions. Being clear about what the program is meant to address (and not address) is essential to measuring its effectiveness. Some popular programs may be effective in changing some problem behaviors but may not result in decreased student violence. For example, a substance abuse prevention program may do little to reduce victimization by violence or the perpetration of violence; likewise, teen pregnancy reduction is an important outcome, but it is not violence prevention. 

Certainly some of the factors that underlie most problem behaviors in children are shared targets of intervention strategies: improving problem-solving and conflict resolution skills, increasing attachment to school and success at school, improving communication and social skills, and so on. These are valuable targets of intervention for most students in most schools. If they are the focus of the violence prevention intervention, they must be clearly explicated. The reasons why these are the outcomes, and how they relate to reductions in aggressive behavior, conflict, or violence, must also be clearly stated. This requires a clear understanding of the risk factors the school is attempting to ameliorate or the protective factors it is trying to promote. Clearly defining program goals and desired outcomes will go a long way toward establishing relevant and effective outcome assessments of the program's success and will help to identify possible limitations. 

PROCESS EVALUATION

The third type of evaluation is a process evaluation. Process evaluation attempts to address the question "what works best about our program and why does it work?" Is program effectiveness related to the quality of teacher or staff training, the number of years an individual has been teaching, strong administrative support for the program, the scope of the program (i.e., school wide or confined to lessons in one classroom), or active parent involvement in program implementation and support? For example, Flannery and Torquati (1993), in an examination of an elementary school substance abuse prevention program, found that teachers believed parent involvement as volunteers in the classroom was the biggest factor in determining the program's success for students, more important than administrative support, the quality of teacher training, and even the teachers' own "buy in" to the program's importance and effectiveness. 

COST-BENEFIT ANALYSIS

The last type of basic evaluation is cost-benefit analysis. A cost-benefit evaluation answers the question "is the program cost effective?" It might include an assessment of how much the program costs to implement per student or school, or how much the program saves in other related costs (e.g., vandalism). One of the most intriguing and comprehensive cost-benefit evaluations was conducted recently by the RAND Corporation. Greenwood, Model, Rydell, and Chiesa (1996) examined the cost effectiveness of several crime prevention strategies involving early intervention in the lives of people at risk for pursuing a criminal career. Focusing on California, they contrasted the state's Three Strikes policy, which mandates extended sentences for repeat offenders, with four different approaches: (1) home visits by childcare professionals, beginning before birth and extending through the first two years of childhood, followed by four years of daycare; (2) parent training for families with aggressive or acting-out children; (3) four years of cash and other graduation incentives for disadvantaged high school students; and (4) monitoring and supervising high school youth who had already exhibited delinquent behavior. All of the examined programs, with the exception of home visits and daycare, were appreciably more cost effective at reducing serious crime than was the Three Strikes policy. Graduation incentives for disadvantaged youth proved the most cost-effective approach, averting nearly $260 million in losses from serious crime compared to about $60 million for the Three Strikes option. These findings have serious implications for policy makers who believe that increased incarceration time for juvenile offenders will systematically and over time reduce the youth crime rate. 
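
The arithmetic behind a basic cost-benefit comparison can be sketched in a few lines of code. The example below, written in Python with entirely invented program costs and incident counts (not figures from the RAND study), illustrates how a school might compare two candidate programs on cost per student, cost per incident averted, and net annual cost once savings from averted incidents are counted.

    # Hypothetical cost-benefit sketch for comparing two prevention programs.
    # All figures are invented for illustration; substitute a school's own data.
    programs = {
        "conflict_resolution_curriculum": {
            "annual_cost": 40_000,    # staff training, materials
            "students_served": 400,
            "incidents_before": 120,  # disciplinary incidents in the baseline year
            "incidents_after": 90,    # incidents in the program year
        },
        "after_school_mentoring": {
            "annual_cost": 65_000,
            "students_served": 150,
            "incidents_before": 80,
            "incidents_after": 50,
        },
    }

    AVG_COST_PER_INCIDENT = 500  # hypothetical: vandalism repair, staff time, etc.

    for name, p in programs.items():
        averted = p["incidents_before"] - p["incidents_after"]
        cost_per_student = p["annual_cost"] / p["students_served"]
        cost_per_averted = p["annual_cost"] / averted if averted > 0 else float("inf")
        net_cost = p["annual_cost"] - averted * AVG_COST_PER_INCIDENT
        print(f"{name}: ${cost_per_student:.0f} per student, "
              f"${cost_per_averted:.0f} per incident averted, "
              f"net annual cost ${net_cost:,.0f}")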

EVALUATION METHODS

There are many techniques that schools can use as part of an evaluation strategy. Many kinds of information are readily available to schools at low cost and effort. Potential sources of information include self-reports by students, teachers, parents, and principals. Student reports about violence and victimization will be increasingly difficult to gather, however, given the increased attention to the protection of human subjects, particularly minors, in behavioral and medical research; research may still be conducted on these important topics, but the days of large surveys with thousands of students may be past. 

Most schools also collect archival data as part of their everyday operations (attendance, grades, conduct ratings on report cards, disciplinary contacts, suspensions, weapons violations, visits to the nurse's office for treatment of injury, costs to repair vandalism and property destruction). Additional archival data that are not systematically recorded in most schools could also be collected, including visits to the principal's office for disciplinary action and observational ratings of aggressive behavior in the classroom, in the lunchroom, and on the playground. These two measures are among the most accurate predictors of which young children are at increased risk for subsequent delinquent behavior and arrest for criminal activity as adolescents (Walker, Colvin, & Ramsey, 1995). Schools may also partner with local police or sheriff's departments to gather aggregate data on community crime and the nature or types of contacts children from their school have with the police. Of course, police officers should report to schools only substantial incidents of problem behavior, not random stops or checks of youth that do not result in any official action. 
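
Much of this archival information can be pulled into a simple per-term summary with very little effort. The brief Python sketch below uses an invented disciplinary log (the record fields and values are placeholders, not a prescribed format) to tally incidents by grading period and by type, producing the kind of baseline table against which post-program periods could later be compared.

    # Tally archival disciplinary records by grading period and by incident type.
    # The records here are invented placeholders; in practice they would come
    # from the school's student information system or office logs.
    from collections import Counter

    incident_log = [
        {"term": "Fall", "type": "fight"},
        {"term": "Fall", "type": "threat"},
        {"term": "Fall", "type": "vandalism"},
        {"term": "Winter", "type": "fight"},
        {"term": "Winter", "type": "weapon"},
        {"term": "Spring", "type": "threat"},
        {"term": "Spring", "type": "fight"},
        {"term": "Spring", "type": "fight"},
    ]

    per_term = Counter(record["term"] for record in incident_log)
    per_type = Counter(record["type"] for record in incident_log)

    print("Incidents per grading period:", dict(per_term))
    print("Incidents by type:", dict(per_type))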

What should be the strategy for collecting this information on program effectiveness? There are three basic components to any evaluation that will make the results more readily interpretable and valid. The first is collection of outcome data before the intervention is implemented. This information provides the school with a baseline of student behavior, grades, attendance, etc., from which change can later be determined. Report cards from previous grading periods constitute one example. 

The second is assessment, whenever possible, of a comparison group of students (or classrooms or schools) not exposed to the intervention. A comparison group (preferably very similar to the students in the intervention with respect to gender, age, risk status, etc.) will allow a determination of whether and how the intervention is effective for children in the program as opposed to those not in the program. 

For example, assume a school identifies 50 third graders at risk for school failure and delinquency. It collects baseline information on these students and then exposes them to an intensive 25-week curriculum aimed at improving their problem-solving and social skills and their academic achievement, and reducing their aggressive behavior. The school then collects information on the students immediately after the curriculum is finished. It finds that, indeed, these students are better problem solvers and are less aggressive. Unfortunately, the design does not provide a clear answer to an essential question: how does the school know that the observed changes resulted from the curriculum? Could it be that the third graders, simply because they have matured over time, have better social skills and are less aggressive than they were 25 weeks earlier? 
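
A comparison group supplies the missing answer by estimating how much change would have occurred anyway. The sketch below, using invented pre- and post-program aggression ratings, computes the average change for the program group and for a similar comparison group; the program's estimated effect is the amount by which its students' change exceeds the change shown by students who simply matured over the same 25 weeks.

    # Hypothetical pre/post aggression ratings (e.g., teacher-rated; higher means
    # more aggressive) for a program group and a comparison group. All numbers
    # are invented for illustration.
    from statistics import mean

    program_pre  = [14, 12, 15, 11, 13, 16, 12, 14]
    program_post = [10,  9, 12,  9, 10, 13, 10, 11]

    comparison_pre  = [13, 15, 12, 14, 11, 13, 15, 12]
    comparison_post = [12, 14, 12, 13, 11, 12, 14, 12]

    def avg_change(pre, post):
        """Average post-minus-pre change across students (negative = improvement)."""
        return mean(after - before for before, after in zip(pre, post))

    program_change = avg_change(program_pre, program_post)
    comparison_change = avg_change(comparison_pre, comparison_post)

    # The difference between the two changes is the part of the improvement that
    # maturation alone cannot explain, since the comparison group matured too.
    estimated_effect = program_change - comparison_change
    print(f"Program group change:     {program_change:.2f}")
    print(f"Comparison group change:  {comparison_change:.2f}")
    print(f"Estimated program effect: {estimated_effect:.2f}")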

Conversely, some programs that have been evaluated did not show significant reductions in aggressive behavior among some children, and the initial belief about them was that the programs were ineffective. It was not until recently that researchers began to demonstrate that many children experience increases in aggressive behavior over time. Thus, even if a program does not result in an appreciable decline in aggression, it may have a "blunting" effect in that participants do not experience the expected increase in aggression (Tolan, Guerra, & Kendall, 1995). Assessments of comparison groups of students not participating in violence prevention programs contributed to this realization. 

The third component of an effective evaluation design is random assignment of students to treatment or control groups. This is the most difficult to achieve, practically and ethically, and may not be possible in most "real world" situations. Random assignment of equally deserving, similarly assessed children provides the strongest evidence that it was the treatment that caused any observed differences in a child's outcome. One strategy that has been used successfully is random assignment of students (or classrooms or schools) to treatment or control groups at the beginning of an evaluation, with eventual provision of the same treatment to the controls. This is easier to do if the unit of analysis is the school or classroom rather than the individual. If a whole school serves as a comparison group, then all students in the school still receive the same services and attention that they always have. If the control is an individual student, it is harder to justify withholding treatment, especially when the treatment may address a very serious, immediate, and potentially dangerous problem like violence. 
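
When the classroom, rather than the individual student, is the unit of assignment, the mechanics are straightforward. The short sketch below (with placeholder classroom labels) randomly splits a set of classrooms into a treatment group and a wait-list control group that receives the program the following year, using a fixed random seed so the assignment can be documented and reproduced.

    # Randomly assign classrooms (not individual students) to a treatment group
    # and a wait-list control group that receives the program the following year.
    # Classroom labels and the seed are placeholders for illustration.
    import random

    classrooms = ["3A", "3B", "3C", "3D", "4A", "4B", "4C", "4D"]

    rng = random.Random(20240901)  # fixed seed documents the assignment
    shuffled = classrooms[:]
    rng.shuffle(shuffled)

    half = len(shuffled) // 2
    treatment = sorted(shuffled[:half])
    wait_list_control = sorted(shuffled[half:])

    print("Treatment classrooms this year:", treatment)
    print("Wait-list control (treated next year):", wait_list_control)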

CONCLUSION

The point of evaluating a violence prevention program is to assess and improve its effectiveness. The goal, of course, is a school that has high expectations for student achievement and behavior and fosters their realization, promotes respect for diversity, and is safe. While different prevention needs require different interventions, those shown to be successful: 

*are instituted early, and are developmentally appropriate, comprehensive, and long-term; 

*develop student social competence; 

*improve the school climate through good organization and increased student, staff, and parent attachment and participation; 

*take into account the impact of violence and victimization by violence; 

*integrate violence-related issues into teacher training; and 

*have a comprehensive evaluation program. 

REFERENCES

Brier, N. (1995). Predicting antisocial behavior in youngsters displaying poor academic achievement: A review of risk factors. Developmental and Behavioral Pediatrics, 16, 271-276. 

Centerwall, B.S. (1992). Television and violence. Journal of the American Medical Association, 267, 3059-3063. 

Eron, L.D., Gentry, J.H., & Schlegel, P. (Eds.). (1993). Reason to hope: A psychosocial perspective on violence and youth. Washington, DC: American Psychological Association. 

Flannery, D., & Torquati, J. (1993). An elementary school substance abuse prevention program: Teacher and administrator perspectives. Journal of Drug Education, 23(4), 387-397. (EJ 477 163) 

Greenwood, P.W., Model, K.E., Rydell, P., & Chiesa, J. (1996). Diverting children from a life of crime: Measuring costs and benefits. Santa Monica, CA: RAND Corporation. 

Hanke, P.J. (1996). Putting school crime into perspective: Self-reported school victimizations of high school seniors. Journal of Criminal Justice, 24, 207-225. 

Kazdin, A.E. (1993). Interventions for aggressive and antisocial children. In L.D. Eron, J.H. Gentry, & P. Schlegel (Eds.), Reason to hope: A psychosocial perspective on violence and youth. Washington, DC: American Psychological Association. 

Lochman, J.E., & Dodge, K.A. (1994). Social-cognitive processes of severely violent, moderately aggressive, and nonaggressive boys. Journal of Consulting and Clinical Psychology, 62, 366-374. (EJ 484 615) 

Loeber, R., & Hay, D.F. (1994). Developmental approaches to aggression and conduct problems. In M. Rutter & D.F. Hay (Eds.), Development through life: A handbook for clinicians (pp. 288-516). Boston: Blackwell Scientific. 

Modzeleski, W. (1996). Creating safe schools: Roles and challenges, a Federal perspective. Education and Urban Society, 28(4), 412-423. (EJ 531 784) 

Moffitt, T.E. (1993). Life-course-persistent and adolescent-limited antisocial behavior: A developmental taxonomy. Psychological Review, 100, 674-701. 

Osofsky, J.D. (1997). Children in a violent society. New York: Guilford Press. 

Patterson, G.R., & Yoerger, K. (1993). Developmental models for delinquent behavior. In S. Hodgins (Ed.), Mental disorders and crime. Newbury Park, CA: Sage. 

Singer, M., Anglin, T., Song, L., & Lunghofer, L. (1995). Adolescents' exposure to violence and associated symptoms of psychological trauma. Journal of the American Medical Association, 273, 477-482. 

Tolan, P.H., Guerra, N.G., & Kendall, P.C. (1995). A developmental perspective on antisocial behavior in children and adolescents: Toward a unified risk and intervention framework. Journal of Consulting and Clinical Psychology, 63, 579-584. 

Walker, H.M., Colvin, G., & Ramsey, E. (1995). Antisocial behavior in school: Strategies and best practices. Pacific Grove, CA: Brooks/Cole. (ED 389 133) 

Widom, C.S. (1991). Does violence beget violence? A critical examination of the literature. Psychological Bulletin, 109, 130. 
