ERIC Identifier: ED412297
Publication Date: 1996-12-00
Author: Flaxman, Erwin - Orr, Margaret
Source: ERIC Clearinghouse on Urban Education, New York, N.Y.

Determining the Effectiveness of Youth Programs. ERIC/CUE Digest No. 118.

Despite the increasing number of programs for youth now operating (such as school-to-work, mentoring, employment, tutoring, and recreational programs), very little is known about their quality or impact. Some evaluations, anecdotal evidence, and the impressions of the participants do, however, suggest that the programs are successful. Everyone believes that only good can come from them: at worst the programs will not accomplish all that they could, but youth will still be better for the experience, even if it has not markedly affected their lives. But if these programs are to achieve their highest potential, they need to be nurtured, managed well, and evaluated. Yet program officials, policy makers, and the community have only limited information about how best to implement these programs and about which program features are the most essential and beneficial to students. Nor do they know how much the programs help students develop the career and academic orientation necessary to stay and succeed in school, go on to postsecondary and higher education, and be prepared for economically sound and personally gratifying work.

Given the choice to spend money to evaluate a youth program or to provide more services, most program administrators would not choose the evaluation. They would rather use the funds to reach more youth, recruit and train more staff, or provide more experiences for the youth. This is unfortunate. Ongoing assessment, feedback, and evaluation are as important to the health of a program as are its design and management. Ideally, they should be made part of the design and operation of a program at the outset, not as an afterthought under pressure from a funding agency or the community or to showcase the "success" of a program. The findings of an evaluation should be reviewed periodically and be the basis for modifying the program. This can be done through process and impact evaluations of the program.

This digest examines features of both process and impact evaluations in order to help officials take simple actions to assess the programs for which they are responsible. With proper planning, evaluation can be part of a program without having to compete for attention and resources with the services the programs are trying to provide.

PROCESS AND IMPACT EVALUATIONS

Process evaluations consider the components of a program and provide information about how they operate; impact evaluations consider the outcomes and explain whether the program is yielding the intended results. These types of evaluation serve different purposes, but they necessarily depend on each other. The impact of a program can be adequately determined only by ascertaining whether and how well it has been implemented. In turn, learning the outcomes of a program sheds light on its operation, particularly if the intended results were not attained. And both types of evaluation can be used to refine the design and management of the program to better achieve the desired outcomes.

THE PROCESS EVALUATION

A process evaluation examines the design, implementation, and context of a program; it also requires its own data collection, analysis, and reporting activities.

ADHERENCE TO DESIGN. In a process evaluation, the program staff must first determine whether the program design has been altered during its implementation. In truth, no design can be followed faithfully, because local conditions over a period of time affect how it is implemented. Some changes have little or no effect on the program model and leave it basically intact; some minor changes may even be beneficial. But some adaptations or changes can so severely alter a design during implementation that it no longer resembles the original. Sometimes the program design itself can throw the program off course and thus necessitate changes. For example, some features of the design may be too complicated to be easily implemented, or the goals of the program may be too ambitious or vague, leaving the staff to misinterpret the intent. Sometimes the staff members, unprepared or not free to implement the design of a program, have to improvise the implementation.

Any evaluation of the adherence to the program design requires determining the core components of the model, the extent to which each component is actually implemented, the degree to which the clients and service providers participate in all or the appropriate aspects of the program, and the consistency of any changes in the design with the purposes and objectives of the program.
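
As a rough illustration of this kind of adherence check, the minimal sketch below (in Python) tallies how many core components of a program model were actually implemented and flags those whose changes no longer serve the program's objectives. The components and ratings are hypothetical, standing in for whatever a real program design specifies.

```python
# A minimal sketch of an adherence (fidelity) checklist; the components and
# ratings below are hypothetical examples, not part of any real program model.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    implemented: bool            # was the component delivered at all?
    consistent_with_goals: bool  # do any changes still serve the program's objectives?

# Hypothetical core components of a mentoring program design.
core_components = [
    Component("weekly one-on-one mentoring sessions", True, True),
    Component("monthly parent contact", False, True),
    Component("quarterly academic progress review", True, False),
]

implemented = [c for c in core_components if c.implemented]
fidelity = len(implemented) / len(core_components)

print(f"Components implemented: {len(implemented)} of {len(core_components)} ({fidelity:.0%})")
for c in core_components:
    if not c.implemented or not c.consistent_with_goals:
        print(f"  Review needed: {c.name}")
```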

IMPLEMENTATION. The way a program is implemented can either increase the probability that the design will work or reduce its chances for success. Data for documenting the implementation of a program design should indicate the roles and extent of involvement of all the program participants, the training of the staff, and the time and resources allotted to the program. It is critical that all aspects of the implementation be documented, from the point the model is put into practice, through the timing and nature of any changes, to the completion of the program.

CONTEXT. Program officials and others sometimes do not realize that programs operate within a social and political context, which affects the outcomes. First, the characteristics of the program participants and of the school and community can affect the implementation and outcomes of a program. For example, some youth are more amenable to a particular kind of program than others, and some parents are more supportive than others. Second, the preparation of the staff, the adequacy of financial resources, conflicting or competing programs, and the social and political climate all affect a program. Further, program staff often must meet demands to implement several new programs at the same time. Finally, lack of funds may cause premature or compromised implementation of a program. In examining the effect of context on implementation, however, it is important to distinguish between significant and negligible influences.

DATA COLLECTION. Data should be collected at key points throughout the life of a program. At the outset program officials and staff should design simple recordkeeping procedures and follow them regularly. Similarly, they should create a management information system to document the number and types of participants in the program, and track their retention and completion rates and their experiences in the program, including any changes in their use of services over time. These data can be easily obtained from interviews, focus groups, observations, journals and logs, and administrative records.
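
The recordkeeping itself can be very modest. The sketch below assumes a handful of hypothetical participant records (the field names, dates, and the definition of retention are illustrative only) and shows how retention and completion rates could be computed from such a management information system.

```python
# A minimal sketch of simple participant recordkeeping; all records, field
# names, and dates are hypothetical.
from datetime import date

participants = [
    {"id": 1, "enrolled": date(1996, 9, 3),  "exited": None,              "completed": False},
    {"id": 2, "enrolled": date(1996, 9, 3),  "exited": date(1996, 11, 1), "completed": False},
    {"id": 3, "enrolled": date(1996, 9, 10), "exited": date(1997, 5, 30), "completed": True},
]

enrolled = len(participants)
# Count a participant as retained if still active or if they completed the program.
retained = sum(1 for p in participants if p["exited"] is None or p["completed"])
completed = sum(1 for p in participants if p["completed"])

print(f"Enrolled: {enrolled}")
print(f"Retention rate: {retained / enrolled:.0%}")
print(f"Completion rate: {completed / enrolled:.0%}")
```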

ANALYSIS AND REPORTING. Program officials need to establish periodic benchmarks and guidelines for collecting evaluation information and examining the results. Without these benchmarks the collected data are meaningless for analyzing problems and altering the course of a program. Ideally, this process should start by creating a summary of the development of the program and its initial implementation. Program administrators need to examine the critical issues governing the implementation of the program, like adherence to the program model (both ideal and necessary features) and the types of implementation problems encountered, including their causes and solutions. Reflecting on the program as a whole is critical because it allows program staff to find more inclusive solutions rather than making incremental decisions as problems arise, or having to defuse crises. It also allows them to intervene early before problems become endemic and difficult to solve.

If time and resources allow, it is also important to compare the experiences of the program with those of similar programs. Such a comparison is a useful way to find solutions to problems common to all the programs, and it helps to determine whether problems are a function of the model itself or are local to the particular school or program setting.

In short, a process evaluation is a necessary management tool, and one that should be used more than once. It can be used to determine whether a program is being properly implemented or is drifting from its original intent, a hazard in any long-term activity.

THE IMPACT EVALUATION

Although it is essential for program planners and officials to understand how well a program has been implemented, it is also important to demonstrate whether, how, and to what degree the program has affected the participants. The simplest evaluation question is: Did the program meet its objectives and yield the appropriate outcomes? Such an evaluation is critical to determine whether the program should continue in its current form. If it can be shown that the program has been effective, it will be easier to receive continuing financial, administrative, and community support. But to obtain the data to support claims about the effectiveness of a program, program officials must devise an appropriate evaluation design and administer it properly.

OUTCOMES AND THEIR MEASUREMENT. Ultimately, the programs need to improve the lives of the participating youth, whether by reducing their risky behavior, improving their academic achievement and progress, or helping them obtain and retain career jobs in growth industries. Measures of these outcomes must be tailored to the specific activities of the program or the program component being evaluated. Measures include the following: pre-program and post-program student grades, performance on standardized achievement tests, and attendance and promotion rates; students' self-reported confidence in their abilities, knowledge, skills, and success in school and later in life; students' self-reported and tested knowledge about education, jobs, careers, and the world of work; and teacher evaluations of students' academic performance and behavior. In addition, the impact of the program on students' subsequent academic performance and other behaviors can be determined by an examination of their records.
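
As a minimal illustration of pre-program and post-program measurement, the sketch below uses hypothetical grade-point averages and attendance rates (the figures and field names are invented for the example) to compute the average change on two such measures.

```python
# A minimal sketch of pre-/post-program outcome measures; all figures and
# field names are hypothetical.
from statistics import mean

records = [
    {"student": "A", "gpa_pre": 2.1, "gpa_post": 2.6, "attend_pre": 0.81, "attend_post": 0.90},
    {"student": "B", "gpa_pre": 2.8, "gpa_post": 2.9, "attend_pre": 0.92, "attend_post": 0.95},
    {"student": "C", "gpa_pre": 1.9, "gpa_post": 2.4, "attend_pre": 0.74, "attend_post": 0.85},
]

gpa_change = mean(r["gpa_post"] - r["gpa_pre"] for r in records)
attend_change = mean(r["attend_post"] - r["attend_pre"] for r in records)

print(f"Average GPA change: {gpa_change:+.2f}")
print(f"Average attendance change: {attend_change:+.1%}")
```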

COMPARISONS. The significance of the outcomes for the participants and the contribution of the program model to these gains can only be determined by a comparison. This can be done in two ways: the first is a within-program comparison; the second is a comparison with groups that did not participate in the program.

The first comparison determines the degree of the program's impact on selected outcomes among participating youth with different characteristics. The second type determines whether the program or other factors are responsible for the outcomes. This comparison also reveals whether differences in the outcomes are attributable to differences in the program participants rather than in the program itself. Clearly, demographic differences in the groups being compared (age, gender, and socioeconomic status) and differences in their prior academic records (attendance, grade-point average, accumulated credits, and history of disciplinary referrals) influence program outcomes. It is very difficult to attribute differences in program outcomes to the impact of the program when the youth being compared differ too greatly from one another.

Ideally, an evaluation should compare groups identical to each other, but this is not usually possible. A larger number of youth than usual would have to ask to participate or be recruited in order to have enough students to make the comparison. Then some of the youth would have to be assigned to the program, some to an alternative, and some to no program at all. Such a scenario is unlikely because program officials would have to withhold the program from some youth, or a lottery system would have to be instituted to determine who can participate in the program.

It is possible, however, to find alternative ways of comparing youth in the program with others. Program planners can compare youth in the program with profiles of the academic and other characteristics and performance of a representative sample of students that are contained in district, state, and national survey data. These data are readily available, often in the popular media.
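
A comparison of this kind can be as simple as setting program averages beside a published profile. The sketch below assumes hypothetical participant grade-point averages and an invented district figure standing in for survey data; it is descriptive only and does not control for differences between the groups.

```python
# A minimal sketch of comparing program participants with a published district
# profile; all figures are hypothetical.
from statistics import mean

participant_gpas = [2.6, 2.9, 2.4, 3.1, 2.7]

# Assumed figure standing in for a district or state survey profile.
district_mean_gpa = 2.5

program_mean_gpa = mean(participant_gpas)
difference = program_mean_gpa - district_mean_gpa

print(f"Program mean GPA: {program_mean_gpa:.2f}")
print(f"District mean GPA: {district_mean_gpa:.2f}")
print(f"Difference: {difference:+.2f} (descriptive only; the groups may differ "
      "in age, prior record, and other characteristics)")
```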

DATA COLLECTION. Program officials need to plan ahead and obtain data at the outset of a program, if they are to evaluate its impact. Enrollment applications, placement information, and initial interviews contain information about students' attitudes and knowledge before they enter the program. During the operation of the program a management information system can document students' participation, progress, and accomplishments. To obtain additional information, the youth can also be observed in various activities; they can be interviewed individually, in pairs, and in groups; and they can keep individual logs. Finally, the program staff can keep extensive case histories on a small sample of participants.

Interviews, logs, and observations provide descriptive information, but programs also need standardized information about the participants' attitudes, knowledge, plans, and achievements. This can best be collected from questionnaires and other instruments, and from school records. Program officials need not develop these questionnaires. Nationally developed instruments that contain items relevant to many kinds of programs are readily available. Follow-up questionnaires at six months, one year, and two years, using the same survey instruments, are also useful for demonstrating the sustaining effects of the program and the participants' progress.

ANALYSIS AND REPORTING. At a minimum, program planners need to conduct two kinds of analysis: one that examines whether the program model produced the anticipated outcomes, and another that investigates how much the magnitude and intensity of the intervention contributed to those outcomes. A third kind of analysis demonstrates the differential impact of different kinds of interventions, and a fourth could present the program outcomes for subgroups of participants, such as girls.
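
To make these kinds of analysis concrete, the sketch below works through them on a few hypothetical participant records (the field names, hours of service, and outcome changes are invented): the overall outcome, the outcome by intensity of the intervention, and the outcome for one subgroup.

```python
# A minimal sketch of the kinds of analysis described above; all records,
# field names, and thresholds are hypothetical.
from statistics import mean
from collections import defaultdict

records = [
    {"gender": "F", "hours_of_service": 40, "gpa_change": +0.5},
    {"gender": "M", "hours_of_service": 15, "gpa_change": +0.1},
    {"gender": "F", "hours_of_service": 60, "gpa_change": +0.7},
    {"gender": "M", "hours_of_service": 55, "gpa_change": +0.4},
]

# 1. Did the program produce the anticipated outcome overall?
print(f"Overall mean GPA change: {mean(r['gpa_change'] for r in records):+.2f}")

# 2. Did the intensity of the intervention matter? (40 hours is an assumed cutoff.)
high = [r["gpa_change"] for r in records if r["hours_of_service"] >= 40]
low = [r["gpa_change"] for r in records if r["hours_of_service"] < 40]
print(f"High-intensity mean change: {mean(high):+.2f}; low-intensity: {mean(low):+.2f}")

# 3. Outcomes for subgroups of participants.
by_group = defaultdict(list)
for r in records:
    by_group[r["gender"]].append(r["gpa_change"])
for group, changes in sorted(by_group.items()):
    print(f"Mean change for group {group}: {mean(changes):+.2f}")
```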

Analysis of the data about program outcomes is not enough, however. In presenting the results of the analysis, the staff needs to consider possible explanations for the outcomes (or the lack of them). In the real world of youth programs, staff and youth drop out, conditions require that the design of the program be altered, and local funds and other support come and go. Moreover, the results should be reported and disseminated in a form useful to other program planners. Usually the public report need only contain a synopsis of the program characteristics and findings; those needing more specific or other information can pursue their requests individually.

MANAGEMENT OF THE EVALUATION. Although an evaluation is essential to carrying out a successful program, it should never be a burden for the staff or participants. It should be managed to provide the knowledge needed to conduct the program successfully and tailored to fit into the program's management. Choosing an outsider to conduct the evaluation is not always necessary, although an outsider can provide the unbiased point of view needed to demonstrate the effectiveness of the program to a wider audience than the program planners and staff can. Assigning the responsibility for the evaluation to the program staff, however, has several advantages: it makes the evaluation part of the program management rather than a separate activity, and it encourages the staff to use the evaluation process to solve any design, implementation, or operational problems.

TAKING ACTION

Most youth programs are small, local, service-driven, and improvised, and these are not the best conditions to carry out a program or conduct an evaluation. Nevertheless, program staff can take some simple actions to obtain data about the effectiveness of the program that will help them improve the program, report the results to administrators and funders, and inform other professionals and the public about the quality and performance of the program.

First, at the outset, program planners and staff must commit to determining the effectiveness of the program and its impact on participants, and must assign staff and time for the evaluation. They must include evaluation at every point of the implementation of the program: recruiting and selecting the youth, determining and carrying out program activities, staffing, and so on. This is no small undertaking, but without it there can be no assessment of the program, because planners and staff will have no design to evaluate and no data with which to interpret the impact of the program.

Second, to understand the impact of a program, planners must know what the "program" is. Youth programs are particularly vulnerable to poorly articulated goals and undefined activities because the mystique surrounding them encourages everyone to believe that, since only good can come from these programs, nothing needs to be planned. Too often both the plans and the goals for the programs are global and never specified in ways that can be translated into discrete activities whose impact can be evaluated.

Third, program staff must identify data sources and begin to collect information on the implementation of the program and on the characteristics of the participants, before and after the program. This is not difficult to do, despite the fears of many program staff, because they already have some of the data from student records, and they can collect the rest from interviews, observations, logs, and already existing questionnaires adapted for local use. It is important, however, to be clear about why the data are being collected: for example, to evaluate the effectiveness of a strategy, or to determine whether the program has the right components.

Fourth, these data must be analyzed and interpreted. The analysis, again, can be very simple and still be effective and useful. Quantitative data show absolute changes in achievement test scores, attendance patterns, referrals for discipline problems, or knowledge about behaviors that avoid risk. Qualitative data, usually culled from interviews and observations, indicate relative changes in the behavior of program participants. They are more difficult to analyze because the program staff must develop checklists and coding schemes to ascertain whether the participants have changed as a result of the program, but this, too, can be done if the staff determines the desirable program outcomes at the outset. The findings of the evaluation must then be interpreted, for data are valueless without an explanation of their meaning. The staff will likely have rich and complex explanations for the findings, based on members' experience with all aspects of the program. Again, taking the time to interpret the data thoughtfully is critical, particularly where there are no discernible outcomes for the participants or where the program was never properly implemented.
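
As a small illustration of such a coding scheme, the sketch below assumes three hypothetical outcome codes and a few coded interview or observation records, and simply tallies how often each desired outcome appears. The codes and records are invented for the example.

```python
# A minimal sketch of tallying qualitative data against a simple coding scheme;
# the codes and coded records are hypothetical.
from collections import Counter

# Desired outcomes defined at the outset, expressed as codes.
coding_scheme = ["career_awareness", "school_engagement", "adult_support"]

# Codes assigned by staff to interview and observation notes.
coded_observations = [
    ["career_awareness", "adult_support"],
    ["school_engagement"],
    ["career_awareness", "school_engagement", "adult_support"],
]

counts = Counter(code for obs in coded_observations for code in obs)
total = len(coded_observations)

for code in coding_scheme:
    print(f"{code}: mentioned in {counts[code]} of {total} coded records "
          f"({counts[code] / total:.0%})")
```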

Finally, after many evaluations of youth programs, researchers are now realizing that no matter how good the policy for developing the program, or its design, some programs achieve excellent results, some have no outcomes, and some even have a negative impact on the participants. This is true even when the clients and program services are similar. It is possible, of course, that data about the program's effectiveness were not collected well, or that the findings were not interpreted properly, but it also may be that the programs were not managed effectively.

Youth programs in particular often lack stable and mature management structures because they are add-ons or afterthoughts. Very often these programs do not have the clearly articulated mission and identity that determine the kinds of services they provide and that inform potential participants and the public about what they can expect from the program. A clear mission is crucial, though, to the successful targeting and recruitment of both the participants and the right staff to manage the program. Those interested in understanding the effectiveness or performance of a youth program need to understand that if it is not well managed, the intervention will not be effective and may not even be implemented. The only result the evaluation or performance analysis shows, then, is that the program never really happened, despite appearances.

SOURCES

The discussion of process and impact evaluations is a summary of material appearing in Evaluating School-to-Work Transition (1995), by Margaret Terry Orr, available from the National Institute for Work and Learning, Academy for Educational Development, Washington, D.C. The discussion of the management of youth programs briefly summarizes the discussion in Managing Youth Development Programs for At-Risk Students: Lessons from Research and Practical Experience (1992), by Andrew Hahn, available from the ERIC Clearinghouse on Urban Education, Box 40, Teachers College, New York, NY.

