
ERIC Identifier: ED315427
Publication Date: 1989-12-00
Author: Ayers, Jerry B.
Source: ERIC Clearinghouse on Tests, Measurement, and Evaluation, Washington, DC; American Institutes for Research, Washington, DC.

Evaluating Workshops and Institutes. ERIC Digest.

Workshops and institutes are widely used to upgrade and renew the skills of teachers and other professionals. If you organize any program, such as a workshop or an institute, you should also make sure that it is evaluated. Funding agencies and institutional sponsors want answers to questions like these:

o Did the program have an impact on the participants?

o What impact did it have?

o How did it achieve the impact that it had?

If you include an evaluation component in your plans for the workshop or institute, you will be able to answer questions like these in a formal, systematic manner.


A well-planned and well-conducted evaluation can provide useful information to funding agencies, sponsoring institutions, instructors, and participants. Evaluation data can serve

o to show the real worth of a program,

o to show where to improve future workshops and institutes,

o to justify funds expended, and

o as a basis for rational decisions about future funding or sponsorship.


Evaluating an instructional program, such as a workshop or institute, means collecting, organizing, analyzing, and reporting data about a number of features of the instructional program and its impact on the participants. Evaluating a workshop or institute can help you decide how you are doing or how you did in at least four areas:

1. Planning: deciding on the domain (topics, overall content) of the workshop or institute, the major goals, and the more detailed objectives.

2. Programming (or setting up the logistics): deciding on the procedures for running the workshop or institute, the faculty, facilities, budget, and other resources needed.

3. Conducting the workshop or institute: deciding on the activities that make up the workshop or institute.

4. Changing the workshop or institute: deciding when and why to continue, evaluate, change, or end the activities that make up the workshop or institute.


Effective evaluation will not just happen on its own. It must be carefully planned. A system for evaluating the workshop or institute must be put in place before the workshop or institute begins. If you want useful data, you must allocate adequate resources (people, time, money) to plan and carry out the evaluation.

The key to planning a useful evaluation is the same as the key to planning a successful workshop or institute. You must specify

o what you want the program to achieve, and

o what you expect participants to be able to do as a result of the instruction.

Note that you must, therefore, plan your evaluation on two levels--evaluating both the overall effectiveness of the program and the progress that each participant makes towards the goals that you specify.


Many people have an image of evaluation as a questionnaire to fill out at the end of a workshop or institute. An effective evaluation is much more than that. You should plan for evaluation

o during the program (called formative evaluation),

o at the end of the program or at the end of a specific part of the program (called summative evaluation), and

o at some point or points after the program (called follow-up evaluation).

You can use formative evaluation techniques to change the program while it is being developed and conducted. You can use summative evaluation techniques to assess how well participants and the program have met the goals at the end of the instruction time. You can use follow-up to assess the lasting effects of the workshop or institute.

With each of these three types of evaluation (formative, summative, and follow-up), you can focus on either or both of the two levels that need to be evaluated (the participants and the program). In the next three sections, we share some ideas that you can use for each of these three types of evaluation. In the first two, we focus first on the participants and then on the program.

Formative Evaluation


Participants. Have the program staff develop a "goal card" for each participant. Put on each card a statement of the goals of the workshop or institute and the behavioral objectives that will show when a participant has achieved each goal. When a participant has mastered the objectives and achieved the goal, check off that goal on that participant's goal card.

With this technique, both participants and faculty know at all times how a participant is doing. The cards can provide positive feedback to show that participants are making progress. They can highlight areas where more work is needed or where faculty should change the method of instruction. The cards can be used throughout the workshop or institute, providing an ongoing formative evaluation.
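The goal-card technique above is essentially a per-participant checklist of goals with a running tally of progress. A minimal sketch of such a tracker follows; the class name, goal statements, and participant labels are all illustrative, not part of the original digest.

```python
# A minimal "goal card" tracker sketch; names and goals are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class GoalCard:
    participant: str
    # Map each goal statement to whether the participant has achieved it yet.
    goals: dict = field(default_factory=dict)

    def check_off(self, goal: str) -> None:
        """Mark a goal as achieved once its objectives are mastered."""
        self.goals[goal] = True

    def progress(self) -> float:
        """Fraction of goals achieved so far (0.0 if no goals defined)."""
        if not self.goals:
            return 0.0
        return sum(self.goals.values()) / len(self.goals)


workshop_goals = ["Write behavioral objectives", "Design a pre/post test"]
card = GoalCard("Participant A", {g: False for g in workshop_goals})
card.check_off("Write behavioral objectives")
print(f"{card.participant}: {card.progress():.0%} of goals achieved")
# prints: Participant A: 50% of goals achieved
```

Because each card carries its own progress figure, faculty can scan the cards at any point during the program, which is what makes this an ongoing formative measure rather than an end-of-program one.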

Program. You can get formative evaluation information on the program from both participants and faculty. In both cases, you can use a structured questionnaire.

Ask participants to fill out the questionnaire about halfway through the program, or at regular intervals for longer programs.

On the questionnaire, you can ask participants to rate the effectiveness of the instruction, the faculty, the logistics, and the social interaction of the workshop or institute.

Discuss the results of the questionnaire with participants and faculty at a group meeting. You can use the feedback on the questionnaire to change the program in midstream if improvements are needed. Ask participants to fill out the questionnaire again near the end of the workshop or institute to see if the changes that you made helped. Again discuss the results in a group meeting of participants and faculty.
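One simple way to see whether midstream changes helped is to compare the mean rating for each questionnaire area between the midway and end-of-program administrations. The sketch below assumes a 1-to-5 rating scale and invented sample data; the area names and numbers are illustrative only.

```python
# Hypothetical 1-5 ratings from the same questionnaire, given midway and at the end.
mid_ratings = {"instruction": [3, 4, 2, 3], "logistics": [2, 2, 3, 2]}
end_ratings = {"instruction": [4, 4, 3, 4], "logistics": [4, 3, 4, 3]}


def mean(xs):
    """Average of a list of ratings."""
    return sum(xs) / len(xs)


# Report the shift in each area to see whether midstream changes helped.
for area in mid_ratings:
    before, after = mean(mid_ratings[area]), mean(end_ratings[area])
    print(f"{area}: {before:.2f} -> {after:.2f} ({after - before:+.2f})")
# prints:
# instruction: 3.00 -> 3.75 (+0.75)
# logistics: 2.25 -> 3.50 (+1.25)
```

A table like this gives the group meeting something concrete to discuss: areas whose ratings did not move may need a different fix than the one tried midstream.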

Each instructor who works with the program should also fill out an evaluation questionnaire. The instructors' questionnaire should focus on the effectiveness of the workshop or institute and the teaching methods being used.

If you give this questionnaire midway through the workshop or institute you can change the program right then if needed. Otherwise, you can use this formative evaluation for future workshops and institutes. In a longer program, you might ask instructors to fill out the questionnaire more than once.

Summative Evaluation


Participants. Summative evaluation usually means measuring what participants know or can do at the end of a given period of instruction. To measure participants' knowledge or skills, you can use tests that the faculty develop, or you can use standardized tests, if they are appropriate. To measure gains in knowledge or skills, you must test the participants both at the beginning of the workshop or institute and at the end of the instruction. Summative evaluation can also mean measuring changes in participants' attitudes. To measure these affective changes, you can use a semantic differential. Remember that to measure change in attitudes, participants must complete the semantic differential both at the beginning and at the end of the workshop or institute.
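The pre/post testing described above reduces to simple arithmetic: subtract each participant's pretest score from the posttest score, then average the gains. The sketch below uses invented scores on an assumed 0-100 scale; participant labels are hypothetical.

```python
# Hypothetical pre- and post-test scores (0-100 scale) for each participant.
pre = {"P1": 55, "P2": 62, "P3": 48}
post = {"P1": 78, "P2": 70, "P3": 66}

# Gain = posttest minus pretest, computed per participant.
gains = {p: post[p] - pre[p] for p in pre}
mean_gain = sum(gains.values()) / len(gains)

print("Individual gains:", gains)
print(f"Mean gain: {mean_gain:.1f} points")
# prints:
# Individual gains: {'P1': 23, 'P2': 8, 'P3': 18}
# Mean gain: 16.3 points
```

The same subtraction applies to attitude measures such as a semantic differential, provided the instrument is administered both at the start and at the end of the program.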

Program. To measure the overall effectiveness of the program, you can have participants rate how well the workshop or institute met each program goal.

Follow-Up Evaluation


Follow-up studies can take several forms. You can send questionnaires to the participants to find out how much they are using what they learned in the workshop or institute and how they would rate it after being away from it for a while. You can visit participants on the job for observations or interviews either to measure participants' behaviors or to discuss the program's effectiveness.

Evaluating participants and the program sometime after the workshop or institute is probably the best measure of a program's real impact. Follow-up evaluation is both the most reliable and the most costly of the three types that we have discussed. To decide on the cost-benefit trade-off, you have to weigh how easy or difficult it will be to do a follow-up evaluation, how much useful information you will get, and how you will use the information. Remember that you can use follow-up feedback not only to judge a past workshop or institute, but also to improve it for use later and to show further accountability to your sponsoring agency.

References


Alvir, Howard P. Albany, NY: New York State Department of Education, Bureau of Occupational Education Research, 1976. ERIC ED 120 224.

Ayers, Jerry B. A Plan for the Evaluation of Teacher Training Workshops and Institutes. (Report 87-2-3). Cookeville, TN: Tennessee Technological University, Center for Teacher Education Evaluation, September 1987. ERIC ED 291 795.

Ayers, Jerry B. and Mary F. Berney (Eds.). A Practical Guide to Teacher Education Evaluation. Boston: Kluwer Academic Publishers, 1989.

Hardrick, Juanita, et al. Workshop Helper: Tips on How to Run a Successful Workshop. Boston: Humphrey Occupational Resource Center, 1983. ERIC ED 235 395.

Stufflebeam, Daniel L. "Toward a Science of Educational Evaluation," Educational Technology, 1968, 9, 5-12.

