With increased student achievement in science being a national goal (AMERICA 2000, 1991), do we know what students are learning? Given the emerging national standards in science education [National Committee on Science Education Standards and Assessment (NCSESA), 1993], how will we determine whether students measure up to the standards? Radical changes are underway for school science curricula [Rutherford & Ahlgren, 1990; The National Science Teachers Association (NSTA), 1992], but are complementary changes in assessment in progress? States are developing student assessments based on science frameworks or guides (Blank & Engler, 1992; Davis & Armstrong, 1991), but do we know how to assess student performance in all the domains of interest and concern? Assessment of student performance is emerging as a crucial ingredient in the recipe for ongoing improvement of school science. As programmatic change is occurring, there is a need to align student assessment practices with curricular aims, instructional practices, and performance standards. In short, "What we teach must be valued; what we test is what must be taught" (Iris Carl as quoted in McKinney, 1993).
In this digest, the focus is on assessment in the service of instruction, for helping students, teachers, and parents monitor learning. Assessment in this context must be unobtrusive and tailored to measure specific learning outcomes, not necessarily norm-referenced and generalizable across schools, states, and countries (Haertel, 1991). What are the issues and methods of assessment in the context of classroom instruction?
Among the new labels being used today is performance-based assessment. Though definitions vary, it is clear that performance-based assessment does not include multiple-choice testing or related paper-and-pencil approaches. According to Jorgensen (1993), "performance-based assessment requires that the student complete, demonstrate, or perform the actual behavior of interest. There is a minimal degree of inference involved." Baron (1991) has provided a list of characteristics of performance assessment tasks, with a notable blending of content with process, and of major concepts with specific problems. As Kober (1993) has noted, "in this type of assessment, students may work together or separately, using the equipment, materials, and procedures they would use in good, hands-on science instruction."
As an example of how alternative assessment strategies can enable students to show what they know in a variety of knowledge domains, consider the approach taken in one urban school (Dana, Lorsbach, Hook, & Briscoe, 1991). Concept mapping and journal writing techniques are used to document conceptual change among students, and student presentation and interview techniques allow learners to communicate their understanding in ways that rely less on reading and writing skills. For additional samples of techniques being used, see the Appendix of Kulm and Malcom (1991).
Among the promising alternative assessment techniques are the use of scoring rubrics to monitor skill development and the use of portfolios to assemble evidence of skill attainment. Scoring rubrics can be used to clarify for both students and teachers how valued skills are being measured (Nott, Reeve, & Reeve, 1992). Portfolios documenting student accomplishments can take a variety of forms, with student products, collected data, or other evidence of performance being used as information for self, peer, or teacher evaluation (Collins, 1992).
It should be acknowledged that there are drawbacks to performance assessments. Staff development will be required, performance assessments take more time than conventional methods, standardization is difficult, and the results may not be generalizable from one context to another. These problems reinforce the importance of practitioners, assessment specialists, and assessment "consumers" being clear on the purposes of specific assessment activities. There is no one approach to assessment that will best serve all functions, knowledge domains, and learners.
Hein, G. (Ed.). (1990). The assessment of hands-on elementary science programs. Grand Forks, ND: Center for Teaching and Learning, University of North Dakota. ED 327 379 (This document examines a wide variety of issues related to assessment, including a section on new approaches to science assessment.)
Herman, J. L., Aschbacher, P. R., & Winters, L. (1992). A practical guide to alternative assessment. Alexandria, VA: Association for Supervision and Curriculum Development. (This resource addresses several key assessment issues and provides concrete guidelines for linking assessment and instruction, and for assessment design.)
Kulm, G., & Malcom, S. M. (Eds.). (1991). Science assessment in the service of reform. Washington, DC: American Association for the Advancement of Science. (This is a compilation of contributed chapters that treat policy issues and the relationships between assessment and curriculum reform, and between assessment and instruction. Several practical examples from the field are also included.) ED 342 652
Meng, E., & Doran, R. L. (1993). Improving instruction and learning through evaluation: Elementary school science. Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education. (This is a practical guide for teachers and anyone else involved in assessing student performance in elementary school science. Separate sections focus on assessing science process skills, concepts, and problem-solving.) ED 359 066
Raizen, S., & others. (1990). Assessment in science: The middle years. Andover, MA: The NETWORK, Inc. ED 347 045 (This document is part of a set of reports that focus on science and mathematics education for young adolescents. Practical guidelines for assessment are provided for policymakers and practitioners on the basis of research findings and recommendations gleaned from the literature. New directions in assessment are discussed.)
Science Scope, 15(6). (This March 1992 issue includes a special supplement on alternative assessment methods in science, with sections on performance-based assessment, the use of portfolios, group assessments, concept mapping, and scoring rubrics.)
Semple, B. M. (1992). Performance assessment: An international experiment. Princeton, NJ: Educational Testing Service. (This document describes an attempt to supplement the pencil-and-paper approach of the International Assessment of Educational Progress in mathematics and science with a performance component. Both the results of the experiment and full descriptions of the performance tasks are provided, including tasks that focus on problem solving, the nature of science, and physical science concepts.)
White, R., & Gunstone, R. (1992). Probing understanding. New York: Falmer Press. (A practical but theoretically sound guide to alternative approaches to assessing understanding through application of nine types of PROBES: concept mapping, prediction-observation-explanation, interviews about instances and events, interviews about concepts, drawings, fortune lines, relational diagrams, word associations, and question production.)
Baron, J. B. (1990). How science is tested and taught in elementary school science classrooms: A study of classroom observations and interviews. Paper presented at the annual meeting of the American Educational Research Association, Boston, April.
Baron, J. B. (1991). Performance assessment: Blurring the edges of assessment, curriculum, and instruction. In G. Kulm & S. M. Malcom, (Eds.), Science assessment in the service of reform (pp. 247-266). Washington, DC: American Association for the Advancement of Science. ED 342 652
Blank, R. K., & Engler, P. (1992). Has science and mathematics education improved since "A nation at risk"? Washington, DC: Council of Chief State School Officers.
Collins, A. (1992). Portfolios: Questions for design. Science Scope, 15(6), 25-27.
Dana, T. M., Lorsbach, A. W., Hook, K., & Briscoe, C. (1991). Students showing what they know: A look at alternative assessments. In G. Kulm & S. M. Malcom, (Eds.), Science assessment in the service of reform (pp. 331-337). Washington, DC: American Association for the Advancement of Science.
Davis, A., & Armstrong, J. (1991). State initiatives in assessing science education. In G. Kulm & S. M. Malcom (Eds.), Science assessment in the service of reform (pp. 127-147). Washington, DC: American Association for the Advancement of Science. ED 342 652
Haertel, E. H. (1991). Form and function in assessing science education. In G. Kulm & S. M. Malcom (Eds.), Science assessment in the service of reform (pp. 233-245). Washington, DC: American Association for the Advancement of Science. ED 342 652
Herman, J. L., & Golan, S. (1992). Effects of standardized testing on teachers and learning--Another look (CSE Technical Report 334). Los Angeles: National Center for Research on Evaluation, Standards and Student Testing, University of California. ED 341 738
Jorgensen, M. (1993). Assessing habits of mind: Performance-based assessment in science and mathematics. Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education.
Kober, N. (1993). What we know about science teaching and learning. Washington, DC: Council for Educational Development and Research.
McKinney, K. (1993). Improving math and science teaching. Washington, DC: Office of Educational Research and Improvement, U.S. Department of Education. SE 053 492
Meng, E., & Doran, R. L. (1993). Improving instruction and learning through evaluation: Elementary school science. Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education. ED 359 066
National Committee on Science Education Standards and Assessment. (1993). National science education standards: An enhanced sampler. Washington, DC: National Research Council. SE 053 554
National Science Teachers Association. (1992). The content core. Volume 1 in Scope, Sequence and Coordination of Secondary School Science. Washington, DC: Author.
Nott, L., Reeve, C., & Reeve, R. (1992). Scoring rubrics: An assessment option. Science Scope, 15(6), 44-45.
Raizen, S., & Kaser, J. (1989). Assessing science learning in elementary school: Why? What? and How? Phi Delta Kappan, 70(9).
Rutherford, F. J., & Ahlgren, A. (1990). Science for all Americans. New York: Oxford University Press.
Tippins, D. J., & Dana, N. F. (1992). Culturally relevant alternative assessment. Science Scope, 15(6), 50-53.