ERIC Identifier: ED297303
Publication Date: 1988-00-00
Author: Kress, Roy
Source: ERIC Clearinghouse on Reading and Communication Skills, Bloomington, IN.
Some Caveats When Applying Two Trends in Diagnosis: Remedial
Reading. ERIC Digest Number 6.
Among the trends that have emerged in recent years to help diagnose the
remedial reader are some which--applied with caution--may be of reasonable value
to the clinician and the teacher. One of these trends has been the promotion of
informal assessments, and an accompanying plethora of commercial informal
reading inventories (IRIs). These instruments are designed to replace any that
might be made by the teachers and clinicians who use them, and thus they should
be examined carefully in terms of how well they serve teaching and clinical
purposes.
CUSTOMIZING IRIs TO MINIMIZE THEIR LIMITATIONS
Klesius and Homan (1985) responded to the emerging prominence of these
instruments by suggesting ways that their reliability and validity could be
improved by the teachers and clinicians using them. They recommended tape
recording the student reading and his or her responses to questions so that they
can be reviewed. In this way, all miscues can be identified and responses to
comprehension questions can be carefully considered. Klesius and Homan
recommended that items which could be answered without reading the passage be
eliminated, that possible appropriate answers one's students give--but which are
not listed in the inventory's directions--be added, and that questions which
appear to be worded too awkwardly for the child being tested to grasp be
reworded.
Klesius and Homan advised that only overall comprehension scores be used and
that subskill scores based on just a few items should not be analyzed or used.
They would place more emphasis on comprehension, however, than on miscue
analysis and recommended watching for signs of frustration, no matter how well a
student performs on the inventory.
It is highly impractical to expect IRIs or "standard reading
inventories"--whether recently developed or yet to come--to answer all the many
criticisms of reading tests, as Henk (1987) seems to think they can. But many
IRI instruments now published do seem quite limited. Some assess only oral
reading and miscue analysis, while the more comprehensive ones measure oral and
silent reading comprehension and word recognition, both in isolation and in
context.
Only those IRIs accompanying basals tend to reflect the original concept of
the IRI, which assesses a child's reading behavior in the materials actually
used in his or her classroom instructional program. None provides the
opportunity to observe how the reader goes about comprehending the information
presented or how special textbook features, such as the table of contents, the
glossary or index, footnotes, pictorial material and graphs, a pronunciation
guide, etc. are used.
The skills learned by the teacher in choosing the selections for an IRI and
in constructing and revising the questions to be used are lost when published
IRIs are used instead of teacher-designed instruments. The experience of
constructing an IRI, which should be a part of preservice and inservice
programs, trains teachers and clinicians alike to be more accurate observers of
reading behavior.
Several studies reported in the ERIC database express concern about the
inconsistent results yielded by published IRIs when they are compared to each
other (Newcomer, 1985) or to standardized instruments such as the Durrell
Analysis of Reading Difficulty (Nolen and Lam, 1981).
USING IRIs TO SELECT INSTRUCTIONAL MATERIALS
IRIs are frequently used to place readers in materials of appropriate
difficulty, and thus readability issues are relevant to the use of the
assessments. Some studies report that acting on the results of an IRI will lead
to placement in reading materials that are significantly less difficult than
standardized tests would recommend. To some reading
specialists, it is harmful to place children in unnecessarily low reading groups
(Eldredge and Butterfield, 1984). Powell (1982) describes a method that responds
to this concern. Teaching and diagnosis begin together with a lesson that
develops motivation, background, vocabulary assistance, and purpose-setting for
a particular text. Then the student reads the text aloud and the teacher records
miscues for analysis. This procedure operates as a kind of IRI that identifies
what Powell calls "the emergent reading level"--what the student can read with
such instructional support.
Cadenhead (1987) suggests that gearing instruction to "reading levels" is
relying on a myth that thwarts the challenge that more advanced material can
evoke in children. Doing so, he contends, eliminates a "reasonable balance
between success and challenge for the learner." While many of his arguments are
quite valid for the achieving reader, they are inappropriate for the child who
is a remedial reader and has experienced repeated doses of failure with printed
material. Many experienced teachers and clinicians are aware of the need to
follow the policy of identifying materials that will ensure success when the
remedial reader attempts to process text (e.g., Forell, 1985).
Some published IRIs include materials and strategies built into the
diagnostic procedure, and these lead the teacher or clinician to use them with a
problem reader before the test results can yield the inventory's
specific recommendations for remediation. Some of these varied approaches are
based on a contention that children will learn more readily when instruction is
geared to modal preferences they may have. This seemingly logical assumption is
recurring in the literature, but it appears to be as far from being
substantiated as it was in 1972, when Robinson demonstrated that instructional
emphases matching modal preferences do not appear to improve learning.
RECOGNIZING THE LIMITATIONS OF COMPUTERIZED DIAGNOSIS
Another trend in reading diagnosis may limit the sensitivity of a
clinician's or teacher's analysis of individual student needs. Accompanying many
published diagnostic instruments are computer software programs that eliminate
the need for the test administrator to examine the data closely. The computer can
thus be used to analyze a student's performance and to produce several printout
pages of the objective results, interpretations of them, and recommendations
based on them--a service that must of necessity rest on some arbitrarily
selected standards of performance--if not on a norming procedure. Colbourn
(1982) describes an early protocol of such a program developed by comparing
diagnostic reports written by both humans and machines.
Even at its best, such a computer analysis cannot match the essential
benefits of an IRI--its ability to individualize the diagnosis of a reader. It
should be obvious that computer scoring limits the opportunity of the clinician
or teacher to become ever more sensitive to how particular signs of reading
behavior relate to potentially effective remediation.
Many of the diagnostic instruments that provide computerized scoring are
themselves administered by computer. Branching computer software has the ability
to offer a significantly larger number of packaged items individually to the
student who finds a particular subskill difficult, increasing the reliability of
that subscore. The information produced by such instruments would be of value as
a part of the collection of data that clinicians and teachers consider in
placement and other instructional decisions; it is difficult to see how they can
ever become the single--or even major--informant of such decisions, however.
INCORPORATING COMPUTERIZED DATA INTO INSIGHTFUL CLINICAL DIAGNOSIS
Computerized diagnoses can now assess only the simplest aspects of
comprehension, and that is almost invariably done with multiple-choice items. An
in-depth assessment of comprehension can be made only through careful probing of
the reader's understanding. This demands a face-to-face questioning situation.
Such inventories cannot yet analyze miscues; nor can they analyze or evaluate
responses to open-ended comprehension items. And certainly they cannot note the
frustration or deliberation that Klesius and Homan argue is indicative of
material that is too difficult even when students answer the accompanying
questions correctly. The ability of these computer-driven instruments to
diagnose the problems of individual readers is limited to analyses based on
responses to a very fixed set of questions.
Teachers and clinicians need to make use of many tools to guide their
decisions, and published diagnoses accompanied by computer software are among
them. It is, nonetheless, important to remain aware that--at its best--diagnosis
is a dynamic, insightful process, replete with delicate clinical probing of
children's responses that cannot be replicated by a computer.
Precise assessment of a reader's strategies for handling printed material is
in the realm of the trained diagnostician. It can be obtained only through
careful observation of reading behavior and detailed analysis of the resultant
understanding. A diagnostically oriented directed reading activity or the use of
an individual informal reading inventory is a prerequisite.
REFERENCES
Cadenhead, Kenneth. "Reading level: a metaphor that shapes practice," Phi Delta Kappan, 68 (6), February 1987, pp. 436-441.
Colbourn, Marlene Jones. Computer-guided Diagnosis of Learning Disabilities: A Prototype. Master's Thesis, University of Saskatchewan, Canada, 1982. 203pp. [ED 222 032]
Eldredge, J. Lloyd, and Butterfield, Dennie. "Sacred cows make good hamburger." A report on a reading research project titled "Testing the sacred cows in reading," 1984. 93pp. [ED 255 861]
Forell, Elizabeth R. "The case for conservative reader placement," Reading Teacher, 38 (9), May 1985, pp. 857-862.
Henk, William A. "Reading assessments of the future: toward precision diagnosis," Reading Teacher, 40 (9), May 1987, pp. 860-870.
Klesius, Janell P., and Homan, Susan P. "A validity and reliability update on the informal reading inventory with suggestions for improvement," Journal of Learning Disabilities, 18 (2), February 1985, pp. 71-76.
Newcomer, Phyllis L. "A comparison of two published reading inventories," Remedial and Special Education (RASE), 6 (1), January-February 1985, pp. 31-36.
Nolen, Patricia A., and Lam, Tony C. M. "A comparison of IRI and Durrell Analysis of Reading Difficulty reading levels in clinical assessment," 1981.
Powell, William R. "The emergent reading level: a new concept." Paper presented at the Annual Southeastern Regional Conference of the International Reading Association, 1982. 17pp. [ED 233 334]
Robinson, Helen M. "Visual and auditory modalities related to methods for beginning reading," Reading Research Quarterly, 8 (1), Fall 1972, pp. 7-39.