ERIC Identifier: ED447199
Publication Date: 2000-11-00
Author: Brem, Sarah K. - Boyes, Andrea J.
Clearinghouse on Assessment and Evaluation, College Park, MD.
Using Critical Thinking To Conduct Effective Searches of Online
Resources. ERIC Digest.
More than 80 percent of academic, public, and school libraries offer some form
of Internet access (American Library Association, 2000); thousands of full-text
electronic journals and serials are available online. However, most searches of
these materials are cursory and ineffective (Hertzberg & Rudner, 1999). This
Digest complements guidelines addressing the mechanics of online searching by
considering how treating information searches as exercises in critical thinking
can improve our use of online resources. It addresses the use and application of
metacognition, hypothesis testing, and argumentation.
Metacognition
Metacognition is thinking about thinking
(Butler & Winne, 1995): What do I know? What do I not know? Will I ever find
an answer? Knowing what we don't know helps us focus our questions, and how long
and hard we look for an answer depends on how likely it seems that we'll find one.
Suppose we want to assess the wisdom of high-stakes testing, but are
unfamiliar with the issue. We might simply enter the phrase "High Stakes Tests"
into the ERIC database and retrieve 90 citations. If we quit there, however, we
miss items that would be retrieved by combining terms such as "Accountability"
with "Test Validity" or "Educational Testing." These searches would produce
dozens of additional citations for journal articles, papers, and the like, thus
enriching our inquiry. At the other end of the spectrum, we may waste time
looking for information that no one has, such as how a small subset of the
population performs on a particular test. In short, we need to be able to assess
the quality of our search.
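To make this concrete, here is a minimal sketch in Python; the records, titles, and descriptors below are invented for illustration and do not reflect the actual ERIC query syntax or its holdings:

# Illustrative only: records and descriptors are invented; real ERIC
# searches use the database's own query interface.
records = [
    {"title": "Rethinking High Stakes Tests",
     "descriptors": {"High Stakes Tests", "Test Validity"}},
    {"title": "Accountability in State Assessment",
     "descriptors": {"Accountability", "Educational Testing"}},
    {"title": "Copyright Issues in Test Publishing",
     "descriptors": {"Copyrights", "Educational Testing"}},
]

def search(recs, all_of=(), any_of=()):
    """Return records containing every term in all_of and, if any_of
    is given, at least one term in any_of."""
    return [r for r in recs
            if all(t in r["descriptors"] for t in all_of)
            and (not any_of or any(t in r["descriptors"] for t in any_of))]

# The single phrase finds one record...
print([r["title"] for r in search(records, all_of=("High Stakes Tests",))])
# ...while combining terms retrieves an item the first search missed.
print([r["title"] for r in search(records, all_of=("Accountability",),
                                  any_of=("Test Validity", "Educational Testing"))])

The single-phrase search misses the second record entirely; the combined search retrieves it.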
Once they locate information, people often overlook inconsistencies or
conflicts. Searches typically produce a loosely-connected cluster of articles of
varying relevance and contrasting opinions. Inquiries are often weakened because
disconnected knowledge allows conflicts between articles to go undetected;
positions are not explicitly compared. In addition, inconsistencies within a
text may be overlooked because readers tend to form a framework early on--we
think we know what the article is about, and miss anything that doesn't fit our
framework (Otero & Kintsch, 1992).
Can We Improve Metacognition in Online Searching?
Improving metacognition means improving our ability to monitor what we know
and how we know it. Here are some ways to accomplish this:
Put the project aside for a brief time. Taking a break helps in several ways.
When immersed in the process, people often feel they've learned more than they
really have. Nelson and Dunlosky (1991) found that a short break improves the
ability to accurately assess what's been learned. Also, returning to a problem
repeatedly over time improves memory and comprehension, and allows us to take a
slightly different perspective each time.
Talk it out. Chi, deLeeuw, Chiu, & LaVancher (1994) found that keeping up
a running dialogue with oneself is effective in highlighting inconsistencies and
gaps in knowledge. Suppose we read a paper on testing and come across the claim
that "passing cutoffs are set arbitrarily." As we attempt to tell ourselves what
arbitrary cutoffs means, we realize we don't really know. We can then reread
looking for this information, or ferret out additional sources.
Once we've collected a substantial body of knowledge, we can lay out the pros
and cons to ourselves or a live audience. Concept mapping can also improve
metacognition. Its use is discussed below.
Develop content knowledge. Brem & Rips (in press) found that people who
are capable of critical thinking nevertheless fall for weaker arguments when
they lack relevant information. Thus, to a certain extent, metacognition and an
effective inquiry depend upon building expertise. Nevertheless, we can
compensate in the early stages by taking advantage of the content support
afforded by online resources.
Many databases provide thesauri--lists of alternative ways of accessing a
content area. For example, the ERIC Wizard (http://searcheric.org) uses a
thesaurus for widening and narrowing searches. We can construct our own thesauri
as well. Examining our initial hits on "High Stakes Tests," we find other
descriptors and keywords associated with these articles--some relevant
("Accountability"), some not ("Copyrights"). The most relevant become our
thesaurus and guide additional searches.
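As a rough sketch of this process (Python; the descriptor lists below are invented stand-ins for real ERIC records), building a working thesaurus amounts to counting the descriptors that co-occur with our initial hits:

from collections import Counter

# Invented stand-ins for the records an initial "High Stakes Tests"
# search might return.
initial_hits = [
    {"descriptors": ["High Stakes Tests", "Accountability", "Test Validity"]},
    {"descriptors": ["High Stakes Tests", "Accountability", "Copyrights"]},
    {"descriptors": ["High Stakes Tests", "Educational Testing"]},
]

# Count every descriptor except the one we already searched on.
counts = Counter(d for rec in initial_hits for d in rec["descriptors"]
                 if d != "High Stakes Tests")

# The most frequent co-occurring descriptors become a working thesaurus;
# irrelevant ones (e.g., "Copyrights") still get pruned by hand.
thesaurus = [term for term, _ in counts.most_common()]
print(thesaurus)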
When you don't know, find someone who does. We're often reluctant to admit
ignorance, but if we've already tried the strategies above, it's likely that the
remaining questions are good, hard questions. Reference librarians, instructors
and colleagues can help in locating additional sources and perspectives.
Expertise is also available on demand through ERIC Digests and ERIC FAQs
(http://ericae.net/nav-lib.htm), which consolidate and synthesize existing
information. These documents also help in developing a sense of the overall
quality and quantity of evidence available about a topic. For the testing
example, ERIC has ten FAQs related to assessment, and ten Digests are retrieved
by the phrase "high stakes tests." The syntheses of others cannot substitute for
working through the issue; in fact, our preparation will help us read these
documents with a critical eye and extract relevant information.
Hypothesis Testing
Searching the literature should be an
exercise in hypothesis testing. We hold a certain position on an issue, or
construct a position along the way. As we proceed, we need to test and modify
this position. The problem is that hypothesis testing is often self-fulfilling.
Once we form an opinion, we tend to focus on sources that support our position
and distort data to make the strongest case (Koehler, 1991). Fortunately, we can
combat this process.
Can We Improve Hypothesis Testing in Online Searching?
Actively pursue alternative hypotheses. We need to fight the tendency to
consider only one side of a debate. One of the easiest ways to do this is simply
to consider the opposite. Suppose we uncover evidence supporting high-stakes
testing. Formulate the opposite opinion--high-stakes testing is a bad idea--and
actively work to support this claim. Once we've made an earnest attempt to
explore both claims, we can weigh the positions side by side.
Develop an evaluativist stance. People frequently fall into an absolutist or
multiplist perspective. They see the world in black and white, with clear right
and wrong answers (Absolutist), or as filled with myriad possibilities, all of
which are more or less equally valid (Multiplist). In contrast, adopting an
evaluativist viewpoint involves recognizing that while there may be no single
right answer, there are better and worse answers, and we can identify them by
weighing the
evidence. Evaluative approaches are associated with more effective reasoning
(Kuhn, 1991), and the strategies described in the next section can aid in the
development of this stance.
Argumentation
As we encounter different perspectives, we
need a way to decide among them. Which position does the evidence best support?
Which sources of evidence and opinions are most reliable? Once we adopt an
evaluativist stance, argumentation strategies help us carry out our evaluation.
Can We Improve Argumentation in Online Searching?
Consider the structure and reliability of a source. For example, ERIC is a
self-contained resource; all information accessed within ERIC meets ERIC
standards. In contrast, Web sites often link multiple sources--some more
reliable, some less reliable than the site we came from. We need to assess the
reliability of every source before we include it in our analysis. Critical
thinking guidelines (e.g., Harris, 1997; Kirk, 2000) provide criteria for
making these assessments.
Remember that even reputable sources are fallible. Even the most trusted
resource is the work of many people who have different ideas regarding what an
article is about and how to describe it. They can make typographical errors.
These inconsistencies and mistakes can compromise an inquiry, so it's important
to ask whether the results of a search are accurate and complete. The initial
goal should be to collect as much relevant information as possible, as it is
always possible to narrow the search later.
First, don't initially limit the terms of a search; a broad range of keywords
and descriptors increases the likelihood of hitting on the terms chosen by the
person entering the record. Second, don't limit which fields are searched. For
example, ERIC has "major descriptors" and "minor descriptors"; searching on both
maximizes the number of hits. Another example is limiting searches on an
author's name to the author field. This seems reasonable, but it misses items
with the author's name in the abstract or text, which often present the
arguments of opponents and supporters, key pieces of the puzzle. Finally,
consider searching on common misspellings, or truncating a term using wildcards
to include variations.
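A small sketch of what truncation and field-wide searching buy us (Python regular expressions over invented records; real databases implement wildcards in their own query syntax):

import re

# Invented records; a real database indexes these fields itself.
records = [
    {"author": "Smith, J.",
     "abstract": "A reply to critics of state testing programs."},
    {"author": "Jones, R.",
     "abstract": "Smith argues that passing cutoffs are set arbitrarily."},
]

# Truncation: the wildcard pattern test* matches test, tests, testing, ...
pattern = re.compile(r"\btest\w*", re.IGNORECASE)
print([r["author"] for r in records if pattern.search(r["abstract"])])

# Searching every field, not just the author field, also catches items
# that discuss an author's work without listing that person as author.
print([r["author"] for r in records
       if any("smith" in str(value).lower() for value in r.values())])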
Use systematic analysis for a comprehensive (though time-consuming)
evaluation. Systematically analyzing an issue takes some time and effort, but
generally provides the most complete and accurate evaluation. Systematic
analysis involves identifying each claim and asking whether each piece of
evidence really supports or refutes it. One popular aid to systematic analysis
is using concept maps to visualize the relationship between claims and evidence.
For example, suppose we are searching to see whether we should accept the
claim that testing improves student outcomes. We place this claim on a map. When
our searches produce a piece of information that supports or attacks this claim,
we place a brief description of the evidence on the map and draw lines
connecting evidence to claims, choosing lines of different colors or styles to
distinguish between supporting and refuting evidence. We also connect pieces of
evidence when they attack one another or back each other up. Font size is one
way to indicate source reliability (e.g., bigger means more reliable). A map can
be made for each alternative viewpoint.
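A minimal data-structure sketch of such a map (Python; the claim, evidence, and reliability weights are invented examples), including the kind of reliability-weighted tally discussed next:

# Illustrative only: the claim, evidence, and reliability scores are made up.
claim = "Testing improves student outcomes"

# Each entry links evidence to the claim; 'stance' plays the role of
# line color and 'reliability' the role of font size on a drawn map.
evidence = [
    {"note": "Scores rose after testing was introduced",
     "stance": "supports", "reliability": 0.8},
    {"note": "Gains reflect teaching to the test",
     "stance": "refutes", "reliability": 0.7},
    {"note": "Anonymous anecdote from a discussion board",
     "stance": "supports", "reliability": 0.2},
]

def weigh(entries):
    """Reliability-weighted support minus refutation."""
    return sum(e["reliability"] * (1 if e["stance"] == "supports" else -1)
               for e in entries)

print(f"{claim}: balance {weigh(evidence):+.1f}")
# A strongly positive balance suggests a dense web of support; a value
# near zero suggests mixed evidence and a reason to keep looking.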
In the resulting visual representation of the debate, a dense web of
supporting evidence gives us a solid basis for accepting a claim, and a dense
web of refutations provides us with reason to reject it. If the evidence seems
evenly mixed, or if two alternatives produce equally strong maps, we can
continue looking, or we may simply decide that there is no consensus on this
issue. In addition, maps support metacognition; holes and smaller text mean
holes and weaknesses in the argument, telling us where more information is
needed.
Mapping software can facilitate the process (commercial and shareware
packages are reviewed at
http://www.ozemail.com.au/~caveman/Creative/Software/swindex.htm), but paper and
pencil will do. If mapping proves too time-consuming, even a simple list of
points for and against a claim is useful. For important decisions, though,
mapping is preferred because it captures how claims and evidence are related.
Use heuristics for quick, approximate judgments. Heuristics are useful when we
need to make quick decisions, when there is not
enough information for systematic analysis, or to complement systematic
approaches. Heuristic evaluation involves making a calculated guess about the
quality of an argument. It's usually easy, but not always accurate. For example,
deciding to trust someone's argument because he or she holds a position at a
prestigious university is a heuristic--we haven't actually taken the argument
apart. It's often a good guess, but even Nobel prize winners have been known to
hold a crackpot theory or two. The critical thinking guides mentioned above
discuss signs of reliability, and incorporating these into concept maps can
enrich our evaluation.
Perhaps the biggest challenge in using heuristics is remembering that a guess
is only a guess. This is a metacognitive issue of remembering how we know what
we know. Talking out inquiries will help highlight the assumptions underlying
heuristics, and using a special color for heuristic contributions to concept
maps keeps their status clear.
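Continuing the hypothetical map sketch above, a simple flag can play the role of that special color, keeping heuristic contributions distinguishable from evidence we have actually analyzed:

# Hypothetical extension of the evidence entries sketched earlier.
evidence = [
    {"note": "Peer-reviewed validity study", "stance": "supports",
     "reliability": 0.9, "via_heuristic": False},
    {"note": "Trusted because the author holds a prestigious post",
     "stance": "supports", "reliability": 0.6, "via_heuristic": True},
]

# Listing heuristic contributions separately reminds us which parts of
# a conclusion rest on calculated guesses rather than analysis.
print("Resting on heuristics:",
      [e["note"] for e in evidence if e["via_heuristic"]])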
Conclusion
Searching for information online is an exercise
in critical thinking, and becoming an expert in critical inquiry takes practice.
The guidelines provided above can help in directing and channeling this
practice. They offer scaffolding while we gain expertise.
References
American Library Association (2000). LARC Fact Sheet No. 26: How many
libraries are on the Internet? [Available online].
Brem, S. K., & Rips, L. J. (in press). Explanation and evidence in informal
argument. Cognitive Science.
Butler, D., & Winne, P. (1995). Feedback and self-regulated learning: A
theoretical synthesis. Review of Educational Research, 65, 245-281.
Chi, M. T. H., deLeeuw, N., Chiu, M., & LaVancher, C. (1994). Eliciting
self-explanations improves understanding. Cognitive Science, 18, 439-477.
Harris, R. (1997). Evaluating Internet research sources. [Available online].
Hertzberg, S., & Rudner, L. (1999). The quality of researchers' searches
of the ERIC database. Education Policy Analysis Archives. [Available online].
Kirk, E. E. (2000). Evaluating information found on the Internet. [Available
online: http://milton.mse.jhu.edu:8001/research/education/net.html].
Koehler, D. (1991). Explanation, imagination, and confidence in judgment.
Psychological Bulletin, 110, 499-519.
Kuhn, D. (1991). The skills of argument. Cambridge: Cambridge University
Press.
Nelson, T. O., & Dunlosky, J. (1991). When people's judgments of learning
(JOLs) are extremely accurate at predicting subsequent recall: The 'delayed-JOL
effect.' Psychological Science, 2, 267-270.
Otero, J., & Kintsch, W. (1992). Failures to detect contradictions in a
text: What readers believe versus what they read. Psychological Science, 3.
This Digest is based on a paper originally appearing in Practical Assessment,
Research & Evaluation, 7 (7).