Quality in Distance Education. ERIC Digest.
by Meyer, Katrina A.
What impact does distance education have on student learning? Is it
more effective than traditional education? Less effective? Or about the
same? Many studies that compare the two find no significant differences
in learning and other outcome measures. The perception is that most studies
on distance education or the use of technology are poorly designed and
prone to incomplete analyses. That certainly is true of the simple comparison
study, where student outcomes (such as course grades) for an online course
are compared to those of a traditional course. This is the source of the
"no significant differences" phenomenon, where possible intervening forces
are ignored. Often, the researcher and instructor are the same person,
further muddying the results. This design is flawed, and the results it
yields are unreliable.
However, there are some very good studies, some quantitative, others
qualitative, and still others thoughtful or theoretical analyses of what
is occurring in online courses. Some of these studies are quite creative
and use interesting approaches to analyze the online course or the student
learning resulting from using the Web in a course. Many of these studies
would pass the harshest peer review criteria. Others are less complicated
but no less worth reading.
It is unlikely we will ever unravel all of the factors that impact online
learning. It is complex and its elements (the technology and the students)
keep changing. Since we have not achieved a definitive answer on quality
for more traditional classroom situations, perhaps it is unwise to expect
such clarity for online learning. However, more understanding is always
better, so the search for clarity will continue.
THE NO SIGNIFICANT DIFFERENCE PHENOMENON
Perhaps the most quoted and misunderstood body of research on distance education
has been the work of Russell (1999), who reviewed 355 studies on distance
education produced from 1928 to 1998. Some of the early studies examined
correspondence courses, but most compared instruction over videotape, interactive
video, or satellite with on-campus, in-person courses. Students were compared
on test scores, grades, or performance measures unique to the study, and
also on student satisfaction. Consistently, based on statistical tests,
"no significant difference" between the comparison groups was found. However,
only 40 of the 355 studies specifically included computer-based instruction,
and the compilation was completed prior to the blossoming of courses using
the Web.
It is important to understand the ramifications of Russell's work. Regardless
of the technology used, the results are the same: no difference in student
achievement. Russell concludes, "There is nothing inherent in the technology
that elicits improvements in learning," although "the process of redesigning
a course to adapt the content to the technology can improve the course
and improve the outcomes" (p. xiii). In other words, learning is not caused
by the technology, but by the instructional method "embedded in the media"
(Clark, 1994, p. 22). Technology, then, is "merely a means of delivering
instruction," a delivery truck, so to speak, that does not influence achievement.
Russell concludes, "No matter how it is produced, how it is delivered,
whether or not it is interactive, low-tech or high-tech, students learn
equally well" (p. xiv). Russell expressed his frustration that, after so
many studies, people continue to believe that technology impacts learning.
OTHER COMPARISON STUDIES
Surprisingly, a large number of studies reviewed for the ASHE-ERIC Report
upon which this Digest is based still compare student achievement between
web-based versus in-person delivery models. Not surprisingly, the results
of studies by Bourne, McMaster, Rieger, and Campbell (1997), Davies and
Mendenhall (1998), Dominguez and Ridley (1999), Gagne and Shepherd (2001),
Hahn et al. (1990), Johnson (2001), McNeil et al. (1991), Miller (2000),
Mulligan and Geary (1999), Ryan (2000), Schulman and Sims (1999), Sener
and Stover (2000), Serban (2000), Wegner, Holloway, and Garton (1999),
and Wideman and Owston (1999) remain largely the same as in Russell's compilation:
comparing the two types of delivery methods leads to a conclusion of no
significant difference in student achievement. However, several of these
studies found differences in completion or student satisfaction, although
final grades or exam scores were often the same, or nearly the same, between
the two types of courses compared.
If the comparison studies (Russell, 1999) accomplished anything, they
established that the technology studied did not make as much of a difference
in the selected learning outcomes as some expected. This may be because
interactive video (two-way audio and video conferencing) duplicates the
traditional, teacher-centered classroom model closely enough to be indistinguishable
from it. Its instructional model is "one-to-many" whether delivered in person
via lecture, via television, or via interactive video. Or as Morrison (2001)
remarked, "If you try to compare media, you have to keep the instruction
constant. If you keep it constant, and the medium does not change the
message/instruction, you will find no differences."
OTHER STUDY RESULTS
Two studies are notable for their use of control variables. Kuh and Vesper
(1999) analyzed data on 125,224 undergraduates and found that students'
growing familiarity with computers was significantly and positively associated
with self-reported gains in self-directed learning, writing, and problem
solving; the study is also unique for having controlled for such factors
as grades, age, gender, parental education, and educational aspirations.
A second study, by Flowers, Pascarella, and Pierson (2000), modeled on the
Kuh and Vesper research, focused on the cognitive impacts of computer use
during the first year of college. Its results did not replicate the positive
findings of Kuh and Vesper (1999): the impact on students at four-year
colleges was nonsignificant, while the results for community college students
were positive, suggesting a difference either in the type of student enrolled
in the two settings or in their experiences while enrolled. Positive results
were also found for the use of word processing on reading comprehension.
AREAS FOR FURTHER RESEARCH
While many aspects of using the web have been investigated, other issues
have not. Research is needed into the usefulness or appropriateness of
the web for different disciplines or learning objectives. Fahy (2000) calls
this "technology's fitness for use" as a teaching tool and asks, as others
have asked, whether the technology is directly related to the learning
outcome. Are some technologies more appropriate for visual-based disciplines
and others better for discourse, as Tuckey (1993) contends? Is the web good
for lower-division courses but inadequate for graduate seminars? And finally,
what is the "best media mix" to achieve different learning goals (Harasim,
1996)? Or, as Barbules and Callister (2000) put the challenge, "Which technologies
have educational potential for which students, for which subject matters,
and for which purposes?"
There may be an emerging answer to the series of "which" questions posed
by Barbules and Callister. In early studies of K-12 students studying science
reviewed by Helgeson (1988), the most effective combination of instructional
opportunities included hands-on laboratory experiences and computer simulations,
improving students' scientific thinking. This review was among the first
to draw attention to the possibility that a mix of media may be the most
powerful means of education. Campos and Harasim (1999) found that 55% of
students preferred mixed-mode classes: those that combine face-to-face and
online activities. Young (2002) describes "hybrid" teaching (or "the convergence
of online and resident instruction") at several universities, which one
university president calls "the single-greatest unrecognized trend in higher
education today." Dziuban and Moskal (2001) found that courses with both
a web and a face-to-face component produced success rates the same as or
better than those of courses that were fully online or fully face-to-face.
This result teases us into asking whether there is some optimal combination
of technologies - not limited to face-to-face, interactive video, and the
Web - that maximizes learning based on the needs of the curriculum, the
type of learning desired, and learner characteristics. Over time, the correct
question to ask may be not which is better, but what combination is best.
Much of the research on Web-based courses (whether these are comparison
studies or case studies) indicates that students do as well or better and
are satisfied with their learning experiences. Ample interaction (with
material, students, and faculty) and constructivist learning situations
(e.g., project- and problem-based learning) enabled by the Web may be the
key to this improved performance. But student learning may also depend
on a number of individual qualities, including a positive attitude and
motivation, independence and sufficient computer skills, as well as a predominantly
visual learning style and an understanding that learning is not a passive
process of absorbing information. These individual differences will make
it difficult to promote any one approach as good for everyone.
REFERENCES
Bourne, J.R., McMaster, E., Rieger, J., & Campbell, J.O. (1997).
Paradigms for on-line learning. Journal of Asynchronous Learning Networks,
Campos, M., & Harasim, L.M. (1999, July/August). Virtual-U: Results
and challenges of unique field trials. Technology Source. [http://horizon.unc.edu/TS/default.asp?show=article&id=562]
Clark, R.E. (1994). Media will never influence learning. Educational
Technology Research and Development, 42(2), 21-29.
Davies, R.S., & Mendenhall, R. (1998). Evaluation comparison of
online and classroom instruction for HEPE 129-Fitness and Lifestyle Management
course. (ED 427 752)
Dominguez, P.S., & Ridley, D. (1999). Reassessing the assessment
of distance education courses. T.H.E. Journal, 27(2). [http://www.thejournal.com/magazine/vault/A2223.cfm]
Dziuban, C., & Moskal, P. (2001). Evaluating distributed learning
at metropolitan universities. Educause Quarterly, 24(4), 60-61.
Fahy, P.J. (2000). Achieving quality with online teaching technologies.
(ED 439 234)
Flowers, L., Pascarella, E.T., & Pierson, C.T. (2000). Information
technology use and cognitive outcomes in the first year of college. Journal
of Higher Education, 71(6), 637-667.
Gagne, M., & Shepherd, M. (2001). A comparison between a distance
and a traditional graduate accounting class. T.H.E. Journal, 28(9). [http://www.thejournal.com/magazine/vault/A3433.cfm]
Hahn, H.A., & others. (1990). Distributed training for the reserve
component: Remote delivery using asynchronous computer conferencing. (ED
Harasim, L., & others. (1996). Learning networks. Cambridge, MA:
Helgeson, S.L. (1988). Microcomputers in the science classroom. ERIC/SMEAC
Science Education Digest, no. 3. (ED 309 050).
Johnson, S.M. (2001). Teaching introductory international relations
in an entirely web-based environment: Comparing student performance across
and within groups. ED at a Distance, 15(10).
Kuh, G., & Vesper, N. (1999). Do computers enhance or detract from
student learning? Paper presented at the annual meeting of the American
Educational Research Association, Montreal, Quebec.
McNeil, D.R., & others. (1991). Computer conferencing project. Final
report. (ED 365 307)
Miller, B. (2000). Comparison of large-class instruction versus online
instruction: Age does make a difference. [http://leahi.kcc.hawaii.edu/org/tcon2k/paper/paper_millerb.html]
Morrison, G.R. (2001). Theory, research and practice. ED at a Distance,
Mulligan, R., & Geary, S. (1999). Requiring writing, ensuring distance-learning
outcomes. International Journal of Instructional Media, 26(4), 387-395.
Russell, T.L. (1999). The no significant difference phenomenon. Raleigh:
North Carolina State University.
Ryan, R.C. (2000). Student assessment comparison of lecture and online
construction equipment and methods classes. T.H.E. Journal, 27(5). [http://www.thejournal.com/magazine/vault/A2596.cfm]
Schulman, A.H., & Sims, R.L. (1999). Learning in an online format
versus an in-class format: An experimental study. T.H.E. Journal, 26(11).
Sener, J., & Stover, M.S. (2000). Integrating ALN into an independent
study distance education program: NVCC case studies. Journal of Asynchronous
Learning Networks, 4(2). [http://www.aln.org/alnweb/journal/Vol4_issue2/le/sener/le-sener.htm]
Serban, A.M. (2000). Evaluation of fall 1999 online courses. ED at a
Distance, 14(10). [http://www.usdla.org/html/journal/OCT00_Issue/story04.htm]
Tuckey, C.J. (1993). Computer conferencing and the electronic white
board in the United Kingdom: A comparative analysis. American Journal of
Distance Education, 7(2), 58-72.
Wegner, S.B., Holloway, K.C., & Garton, E.M. (1999). The effects
of Internet-based instruction on student learning. Journal of Asynchronous
Learning Networks, 3(2). [http://www.aln.org/alnweb/journal/Vol3_issue2/Wegner.htm]
Wideman, H., & Owston, R.D. (1999). Internet-based courses at Atkinson
College: An initial assessment. [http://www.edu.yorku.ca/irlt/reports/techreport99-1.htm]
Young, J.R. (2002, March 22). "Hybrid" teaching seeks to end the divide
between traditional and online instruction. Chronicle of Higher Education,
48(28), A33. [http://chronicle.com/free/v48/i28/28a03301.htm]