ERIC Identifier: ED470202
Publication Date: 2002-07-01
Author: Shannon, David M. - Johnson, Todd E. - Searcy, Shelby - Lott,
Source: ERIC Clearinghouse on Assessment and Evaluation, College Park, MD.
Survey Professionals Using Electronic Surveys. ERIC Digest.
While an extensive body of literature exists regarding the principles of
survey design and the factors influencing response to mail and telephone
surveys, the applicability of this literature to electronic surveys is not yet
clear. We know, for example, the importance of the first question in mail and
telephone surveys, as well as the grouping and sequencing of questions,
establishing a respondent-pleasing vertical flow of items in the survey, and
having clear, specific directions. The literature also shows the importance of
prenotification of respondents, personalized cover letters, incentives, return
postage, and multiple contacts to reach respondents and generate higher response
rates. Are these factors also important in electronic surveys?
This Digest summarizes the results of a survey administered to the American
Educational Research Association's Survey Research Special Interest Group
regarding the use of electronic surveys and discusses their responses within the
context of the existing literature base. Topics addressed include conditions
under which the use of e-mail or web-based surveys would be most appropriate,
sampling issues, weaknesses of the approach, and guidelines for other
researchers who plan to use email or the Internet for survey research projects.
Use of Electronic Mail and the Internet
Overall, the sample participants (n=63) reported frequent use of, and a high
level of confidence in using, electronic mail and the Internet. Ninety percent
reported using email every day, and 78% reported using the Internet at least 5
days per week. They reported being very confident in their ability to compose
and respond to messages, send messages to more than one person, and send
attachments. They were also confident about their ability to use the Internet to
find a web address, use a search engine, and download information. The only area
in which these participants expressed a concern was creating and maintaining a
web page.
GENERAL PERCEPTIONS OF ELECTRONIC SURVEYS
The sample was asked to respond to 33 Likert-scale items pertaining to the use of email or
web-based surveys. Six of these items were reverse-coded so that a higher score
would consistently reflect a more favorable attitude toward the use of email or
web-based surveys. Internal consistency reliability (Cronbach's alpha) was
estimated at .83. Overall, participants responded favorably to statements
regarding the use of email- or web-based surveys.
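To make the scoring procedure concrete, the Python sketch below illustrates reverse-coding and the computation of Cronbach's alpha. The data are randomly generated, and the choice of which six columns are reverse-coded is a hypothetical stand-in; neither reflects the actual study data.

    import numpy as np

    def reverse_code(items, scale_min=1, scale_max=5):
        # Flip a Likert item so a higher value consistently reflects a
        # more favorable attitude (on a 5-point scale, 1<->5 and 2<->4).
        return scale_min + scale_max - items

    def cronbach_alpha(data):
        # alpha = (k/(k-1)) * (1 - sum(item variances) / variance(totals))
        k = data.shape[1]
        item_vars = data.var(axis=0, ddof=1).sum()
        total_var = data.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Hypothetical data: 63 respondents x 33 five-point items, with the
    # first six columns standing in for the reverse-coded items.
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(63, 33)).astype(float)
    responses[:, :6] = reverse_code(responses[:, :6])
    print(f"alpha = {cronbach_alpha(responses):.2f}")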
These survey professionals were most positive about the reduction of costs
(e.g., postage, phone charges) associated with electronic surveys, the use of
electronic mail for prenotification or follow-up purposes as a complement to
other survey delivery methods, and the compatibility of data with existing
software programs. They also indicated a belief that the lack of a tangible
reward would not prevent individuals from responding and that they would likely
respond to a web-based survey if all they had to do was click on a link in
an email message.
The bulk of the less favorable responses pertained to concerns about
respondents' knowledge of, and experience with, technology. They believed that
individuals who were not comfortable with technology would not respond. In
addition, they indicated beliefs that electronic surveys are less personalized
than traditional mail surveys, that people will make more mistakes when
responding, that their responses will be influenced by issues of social
desirability, and that they will not complete as many items as they might have
in a pencil-and-paper survey. Finally, the survey researchers expressed a need
for passwords to access web-based surveys along with concerns that respondents
would not be likely to respond to sensitive issues, or might not even respond at
all, due to fears about anonymity.
In a few areas, the opinions of the survey professionals were balanced; in
other words, they agreed and disagreed in almost equal numbers. These items
included the comparability of response rate and reliability estimates for
electronic and mail surveys, the extent to which people prefer hard copies of
surveys or find electronic surveys more interesting, and the appropriateness of
listservs as a sampling source for electronic surveys.
Consistent with prior literature (Bowers, 1999;
Crawford, Couper & Lamias, 2001; Eaton, 1997; Kaye & Johnson, 1999;
Kiessler & Sproull, 1986; Weissbach, 1997), the primary concerns expressed
by survey professionals in this study regarded sampling issues. These concerns
regarded a sample's access to and ability to use the required technology, their
authenticity, and their privacy.
Samples with access to the Internet have not typically represented the
general population (GVU, 1998; Sheehan & Hoy, 1999). However, the Internet
is becoming increasingly accessible to the general population: approximately
41.5% of U.S. households now have access, an increase of 58% in less than two
years (U.S. Department of Commerce, 2000). Access is still more frequent among
those who live in urban areas, with higher incomes and higher levels of
education. However, the most rapid increases in access are occurring in rural
areas, among individuals with some college experience, and among those over 50
(U.S. Department of Commerce, 2000). The increase in Internet access and
reliable e-mail addresses will allow for a broader range of future samples.
Researchers must also recognize that samples will vary a great deal in their
technological capability, both in terms of equipment and respondent knowledge
and skill. This variation must be kept in mind when designing electronic
surveys. Although web-based surveys offer far more innovative features than
plain text e-mail surveys, respondents may have difficulty accessing such a
survey. Further, most people are not accustomed to the process used to respond
to an electronic survey (e.g., selecting from a pull-down menu, clicking a
radio button, scrolling from screen to screen) and will need specific
instructions that guide them through each question and the manner in which they
should respond. Survey professionals recommend that samples be prenotified via
e-mail to determine the technological capacity of the sample and their
willingness to participate in the study. This will help ensure that the survey
will be accessible to members in the sample and help prevent the perceptions of
"spamming" that might occur due to continued unsolicited e-mail messages (Mehta
& Sivadas, 1995; Sheehan & Hoy, 1999). The communication should be
personalized and provide for the essential elements of mailed cover letters,
including a clear overview of the study's purpose, motivation to respond,
assurances of confidentiality and privacy, and a person to contact with
questions. A recent meta-analysis of electronic survey studies found
personalized prenotification and number of contacts to influence response rate
(Cook, Heath, & Thompson, 2000).
Once samples are identified and prenotified, they need to be protected in
terms of their authenticity, confidentiality, and privacy. Measures should be
taken to reduce sampling error. Access to web-based surveys must be limited to
the targeted sample; unrestricted sample surveys that allow anyone access are
unacceptable. Whereas many unscientific online polls boast large samples, there
is often little or no attempt to ensure the quality and validity of such
samples. Samples must be clearly defined and authenticated. Researchers should
consider using passwords or PINs to control for sampling error and
establish credible samples (Bowers, 1999; Bradley, 1999; Dillman, Tortora, &
Bowker, 1998). If passwords or PINs are not used, responding samples
should be carefully examined. Those not eligible should be eliminated to
maintain consistency with the sampling plan and yield credible results.
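A minimal sketch of such sample control, in Python with hypothetical file and field names, issues one random PIN per sample member and retains only responses carrying an issued, previously unused PIN:

    import csv
    import secrets

    def issue_pins(sample_emails):
        # One random 8-character PIN per sample member, generated before
        # the survey opens and distributed in the prenotification e-mail.
        return {email: secrets.token_hex(4) for email in sample_emails}

    def filter_eligible(responses_path, issued):
        # Keep only responses whose PIN was actually issued; each PIN is
        # honored once, so duplicate and out-of-sample submissions drop out.
        valid = set(issued.values())
        kept, seen = [], set()
        with open(responses_path, newline="") as f:
            for row in csv.DictReader(f):
                pin = row.get("pin", "")
                if pin in valid and pin not in seen:
                    seen.add(pin)
                    kept.append(row)
        return kept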
Additional precautions must be taken to protect respondents' privacy and
ensure the confidentiality of their responses. Several researchers have
experienced negative feedback from respondents regarding privacy issues (Couper,
Blair, & Triplett, 1997; Mehta & Sivadas, 1995; Sheehan & Hoy,
1999). In analyzing server logs from electronic surveys, Jeavons (1998) found
that individuals stopped completing surveys when their email address was
requested. Minimally, researchers should make assurances of confidentiality in
the prenotification e-mail (Couper, Blair, & Triplett, 1997; Kiesler &
Sproull, 1986; Schaefer & Dillman, 1998). Further protection of
respondents' privacy can be provided by separating e-mail addresses upon receipt
of the completed surveys (Sheehan & Hoy, 1999) or programming the return to
include the researcher's email address, not that of the respondent (Shannon
& Bradshaw, 2000). Using secure servers and encryption methods affords
additional protection of respondents' privacy.
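One possible implementation of this separation, sketched in Python with hypothetical field names, replaces each respondent's e-mail address with a one-way hash as the survey is received, so duplicate submissions can still be detected without storing the address itself:

    import hashlib

    def separate_identity(record):
        # Drop the respondent's e-mail address on receipt, keeping only a
        # one-way hash so duplicate submissions can still be flagged.
        email = record.pop("email", "")
        record["respondent_id"] = hashlib.sha256(
            email.strip().lower().encode()
        ).hexdigest()[:12]
        return record

    # Hypothetical usage:
    # separate_identity({"email": "jane@example.org", "q1": "4", "q2": "2"})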
In conclusion, web-based electronic surveys must use principles of sound
survey design. Research studies must also focus on the adaptability of such
principles for electronic survey formats so that survey professionals can take
full advantage of the benefits of such surveys without sacrificing the integrity
of their data or placing respondents at risk in terms of confidentiality and
privacy.
REFERENCES
Bowers, D.K. (1999). FAQs on online research.
Marketing Research, 10 (1): 45-48.
Bradley, N. (1999). Sampling for Internet surveys: An examination of
respondent selection for Internet research. Journal of the Market Research
Society, 41 (4): 387-395.
Cook, C., Heath, F., & Thompson, R. (2000). A meta-analysis of response
rates in web- or Internet-based surveys. Educational & Psychological
Measurement, 60 (6): 821-836.
Couper, M.P., Blair, J., & Triplett, T. (1997). A comparison of mail
versus email for surveys of employees in federal statistical agencies. Paper
presented at the annual meeting of the American Association for Public Opinion
Research, Norfolk, VA.
Crawford, S.D., Couper, M.P., & Lamias, M.J. (2001). Web surveys:
Perception of burden. Social Science Computer Review, 19, 146-162.
Dillman, D. A., Tortora, R. D., & Bowker, D. (1998). Principles for
constructing web surveys: An initial statement. (Technical Report No. 98-50).
Pullman, WA: Washington State University Social and Economic Sciences Research Center.
Eaton, B. (1997). Internet surveys: Does WWW stand for "Why waste the work?"
Marketing Research Review, June/July, Article 0244. Available:
GVU's 10th WWW User Survey (October 1998). General Demographic Summary.
Jeavons, A. (1998). Ethology and the Web: Observing respondent behavior in
Web surveys. Proceedings of the Worldwide Internet Conference, Amsterdam:
ESOMAR. Available: http://w3.one.net/~andrewje/ethology.html.
Kaye, B. K., & Johnson, T. J. (1999). Research methodology: Taming the
cyber frontier. Social Science Computer Review, 17, 323-337.
Kiesler, S., & Sproul, L.S. (1986). Response effects in the electronic
survey. Public Opinion Quarterly, 50, 402-413.
Mehta, R., & Sivadas, E. (1995). Comparing response rates and response
content in mail versus electronic mail surveys. Journal of the Market Research
Society, 37 (4): 429-439.
Schaefer, D. R., & Dillman, D. A. (1998). Development of a standard e-mail
methodology: Results of an experiment. Public Opinion Quarterly, 62 (3): 378-397.
Shannon, D. M., & Bradshaw, C. C. (2002). A comparison of response rate,
speed and costs of mail and electronic surveys. Journal of Experimental
Education, 70 (2), in press.
Sheehan, K. B., & Hoy, M. G. (1999). Using e-mail to survey Internet
users in the United States: Methodology and assessment. Journal of Computer
Mediated Communication, 4 (3). Available:
U.S. Department of Commerce (2000, October). Falling through the net: Toward
digital inclusion. Washington, DC: Author.
Watt, J. H. (1997). Using the Internet for quantitative survey research.
Marketing Research Review, June, Article 0248. Available: http://www.Quirks.com
Weissbach, S. (1997). Internet research: Still a few hurdles to clear.
Marketing Research Review, June/July, Article 0249. Available: