JCMC 6 (1) September 2000
Comparative Response to a Survey Executed by Post, E-mail, & Web Form
Gi Woong Yun
School of Journalism and Mass Communication
The University of Wisconsin, Madison
Craig W. Trumbo
Department of Life Sciences Communication
The University of Wisconsin, Madison
Abstract

Recent developments in communication technologies have created alternative survey methods through e-mail and Web sites. Both methods use electronic text communication, require fewer resources, and provide faster responses than traditional paper-and-pencil methods. However, new survey methodologies also generate problems involving sampling, response consistency, and participant motivation. Empirical studies need to be done to address these issues as researchers implement electronic survey methods.
In this study we conduct an analysis of the characteristics of three survey response modes: post, e-mail, and Web site. Data are from a survey of the National Association of Science Writers (NASW), in which science writers' professional use of e-mail and the Web is evaluated.
Our analysis offers two lessons. First, a caution. We detect a number of potentially important differences in the response characteristics of these three groups. Researchers using multi-mode survey techniques should keep in mind that subtle effects might be at play in their analyses. Second, an encouragement. We do not observe significant influences of survey mode in our substantive analyses. We feel, at least in this case, that the differences detected in the response groups indicate that using multi-mode survey techniques improved the representativeness of the sample without biasing other results.
Introduction

The authors of this article are involved in a larger research project that provides a vehicle for the exploration of some of the methodological issues in electronic survey research. The larger project involves an examination of how journalists, specifically science writers, are making use of e-mail and the Web to go about the task of newsmaking (Trumbo et al., 1999). To investigate this question, the membership of the National Association of Science Writers (NASW) was surveyed on their use of e-mail and the Web. A wide variety of questions was asked, including items involving length of e-mail use, task and social patterns of e-mail use, perceptions of the Web as an innovation, and favorability toward the Web as a newsmaking tool, as well as demographic and innovator characteristics of the respondents. The analyses we present here look at how these variables differ as a function of the means by which the survey was completed: paper, e-mail, or Web form.
Before presenting the details of the study and its findings, we will first explore the literature on electronic survey methods. Since this is becoming a substantial and important literature, we explore a number of topics in some detail. We look at the relative cost of paper versus electronic surveys, issues of sampling and response rates, multiple contact strategies, quality of response data, geographical advantages, ethical issues and some technical problems of electronic surveys. We then note a few unique aspects of Web surveys.
It might be worthwhile to open this discussion in terms of electronic versus surface mail methods (we do not treat phone surveys in this article, but recognize that a comparison of online and phone methods is without question warranted). Tse (1998) summarized six advantages of e-mail surveys over traditional mail methods: e-mail is cheaper, it eliminates tedious mail processes, it is faster in transmission, it is less likely to be ignored as junk mail, it encourages respondents to reply, and it can be construed as environmentally friendly. The list of advantages may grow as Internet and Web technology develop rapidly. However, even the obvious advantages may have hidden costs, and researchers need to consider all the subtleties of these new methods.
Cost. One of the major advantages often claimed for electronic surveys is the minimal cost. A number of researchers have suggested that e-mail surveys cost less than mail surveys (Bachmann & Elfrink, 1996; Kiesler & Sproull, 1986; Parker, 1992; Schaefer, 1998; Sproull, 1986). It is true that electronic surveys reduce paper waste. However, Bachmann and Elfrink (1996) point out that hard-to-estimate human labor costs can easily be overlooked when calculating the true costs of e-mail surveys.
Watt (1999) reports a detailed examination of the cost of electronic survey methods compared to other survey methods. He reports that at a sample size of 10,000 respondents the typical cost per response is $0.65 for a Web survey and $1.64 for an e-mail survey, with both methods costing less than a surface mail survey. Watt also provides graphs comparing the cost of performing a Web Common Gateway Interface (CGI) survey [1], a non-CGI Web survey, a mail survey, an e-mail survey, a Computer Aided Telephone Interviewing (CATI) survey, and a non-CATI telephone survey. The evidence he presents confirms other researchers' findings that the costs of e-mail and Web surveys decrease dramatically as the sample size increases.
Additionally, Watt's analysis focuses on the possible cost barriers to implementing the Web CGI survey. He points out that not only is the initial cost of a Web CGI survey higher than that of other survey methods, but the overall cost of the Web CGI survey also decreases less steeply as sample size increases. Indeed, at a fixed sample size the Web CGI survey is described as more expensive than the mail version of the same survey. This is due to the substantial amount of time spent by programmers and Web designers in building a seamless Web survey site. In addition, there is a difficult-to-calculate labor cost for maintaining computer networks (e.g., administration and hardware maintenance).
Representativeness and response rate. In most of the work examining e-mail surveys, concerns are voiced about sample representativeness (e.g., Dillman, 2000; Schaefer & Dillman, 1998; Swoboda et al., 1997; Tse, 1998). This is a legitimate concern, especially considering that many survey populations are geographically and demographically diverse. Specifically, Tse (1998) expresses concern that e-mail sampling is necessarily limited to e-mail users. Mehta and Sivadas (1995) express concern that e-mail respondents over-represent the middle-to-upper class. Schmidt (1997) points out that the population of Web users is biased toward young males of above-average socio-economic and educational status, which is a source of concern with respect to the use of Web forms.
These concerns, although legitimate, do not always receive the attention they deserve. For example, published research has used e-mail addresses obtained from an Internet newsgroup to generalize to the population at large (Swoboda et al., 1997). Even if such studies demonstrate that overall population attitudes and demographics are adequately represented by such a sampling method, this argument is not sufficient to defend against the ecological fallacy. Bachmann and Elfrink (1996) make this point as they argue that it is unwise to generalize from studies using such restricted sampling frames.
In addition to an appropriately random and representative sample of the target population, survey response rates are a major concern for the validity of survey results. Here, electronic methods may offer some advantages. For example, 41% of Swoboda et al.'s (1997) electronic survey respondents said that they would not have completed a telephone interview on the survey topic. This suggests that when researchers implement a multi-mode survey, they may be able to generate responses from a greater range of individuals and boost response rates. Since e-mail survey respondents appear to be less likely to respond to surveys in other modes, researchers may be able to access those respondents only through e-mail-and we might thus assume that mixed mode surveys can provide wider sample coverage.
While overall response rates for e-mail surveys are known to be somewhat lower than for paper-and-pencil surveys (Anderson & Gansneder, 1995; Kittleson, 1995), some e-mail surveys perform well beyond expectations. Indeed, it has been shown that e-mail survey response rates can be as high as 70%. Some attribute this success to a group cohesiveness effect sometimes inherent to e-mail sampling frames (similar to organizational sampling frames) (Sproull, 1986; Kiesler & Sproull, 1986). For instance, an e-mail survey of AT&T employees had a 60% response rate compared to a 38% response for an identical mail survey (Parker, 1992). An e-mail survey of Lotus Development Corporation employees achieved a 56% response rate (Bachmann & Elfrink, 1996; Schaefer & Dillman, 1998). Kiesler and Sproull's (1986) e-mail survey of college students achieved a 67% response rate. And an online survey of 300 SCIENCEnet subscribers achieved a 76% response rate (Walsh, Kiesler, Sproull & Hesse, 1992). Mehta and Sivadas (1995) conclude that e-mail surveys with a pre-notice and follow-up prompts can generally achieve higher response rates than mail surveys, except when the mail survey includes a $1 incentive.
In contrast to the results noted above are a few studies describing low e-mail survey response rates. Schuldt and Totten (1994) reported a 19% electronic response rate compared to a 57% mail response. Couper, Blair, and Triplett (1997) achieved a 43% electronic response rate versus 71% from mail. And Swoboda et al. (1997) received a 21% response rate from a global e-mail survey.
Kittleson (1995) indicated that he could not achieve a satisfactory response rate even among active e-mail users, and argued that the US Postal Service is the best way of getting a reasonable response rate. He suggests several possible reasons for low e-mail response rates. First, e-mail is asynchronous: messages sit in a queue during a "waiting phase," and individuals can discard them very easily. Further, e-mail surveys do not physically show up on recipients' desks and thus are less likely to get the receiver's attention. And, perhaps most importantly, e-mail is not anonymous. Although current Listserv software supports functions that can make returned e-mails anonymous, anonymity prevents researchers from compiling an e-mail list for a second wave or for follow-up e-mail surveys.
Despite some sub-optimal results for e-mail response rates, recent trends in Internet demographics paint a positive future for the use of this channel in survey work. Drawing on Internet demographics released by CommerceNet and Media Metrix, McPhee and Lieb (1999) report that the female share of the Web population increased from 30% in 1995 to 46% in 1999. This normalization of the gender ratio on the Web is of critical importance. Media Metrix also reports similar normalization in terms of age: an older generation is increasingly connected to the Net. As of December 1999, 20% of the online population was between the ages of 45 and 64, a 1.2% increase from the previous year (during which the representation of other age groups held steady or declined slightly) (Media Metrix, 2000). While rapidly changing Internet demographics may provide better representativeness for online surveys, and some researchers even report nearly complete interchangeability (e.g., Bruzzone, 1999), biases in online survey respondents' socio-political and cultural preferences remain to be explored.
Mixed mode and multiple contacts. Schaefer (1998) argues that e-mail can be a cheap and fast method for pre-testing. Researchers can start with e-mail and move to progressively more expensive methods for non-respondents until an acceptable response level is reached. Others also report successfully overcoming population coverage errors by using mixed mode methods. In fact, mixed mode approaches might increase respondents' motivation because people may appreciate being able to choose their response mode (Dillman, 2000; Schaefer & Dillman, 1998).
As is well known, with traditional survey methods multiple contacts improve response rates. Similarly, e-mail survey response rates may only reach 25% to 30% without follow-up e-mail (Kittleson, 1995). Many researchers have obtained relatively good response rates by using multiple e-mail contacts. For example, Smith (1997) had a 5.3% higher response rate with e-mail when using multiple contacts. Mehta and Sivadas (1995) had a higher response rate with four contacts, and Schaefer and Dillman (1998) received more responses by increasing contact frequency.
However, other researchers report mixed results on the effects of third or fourth contacts. For example, Kittleson (1997) noted that the second follow-up e-mail doubled the response rate but the third and fourth e-mails had only marginal effects. Findings from traditional mail point the other way: Heberlein and Baumgartner's (1978) analysis showed that each follow-up increased response rates, and Isaac and Michael (1990) also showed increases from third and fourth contacts.
Researchers have also been concerned about the timing of follow-up e-mail. Anderson and Gansneder (1995) followed Dillman's (1978) dictum that mail follow-ups should be sent at one, three, and seven weeks from the initial mailing date. However, they argue that, considering the much faster delivery speed of e-mail, researchers should send follow-up e-mail one week earlier than recommended for traditional mail surveys. In addition to timing issues, Dillman (1991) reports the effect of personalization: a personalized letter contains the specific subject's name, reinforcing the fact that each subject is important to the survey. In the case of e-mail, Schaefer and Dillman (1998) likewise recommended against the use of a listserver, because such lists are impersonal and susceptible to responses accidentally sent to the entire list. Schaefer and Dillman also report that for an e-mail survey, an e-mail pre-notice is more effective than a regular mail pre-notice.
Quality of response data. Variation of data quality among survey modes is an issue for both the electronic survey and the multi-mode approach. Some researchers provide evidence that the quality of e-mail survey data differs somewhat from paper-and-pencil data, specifically that e-mail surveys have more non-response items (Bachmann & Elfrink, 1996; Sproull, 1986). But other researchers argue that there is minimal difference between the approaches (King & Miles, 1995; Tse, 1998), and that e-mail methods generate fewer non-response items than a paper-and-pencil version does (Schaefer & Dillman, 1998).
However, when it comes to the quality of open-ended responses, there is general consensus. A number of researchers (with the notable exception of Kiesler & Sproull, 1986) have reported that respondents write lengthier and more self-disclosing comments on e-mail open-ended questionnaires than they do on mail survey questionnaires (Bachmann & Elfrink, 1996; Locke & Gilbert, 1995; Schaefer & Dillman, 1998; Sproull, 1986). For example, Schaefer (1998) attained a four-fold increase in the length of open-ended responses using electronic methods, and Locke and Gilbert's (1995) study showed greater self-disclosure in electronic returns. This might be due to the speed of typing over handwriting (Bachmann & Elfrink, 1996), but no study has carefully investigated this question.
Another interesting issue of response quality involves the social desirability effect. Here again there is some disagreement. Some researchers report that computerized surveys increase socially desirable answers and reduce respondents' self-disclosure (Davis & Cowles, 1989; Lautenschlager & Flaherty, 1990; Schuldberg, 1988). But other researchers claim that the computerized survey produces less socially desirable responses on closed-ended questionnaires (Kiesler & Sproull, 1986; Sproull, 1986). And, furthermore, some researchers propose that computerized surveys can induce more interest and greater awareness in respondents (Booth-Kewley et al., 1992; Kiesler & Sproull, 1986; Kiesler, Siegel & McGuire, 1984; Kiesler, Zubrow, Moses & Geller, 1985; Martin & Nagao, 1989). For example, Kiesler and Sproull (1986) explained that electronic survey respondents are more likely to be self-absorbed and uninhibited when they complete a survey by computer, and may concentrate more on the questionnaire.
Response speed and geographical advantage. Electronic surveys provide a faster reaction time than mail surveys. Many studies have reported that most of their e-mail responses arrive within two to three days following the initial e-mail contact (Bachmann & Elfrink, 1996; Kittleson, 1995; Mehta & Sivadas, 1995; Sproull, 1986; Schaefer & Dillman, 1998). In a mixed-mode project, Schaefer and Dillman (1998) reported that e-mail responses arrived before the first completed paper surveys were returned.
Some researchers have implemented the e-mail survey on a global scale (Kiesler & Sproull, 1986; Parker, 1992; Sproull, 1986). For example, Swoboda et al. (1997) performed a world-wide e-mail survey. While achieving only a 20% response rate, they did receive responses from all parts of the world (90% of them within four days) and demonstrated that English-language e-mail surveys can easily overcome national barriers. Further, e-mail users from developing nations can access e-mail as fast as those in developed nations. If the target population includes e-mail users living in remote places, e-mail is clearly the best communication method to gather data quickly.
Ethical concerns. E-mail research raises many ethical concerns because unsolicited e-mail invades a person's private space. Sending too many e-mail messages will bother some people. Unsolicited e-mails are often considered rude, and senders of such e-mail may be seen as lacking an appreciation of "netiquette" (Swoboda et al., 1997). Also, sending a long e-mail survey may be inconsiderate when it could incur costs on the receiver's end (e.g., downloading charges). Increasingly, market research is being done with opt-in e-mail lists provided by e-mail list brokers. Such lists may be convenient and subject-specific, but the researcher should carefully weigh their advantages against the potential invasion of the e-mail user's privacy. [2]
In fact, abuse of the e-mail survey may paradoxically damage the e-mail survey environment. For instance, Parker's (1992) successful e-mail survey response rate was attributed to the novelty of e-mail. He and others have ominously predicted that the increase in junk e-mail will reduce response rates, just as mail and telephone response rates have been similarly affected (Dillman, 2000; Parker, 1992; Schaefer & Dillman, 1998; Tse, 1998).
Technical problems. All of the above discussion is based on the assumption that the electronic communication technology works seamlessly. However, the real world is often uncooperative, and there are some technical pitfalls that survey researchers need to be aware of when they use electronic survey methods.
One potential problem for electronic surveys is that of multiple submissions (Schmidt, 1997). This may not be a great problem for the casual e-mail survey, but if the researcher designs the survey with an anonymous response function, it is almost impossible to detect multiple submissions. The problem is even more complex with the Web survey: researchers may assume that Web site visits from the same IP address come from a single person, but in reality that may not be true (Smith, 1997). Researchers can sometimes counter this by using cookies or by assigning a unique identity to each respondent.
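To illustrate the unique-identity approach, here is a minimal sketch (in Python, purely for illustration; the article describes no implementation, and all names here are hypothetical). Each sampled respondent receives a single-use token, so a duplicate Web submission can be flagged even when the response content itself remains anonymous.

    import secrets

    class SurveyTokens:
        """Issue one single-use token per sampled respondent so that
        duplicate Web submissions can be detected; the token-to-identity
        mapping can be discarded if responses must remain anonymous."""

        def __init__(self, n_respondents):
            # One unguessable token per respondent, initially unused.
            self.used = {secrets.token_urlsafe(8): False
                         for _ in range(n_respondents)}

        def issue(self):
            """Tokens to embed in each respondent's personalized survey URL."""
            return list(self.used)

        def accept(self, token):
            """Accept a submission only the first time its token appears."""
            if self.used.get(token) is False:
                self.used[token] = True
                return True
            return False  # unknown token or a repeat submission

A second submission carrying the same token returns False and can be excluded from the dataset.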
Finally, non-delivered e-mail is a concern. Swoboda et al. (1997) point out that non-delivered e-mail is not always brought to the sender's attention (in as many as 28% of cases). However, researchers can still get the most out of returned undelivered e-mails if they find alternative methods to reach those individuals (Schaefer & Dillman, 1998).
Special aspects of Web surveys. While many of the above-mentioned concerns also apply in this domain, the development of the Web as a survey research tool offers a number of advantages. A Web survey can filter questions automatically, skipping items made irrelevant by earlier answers. In addition to making the survey experience smoother for the respondent, researchers also realize benefits from this technology. For example, Stanton (1998) reports obtaining less missing data from a Web survey since the program can skip irrelevant questions, much in the same way a CATI system can. In this sense, the Web survey can also lessen the respondent's cognitive load to some degree. However, Beebe et al. (1997) caution that automatic filtering may be problematic for multi-mode surveys because different layouts are presented across modes.
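The automatic filtering described here amounts to a routing table from answers to next questions. A minimal sketch, with hypothetical question ids and skip rules (not the instrument used in this study):

    # Default question order and skip rules; ids and rules are illustrative.
    ORDER = {"q1": "q2", "q2": "q3", "q3": None}
    SKIP = {("q1", "no"): "q3"}  # a 'no' on q1 makes q2 irrelevant

    def next_question(current, answer):
        """Return the next question id, applying automatic filtering."""
        return SKIP.get((current, answer), ORDER[current])

    # A respondent answering 'no' to q1 is routed straight to q3, so the
    # irrelevant item is never shown (and never left as missing data).
    assert next_question("q1", "no") == "q3"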
While good graphics have long been held as important in the paper survey, and e-mail surveys have suffered on this point, it is the Web survey that can truly approach (or surpass) the graphic sophistication of the well-designed paper survey. Three issues stand out. First, researchers can present color images and text, as well as cinematic and interactive images. For example, Bishop's (1997) Web survey involving landscape picture perception used color images, demonstrating one of the advantages of the graphically enabled Web survey. Second, Stanton (1998) points out that the Web survey has the advantage of being stricter in how it allows answers. In traditional mail and e-mail surveys, respondents can write or type whatever they want in the margins of the paper or in the e-mail reply; the Web survey typically does not allow this freedom (other than in the open-ended question screen). Third, Schmidt (1997) points out that the Web survey is dynamic. For instance, a Web site can provide statistical results of the survey as they accumulate (even on a daily or hourly basis), or researchers can design the site to give feedback on respondents' answers.
Taken together, these recent developments in electronic survey methods clearly offer a wealth of potential advantages, as well as a new host of things for survey researchers to add to their list of worries. In addition to the issues we have reviewed above, we also feel that attention needs to be given to the question of differences in respondents' characteristics across response modes, and to the question of whether or not a multi-mode approach enhances representativeness (and if so, at what cost). We turn to these questions now.
Methods

The survey we report here was set up as both a traditional paper mailer and as an e-mail survey; in addition, respondents were able to complete the survey on a Web site if they preferred. Survey mailings were timed so as to not favor one means of response over another; all options were activated at about the same time. The sample consisted of a random selection of 360 of the approximately 900 active members of the NASW who listed e-mail addresses in the 1998 membership directory (selected by a simple random start and skip method). Advance postal letters were sent on March 22, 1999, and data collection was completed in mid-May, 1999. In all, subjects were contacted four times. The first contact was the pre-notice mail (4/9/99). Second, we sent a paper survey that also contained a hyperlink to the Web survey (4/16/99). Third, subjects received the e-mail version of the survey, which also contained the Web survey URL as a hotlink (4/22/99); we timed our mailings so that the paper questionnaires would arrive just before the e-mails. The fourth contact was a follow-up post card (4/23/99). We scheduled mailings and e-mailings carefully, based on the literature reviewed above, and the method proved successful: no further mailings were needed after the fourth contact because the response was rapid and robust. An overall (adjusted) response rate of 72% was achieved, with 230 completed questionnaires returned over a period of about one month (162 paper, 33 e-mail, and 35 Web).
As mentioned above, we asked a wide range of questions. But a set of the questions seemed to be especially relevant to the issue of paper versus electronic data collection. First, we asked about volume of e-mail use: "Approximately how many e-mail messages do you send and receive in a typical day?" We also asked about frequency of professional contact with scientists: "In a typical month, how many scientists do you communicate with concerning science stories?" We also asked if the respondent was a freelancer or a staff person of some variety, the year in which the respondent joined NASW, and level of education.
Additionally, we asked several sets of questions that constituted five indices of respondents' orientations toward professional use of the Web. We describe these indices next. [3]
Individual Connectedness. In our conceptualization of this variable, we combined standardized scores on five diffusion-based questions that roughly capture the degree to which the individual is plugged in or networked, what Rogers (1995) calls cosmopolitanism: frequency of contact with other NASW members, frequency of contact with scientists, frequency of travel, number of media outlets monitored, and frequency of being asked for professional advice. This characteristic of the individual is considered to be a key component of the person's innovativeness, and is also directly relevant to communication patterns that may be associated with orientation toward the Web. Unfortunately, this five-item index was the weakest of the set, with an alpha reliability of only .55 (and no improvement offered by reconfiguration). Nonetheless, results associated with this variable were interesting, and are offered provisionally.
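For readers who want the mechanics, here is a minimal sketch of how such an index can be built, assuming a pandas DataFrame with one (hypothetically named) column per item: items are standardized, averaged into a composite, and checked with Cronbach's alpha.

    import pandas as pd

    def cronbach_alpha(items):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of sum)."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    def composite_index(items):
        """Average the standardized (z-scored) items into one index."""
        z = (items - items.mean()) / items.std(ddof=1)
        return z.mean(axis=1)

    # e.g., the five connectedness items (column names are placeholders):
    # connect = df[["contact_nasw", "contact_sci", "travel",
    #               "outlets", "advice"]]
    # cronbach_alpha(connect)              # reported as .55 in the text
    # df["connectedness"] = composite_index(connect)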
Social E-Mail. Following previous work (Steinfield, 1986), we also asked about the prevalence of social use of e-mail. This index, with an alpha of .79, consists of six questions about frequency of various e-mail uses (1 = never to 7 = five or more times per day): take a break, fill time, keep in touch, social entertainment, organize social events, gather hobby-related information.
Task E-Mail. Also following Steinfield, and with the same response options, a set of eight questions was asked to measure task use of e-mail: finding sources for stories, scheduling meetings, making appointments, doing research on story topics, monitoring stories under submission, keeping a record of communications, following professional trends, and getting feedback on ideas. This index has a .77 reliability coefficient.
Trust of Web information. We also wanted to tap into trust of Web-based information. With responses on a 1-5 trust-distrust scale, we asked how much respondents trusted Web-based information from universities, industry, citizen groups, laypersons, individual scientists, government, and non-profit special interests. This index has a .71 reliability coefficient.
Favorability toward the Web. Finally, a set of questions was asked to evaluate the respondent's "favorability" toward the Web as a professional tool (the dependent variable in our analysis). These eight questions (on a 1-5 agree-disagree scale) were conceptualized to capture characteristics of innovations, again as provided for in the diffusion literature: "Using the Web puts me at a professional advantage; Using the Web has not fundamentally changed how I work; I am very experienced with the Web; The Web is a great tool to understand complex scientific details that are new to me; I was quick to begin using the Web in my work; Other science writers regularly use the Web in their work; If they were aware, other science writers would approve of my use of information from a Web site in a story; My editor(s) or superior(s) are comfortable with information from the Web being used in a story." This index has a .72 reliability coefficient.
Our analysis followed a very simple strategy. First, we asked whether there were significant differences in the mean values of these variables across the three modes of response (as well as across the paper versus electronic contrast). Second, we asked whether these variables performed differently in the paper and electronic groups with respect to their ability to predict our dependent variable: favorability toward the Web.
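In code, the first step might look like the sketch below (Python with scipy; the `mode` column and variable names are assumptions for illustration, not our actual files): a one-way ANOVA across the three response modes plus a paper-versus-electronic t-test for a given variable.

    import pandas as pd
    from scipy import stats

    def compare_modes(df, var):
        """One-way ANOVA across paper/e-mail/Web groups, plus a
        paper vs. electronic t-test, for a single survey variable."""
        groups = [g[var].dropna() for _, g in df.groupby("mode")]
        f_stat, p_anova = stats.f_oneway(*groups)
        paper = df.loc[df["mode"] == "paper", var].dropna()
        electronic = df.loc[df["mode"] != "paper", var].dropna()
        t_stat, p_t = stats.ttest_ind(paper, electronic)
        return {"F": f_stat, "p_anova": p_anova, "t": t_stat, "p_t": p_t}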
Before looking at response differences, we note that an interesting effect was observed in the timing of the e-mail and Web responses. Over 80% of the electronic responses were collected within three days after the initial e-mail was sent out (Figure 1). The largest number of responses occurred on the day we sent the e-mail. However, some Web survey respondents completed the survey before our e-mail was delivered. Respondents were able to obtain the Web site URL from the postal survey questionnaire, which was intended to arrive before the e-mail version.
While we allowed for this, we were still surprised. Although the Web site was active, we did not expect respondents to access it before receiving the e-mail with the hotlinked URL. Clearly, some respondents were willing to complete the Web survey by typing in the URL from the paper questionnaire. This is encouraging for the use of Web sites in survey work.
Turning to the analysis of response mode, Table 1 shows the differences in mean values across the groups, with ANOVAs and t-tests. Seven of the 10 variables presented significant ANOVAs, with similar results in the t-tests. Social use of e-mail, trust in the Web, and years of tenure in the NASW did not vary across the three response groups (although at an alpha of .10, social use of e-mail does differ between electronic and paper responders, with electronic responders reporting more social use).
But we did see some interesting contrasts. Electronic responders scored higher on the connectedness index, as well as on the associated measures of education and number of scientists contacted per month. They also reported a larger volume of e-mail, and higher levels of task-related use of e-mail. There was little difference in the level of trust of Web-based information, but the Web respondents were more favorable toward the Web as a professional tool (perhaps the voice of experience). Employment status was an important difference in the groups, with Web respondents more likely to be staff persons rather than freelancers.
But did these contrasts make any difference in our analysis of favorability toward professional use of the Web? First, favorability was regressed on the other nine variables. An R-square of .24 was achieved, with these variables presenting betas significant at p < .05: task use of e-mail (beta = .23), trust in Web information (-.29), education (.11), year of joining NASW (.13), and freelance status (0 = staff; -.14). Then a set of indicator variables for return mode was included in a second block. None of the three indicators (in any paired configuration) was significant, and together they incremented R-square by only .01 (p = .15).
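In modern terms, this two-block test can be sketched as follows (Python with statsmodels; all column names are placeholders). The F-test on the nested models asks whether the mode indicators add explanatory power beyond the nine substantive predictors.

    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    PREDICTORS = ("task_email + social_email + trust_web + connectedness"
                  " + email_volume + scientists + education + year_joined"
                  " + freelance")  # placeholder column names

    def mode_increment_test(df):
        """Block 1: nine predictors; block 2: adds dummy-coded return mode.
        Returns the R-square increment and the nested-model F-test."""
        base = smf.ols("favorability ~ " + PREDICTORS, data=df).fit()
        full = smf.ols("favorability ~ " + PREDICTORS + " + C(mode)",
                       data=df).fit()
        return full.rsquared - base.rsquared, anova_lm(base, full)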
Additional variable-by-variable evidence that our substantive analysis was not affected by multiple response modes is shown in Table 2. For this analysis, only the paper versus electronic distinction was explored (due to the number of cases in each group). Using the combined dataset, we tested for differences in slopes and intercepts across the two models. This analysis of covariance uses a dummy variable for the return type and an interaction term between the dummy and each independent variable (Agresti & Finlay, 1986, pp. 455-460). Again, the dependent variable was favorability toward the Web, regressed on the set of nine predictor variables. First, we saw nearly identical R-squares in all equations. Further, analysis of the differences in the individual slopes and intercepts between the two regressions indicated that only one beta was statistically different: we observed a significant difference (p < .05) between paper and electronic returns in the slope for Web favorability regressed on the number of scientists contacted.
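The covariance analysis admits a similar sketch: a 0/1 dummy for electronic return enters with an interaction against every predictor, and a significant interaction coefficient flags a slope that differs between the two return modes. Here `predictors` is the same placeholder string used in the previous sketch.

    import statsmodels.formula.api as smf

    def slope_difference_test(df, predictors):
        """ANCOVA-style test following Agresti & Finlay: the formula
        (predictors) * electronic expands to all main effects plus one
        interaction term per predictor; a significant ':electronic' beta
        marks a slope difference between paper and electronic returns."""
        model = smf.ols("favorability ~ (" + predictors + ") * electronic",
                        data=df).fit()
        return model.summary()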
How did our experience with the administration of a multimode survey compare to those experiences reported in the literature? There are several points on which we can comment.
Cost comparisons in our project are somewhat difficult to make. Because we did a full mailing as well as making use of electronic delivery, the overall costs of the survey were not reduced but rather inflated. A considerable time investment was required to establish and administer the Web site as well as the e-mail. But as many university researchers do, we invested our own labor in this effort. Further, as university researchers we (and others) enjoy Internet support privileges that cover many of the otherwise expensive technical demands of the electronic survey.
Sampling presented less of an issue in terms of representativeness in this survey compared to those of a more general population. Virtually all of the members in our sampling frame listed both postal and electronic addresses. One of the benefits that we clearly enjoyed was a boost in the response rate, and a quicker response. A third of our responses were electronic. Had many of those respondents balked at returning the postal survey we would have had a substantially reduced response rate, or the expense of additional mailings.
We believe that our contact strategy was especially effective, with the timing of all modes coordinated. Providing the Web site URL in the advance mailing is clearly recommended, as a number of the respondents immediately completed the survey that way. Without question, this survey population had a high degree of interest in the topic of the survey, and there was likely some novelty or "techie" effect that played in our favor. In terms of the quality of the responses, there were few observable differences between the modes (and it was an added benefit that the Web responses did not need to be key-entered into the dataset). To a minor degree, we did see that some of the e-mail returns came back from an address other than the one to which they were sent; for a few such returns, some follow-up effort was required for accurate tracking of respondents. Finally, while we did not ask for any open-ended responses, it was the e-mail respondents who nonetheless provided some.
In more substantive terms, the conditions associated with the use of electronic survey return modes (high connectedness, more education, a greater number of scientists contacted per month, a greater volume of e-mail use, and more task-related e-mail) suggest that the group of respondents who returned surveys electronically may have had different characteristics than the postal survey respondent group.
It was not a surprise that respondent characteristics differed in the variables that are explicitly and implicitly related to the use of communication technology. But this result does raise concerns that the response rate and representativeness of the electronic survey group may be skewed relative to the target population. Many of the articles we reviewed used only the electronic survey mode. Our results indicate a significant relationship between electronic survey response mode and high use of electronic communication technology overall. For this reason, researchers must still be concerned about the social and economic representativeness of online samples, at least until online participation becomes more ubiquitous.
In a larger sense, it is critically important that work continue on the representativeness problem as survey researchers migrate to online data collection. In the meantime, it may be advisable to use all three survey modes when possible, especially when the target population is a large public and all three modes of data collection are feasible. Nonetheless, the electronic-only survey is advisable when resources are limited and the target population suits an electronic survey. Considering the potential of the electronic survey method, it is only a matter of time before it becomes the major survey method.
In summary, we believe that our analysis offers two lessons. First, a caution. We detect a number of potentially important differences in the response characteristics of these three groups. Researchers using multi-mode survey techniques should keep in mind that subtle effects might be at play in their analyses. Second, an encouragement. We do not observe significant influences of survey mode in our substantive analyses. We believe, at least in this case, that the differences detected in the response groups indicate that using multi-mode survey techniques improved the representativeness of the sample, without biasing other results.
1. Watt differentiates between a Web CGI survey and a "normal" Web survey. A normal Web survey is implemented with software that is typically not programmer-friendly, but it does not require a programmer's labor for CGI script writing. The advantage of the Web CGI survey is freedom of expression in survey format, including sound, graphics, video, etc.

2. Some researchers even provide computers, or Web TV, to gather Internet usage data in conjunction with the online survey data. This method can create a major privacy invasion and needs careful ethical evaluation.
3. A discussion of the rationale, strengths, and weaknesses of these indices is elaborated in Trumbo et al. (1999).
References

Agresti, A., & Finlay, B. (1986). Statistical methods for the social sciences. San Francisco, CA: Dellen Publishing Co.
Anderson, S. E., & Gansneder, B. M. (1995). Using electronic mail surveys and computer monitored data for studying computer mediated communication systems. Social Science Computer Review, 13(1), 33-46.
Bachmann, D., & Elfrink, J. (1996). Tracking the progress of e-mail versus snail-mail. Marketing Research, 8(2), 31-35.
Beebe, T. J., Mika, T., Harrison, P. A., Anderson, R. E., & Fulkerson, J. A. (1997). Computerized school surveys. Social Science Computer Review, 15(2), 159-169.
Bishop, I. D. (1997). Testing perceived landscape color-difference using the Internet. Landscape and Urban Planning, 37 (3-4):187-196.
Booth-Kewley, S., Edwards, J. E., & Rosenfeld, P. (1992). Impression management, social desirability, and computer administration of attitude questionnaires: Does the computer make a difference? Journal of Applied Psychology, 77, 562-566.
Bruzzone, D. (1999). The top 10 insights about the validity of conducting research online, [World Wide Web]. Advertising Research Foundation. Available: http://www.arf.amic.com/Webpages/onlineresearch99/LA_99_top10.htm [2000, May 25].
Buchanan, T., & Smith, J. L. (1999). Using the Internet for psychological research: Personality testing on the World Wide Web. The British Journal of Psychology, 90(Feb), 125-144.
Couper, M. P., Blair, J., & Triplett, T. (1997). A Comparison of mail and e-mail for a survey of employees in federal statistical agencies. Paper presented at the American Association for Public Opinion Research, Norfolk, VA.
Davis, C., & Cowles, M. (1989). Automated psychological testing: Method of administration, need for approval, and measures of anxiety. Educational and Psychological Measurement, 49, 311-320.
Dickson, J. P., & MacLachlan, D. L. (1996). Fax surveys: Return patterns and comparison with mail surveys. Journal of Marketing Research, 33, 108-113.
Dillman, D. A. (1978). Mail and telephone surveys: The total design method. New York: Willey-Interscience.
Dillman, D. A. (1991). The design and administration of mail surveys. Annual Review of Sociology, 17, 225-249.
Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method. New York: John Wiley & Sons.
Dillman, D. A., Christenson, J. A., Carpenter, E. H., & Brooks, R. (1974). Increasing mail questionnaire response: A four-state comparison. American Sociological Review, 39, 744-756.
Dillman, D. A., Sangster, R. L., Tarnai, J., & Rockwood, T. H. (1996). Understanding differences in people's answers to telephone and mail surveys. New Directions for Evaluation, 70, 45-61.
Dillman, D. A., & Tarnai, J. (1998). Administrative issues in mixed mode surveys. In R. M. Groves (Ed.), Telephone survey methodology (pp. 509-528). New York: Wiley.
El-Shinnawy, M., & Markus, M. L. (1997). The poverty of media richness theory: Explaining people's choice of electronic vs. voice mail. International Journal of Human-Computer Studies, 46, 443-467.
Fox, R., Crask, M., & Jonghoon, K. (1988). Mail survey response rates. Public Opinion Quarterly, 52, 467-491.
Gotcher, M. J., & Kanervo, E. W. (1997). Perceptions and uses of electronic mail: A function of rhetorical style. Social Science Computer Review, 15(2), 145-158.
Heberlein, T. A., & Baumgartner, R. (1978). Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. American Sociological Review, 43, 447-462.
Hewson, C. M., Laurent, D., & Vogel, C. M. (1996). Proper methodologies for psychological and sociological studies conducted via the Internet. Behavior Research Methods, Instruments and Computers, 28, 186-191.
Isaac, S., & Michael, W. (1990). Handbook in research and evaluation: For education and the behavioral sciences. (Second ed.). San Diego, CA: EdiTS Publishers.
Kiesler, S., Siegel, J., & McGuire, T. (1984). Social psychological aspects of computer-mediated communications. American Psychologist, 39, 1123-1134.
Kiesler, S., & Sproull, L. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402-413.
Kiesler, S., Zubrow, D., Moses, A. M., & Geller, V. (1985). Affect in computer-mediated communication. Human-Computer Interaction, 1.
King, W., & Miles, E. (1995). A quasi-experimental assessment of the effect of computerizing noncognitive paper-and-pencil measurements: A test of measurement equivalence. Journal of Applied Psychology, 80, 643-651.
Kittleson, M. J. (1995). An assessment of the response rate via the Postal Service and e-mail. Health Values, 19(2), 27-39.
Kittleson, M. J. (1997). Determining effective follow-up of e-mail surveys. American Journal of Health Behavior, 21(3), 193-196.
Krantz, J. H., Ballard, J., & Scher, J. (1997). Comparing the results of laboratory and World Wide Web samples of the determinants of female attractiveness. Behavior Research Methods, Instruments and Computers, 29, 264-269.
Kraut, A. I. (1996). An overview of organizational surveys. In A. I. Kraut (Ed.), Organizational surveys (pp. 1-17). San Francisco: Jossey-Bass.
Krysan, M., Schuman, H., Scott, L. J., & Beatty, P. (1994). Response rates and response content in mail versus face-to-face surveys. Public Opinion Quarterly, 58, 381-399.
Lantz, A. (1998). Heavy users of electronic mail. International Journal of Human Computer Interaction, 10(4), 361-379.
Lautenschlager, G. J., & Flaherty, V. L. (1990). Computer administration of questions: More desirable or more social desirability? Journal of Applied Psychology, 75, 310-314.
Locke, S. D., & Gilbert, B. O. (1995). Method of psychological assessment, self-disclosure, and experiential differences: A study of computer, questionnaire, and interview assessment formats. Journal of Social Behavior and Personality, 10, 255-263.
Martin, C. L., & Nagao, D. H. (1989). Some effects of computerized interviewing on job applicant responses. Journal of Applied Psychology, 74, 72-80.
McPhee, L., & Lieb, J. (1999). Internet users top 92 million in the U.S. and Canada (#99-26). Cupertino, CA: CommerceNet.
Mead, A., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114, 449-458.
Media Metrix. (2000). U.S. baby boomers and seniors are fastest growing Internet demographic group, [World Wide Web]. Media Metrix. Available: http://www.mediametrix.com/usa/press/releases/20000404.jsp [2000, June 28].
Mehta, R., & Sivadas, E. (1995). Comparing response rates and response content in mail versus electronic mail surveys. Journal of the Market Research Society, 37(4), 429-439.
Parker, L. (1992). Collecting data the e-mail way. Training and Development, 52-54.
Rogers, E. M. (1983; 1995). Diffusion of innovations. New York: The Free Press.
Schaefer, D. R., & Dillman, D. A. (1998). Development of standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62(3), 378-397.
Schiano, D. J. (1997). Convergent methodologies in cyber-psychology: A case study. Behavior Research Methods, Instruments and Computers, 29, 270-273.
Schmidt, W. C. (1997). World-Wide Web survey research: Benefits, potential problems, and solutions. Behavior Research Methods, Instruments and Computers, 29, 274-279.
Schuldberg, D. (1988). The MMPI is less sensitive to the automated testing format than it is to repeated testing: Item and scale effects. Computers in Human Behavior, 4, 285-298.
Schuldt, B. A., & Totten, J. W. (1994). Electronic mail vs. mail survey response rates. Marketing Research, 6(1), 36-39.
Scott, C. (1961). Research on mail surveys. Journal of the Royal Statistical Society, 124, 143-205.
Smith, M. A., & Leigh, B. (1997). Virtual subjects: Using the Internet as an alternative source of subjects and research environment. Behavior Research Methods, Instruments, and Computers, 29, 496-505.
Sproull, L. S. (1986). Using electronic mail for data collection in organizational research. Academy of Management Journal, 29, 159-169.
Stanton, J. M. (1998). An empirical assessment of data collection using the Internet. Personnel Psychology, 51(3), 709-726.
Steinfield, C. W. (1986). Computer-mediated communication in an organizational setting: Explaining task-related and socioemotional uses. In M. L. McLaughlin (Ed.), Communication Yearbook 9 (pp. 777-804). Beverly Hills, CA: Sage.
Swoboda, S. J., Muehlberger, N., Weitkunat, R., & Schneeweiss, S. (1997). Internet surveys by direct mailing: An innovative way of collecting data. Social Science Computer Review, 15(3).
Torabi, M. (1991). Factors affecting response rate in mail survey questionnaires. Health Values, 15(5), 57-59.
Trumbo, C. W., Sprecker, K., Yun, G. W., Dumlao, R., & Duke, S. (1999). Use of e-mail and the Web by science writers: A longitudinal comparison, 1994-1999. Paper presented to the Association for Education in Journalism and Mass Communication, New Orleans (August).
Tse, A. (1994). A comparison of the effectiveness of mail and facsimile as survey media on response rate, speed and quality. Journal of the Market Research Society, 36, 349-355.
Tse, A. (1998). Comparing the response rate, response speed and response quality of two methods of sending questionnaires: E-mail vs. mail. Journal of the Market Research Society, 40(4), 353-361.
Walsh, J. P., Kiesler, S., Sproull, L. S., & Hesse, B. W. (1992). Self-selected and randomly selected respondents in a computer network survey. Public Opinion Quarterly, 56, 241-244.
Watt, J. H. (1999). Internet systems for evaluation research. In G. Gay & T. L. Bennington (Eds.), Information technologies in evaluation: Social, moral, epistemological, and practical implications (pp. 23-44). San Francisco: Jossey-Bass.
Webster, J., & Compeau, D. (1996). Computer-assisted versus paper-and-pencil administration of questionnaires. Behavior Research Methods, Instruments and Computers, 28, 567-576.
Williams, E. (1999). E-mail and the effect of future developments. First Monday, 4(8).
Witmer, D. F., Colman, R. W., & Katzman, S. L. (1999). From paper-and-pencil to screen-and-keyboard: Toward a methodology for survey research on the Internet. In S. Jones (Ed.), Doing internet research: Critical issues and methods for examining the net (pp. 145-161). Thousand Oaks, CA: Sage Publications, Inc.
About the Authors

Gi Woong Yun is pursuing a Ph.D. degree in the School of Journalism and Mass Communication at the University of Wisconsin-Madison. His research interests are media effects, Internet research methodology, and health communication. He is also working as a research assistant at the Center for Health Systems Research and Analysis.
Address: Room 1236 WARF BLD. 610 Walnut Street Madison, WI 53705
Craig W. Trumbo (Ph.D. 1997, University of Wisconsin-Madison) is an assistant professor in the department of Life Sciences Communication and a member of the Institute for Environmental Studies at the University of Wisconsin-Madison. His research interests include risk communication, media performance on issues involving science, and the social psychology of resource conservation.
Address: 440 Henry Mall, University of Wisconsin, Madison, WI 53706.
©Copyright 2000 Journal of Computer-Mediated Communication