JCMC 6 (4) July 2001
Credibility Assessments of Online Health Information: The Effects of Source Expertise and Knowledge of Content
Matthew S. Eastin
School of Journalism and Communication
The Ohio State University
Abstract

Millions of Americans use the Internet as a resource for information, with a large proportion seeking health information. Research indicates that medical professionals do not author much of the health information available on the Internet. This creates the possibility of false information, potentially leading ill people away from proper care. One way to begin addressing this problem is to assess perceptions of the credibility of information found online. A between-groups, 2 (message type) x 3 (source type) factorial design was tested by manipulating source expertise (high, moderate, low) and content knowledge (known and unknown). While findings did not indicate a significant interaction between source and content type, they did indicate an overall tendency to rate all information as relatively credible. In addition, results indicate that both knowledge of content and source expertise affect perceptions of online health information.
As media have become a primary source for public information, media credibility has received increased scrutiny, with the majority of research comparing television and newspapers (Gaziano & McGrath, 1986; Newhagen & Nass, 1988; Wanta & Hu, 1994; West, 1994). The trend in this group of studies is to find that television is considered more credible, and is used more as a source for information, than newspapers. While this body of literature explains credibility with regard to traditional media, research also needs to investigate how audiences determine the credibility of the mass media's newest source for health-related content, the Internet.
Today, traditional providers of news are subject to training as well as social and professional pressures to provide accurate and unbiased information, giving people a sense that what they see is credible. Conversely, the Internet's free and unregulated flow of information and information providers creates many possible hazards for those who seek and trust online information. In many instances those accessing information are unaware of: (1) who authored the material, (2) when the information was last updated, and (3) whether the information is accurate. As the Internet's information seekers and providers continue to increase, it becomes important that researchers gain an understanding of how this information is being perceived. Therefore, this study assessed two key factors concerning how people determine the credibility of online information: the relationship between topic knowledge, source expertise, and perceived credibility of the message. In this evaluation, the level of source expertise (i.e., high, moderate, and low) and content knowledge (i.e., an unknown topic and a known topic) were manipulated.
Assessing Credibility on the Internet
According to the Pew Research Center (2001), approximately 104 million American adults have access to the Internet. Of these, slightly more are female (51%) than male (49%), and users come from both high and low socioeconomic strata. As the Internet's audience continues to grow and subsequently mirror the general population, understanding how people use it to obtain medical information becomes more important to both users and providers.
When considering the Internet, one distinct feature pertaining to the flow of online information must be understood: unlike traditional media, the Internet has no governmental or ethical regulations controlling the majority of its available content. This unregulated flow of information presents a new problem to those seeking information, as more credible sources become harder to distinguish from less credible ones (Andie, 1997). Moreover, without knowing the exact URL of a given site, the amount of information returned by keyword searches can make finding a predetermined site difficult and increase the likelihood of encountering sites containing false information (Andie, 1997).
Presently, the medical community has become increasingly aware of and concerned about the credibility of health information available on the Internet (Pew Research Center, 2000; Wright, 1998). A growing number of patients are seeking medical advice everywhere from chat rooms to personal Web pages (Donald, Lindenberg, & Humphreys, 1998; Dudley et al., 1996). In an early attempt to assess medically-driven online activity, Donald et al. (1998) evaluated two medically-oriented Internet sites, MEDLINE and Grateful Med, which together logged 75 million searches during a yearlong evaluation. To gauge who was seeking information, a voluntary two-item general questionnaire was administered for one week; it indicated that 30 percent of respondents were students or members of the general public. More recently, the Pew Research Center (2000) reported that 55 percent of Internet users go online in search of medical information, and 21 million indicated that they had been influenced by health information they read online. Of those influenced, 70 percent said that online information affected their decision about how to treat an illness or condition, while 28 percent reported that it influenced their decision about whether or not to see a doctor.
These data, accompanied by the previously mentioned increases in online use, suggest that a large number of people use the Internet to gather medical information. Moreover, this activity brings several issues to the foreground: (1) Who are the people using the Internet for health information? (2) Can the information obtained online be trusted? (3) How do people seeking medical information online determine what sources are credible?
Addressing these issues, Culver, Gerr, and Frumkin (1997) evaluated messages retrieved from a medical chatroom discussion group. Of the 1,658 messages evaluated, 89 percent were authored by people without professional experience, and one-third were found to be inconsistent with conventional medical practices. Also analyzing chatroom content, Feldman (1998) documented the case of a chat room leader who pretended to have cancer in order to gain attention; after gathering information by viewing others' discussions, she began to pass it off as her own.
The medical community has tried to locate and rate Web-based health information sites, but with little success (Donald et al., 1998). One of the main obstacles confronting this process is the nature of Web publishing. Donald et al. (1998) note that many documents on the Web "lack basic information about the origin, authorship, or age of the material they provided" (p. 1304). Moreover, given the ease of creating and changing a Web page, together with the very large number of Web publishers, the task of monitoring all medical information becomes insurmountable. Thus, the task of assessing the credibility of information obtained online rests with the user. To understand how people perceive credibility on the Internet, it is first essential to understand how research on more traditional media (i.e., television and print) has explored the issue of credibility.
Early media credibility research can be traced back to Hovland and Weiss's (1951) work on communication and persuasion. They found that the "trustworthiness" of a source significantly affected acceptance of the message and changes in opinion; significantly related to trustworthiness were reactions to the "fairness" of the presentation and the "justifiability" of the conclusions. Since these original studies, many variables have been examined to assess source credibility (Adoni, Cohen, & Mane, 1984; Gaziano & McGrath, 1986; Meyer, 1988; Slater & Rouner, 1996). Perceived expertise, bias, fairness, truthfulness, accuracy, amount of use, depth or completeness of message, prior knowledge, and message quality have all emerged as components of credibility.
Recognizing the large number of indicators and dimensions used to assess the construct of credibility, West (1994) validated two frequently used credibility scales, those of Gaziano and McGrath (1986) and Meyer (1988). The five dimensions of the Meyer (1988) scale (fairness, bias, depth, accuracy, and trustworthiness) were found to be valid and reliable measures of credibility (alpha = .92; West, 1994, p. 163). These five indicators, as well as 12 others, appear in the Gaziano and McGrath (1986) scale. When deciding which items to include, it must be determined whether the credibility of the source or of the content is being measured, and whether credibility should be measured from the receiver's perspective or from the source's attributes (Rubin, Palmgreen, & Sypher, 1994). Accordingly, this study used items related to the content, measured from the receiver's perspective.
While these as well as other dimensions have been found to be significant components in understanding perceived credibility, other factors, such as age, education, amount of use, reliance, and medium, have also helped explain how people perceive credibility (Greenberg, 1966; Johnson & Kaye, 1998; Wanta & Hu, 1994). Although earlier studies indicated that television was a more credible information source than newspapers for most people, a more recent evaluation of media credibility indicated that people consider information obtained online as credible as television, radio, and magazine information, but not as credible as information in newspapers (Flanigan & Metzger, 2000).
Relatively speaking, the Internet has only recently become part of mainstream America. Thus, it is important to begin evaluating perceived credibility of the Internet by incorporating Internet-specific characteristics. To date, research on Internet credibility has focused on four areas: media comparisons, demographics, medium reliance, and message characteristics. For example, Brady (1996) and Johnson and Kaye (1998) found that information about political candidates on the Web was perceived to be just as credible as information on television.
Assessing demographics, Johnson and Kaye (1998) found gender to be the only variable significantly correlated with perceived credibility: females found the Web more credible and trustworthy than males did. Although not significantly, age and education were negatively correlated with perceived credibility of online newspapers, online candidate literature, and politically-oriented sites. Results also indicated that levels of medium reliance and use were positively correlated with perceived credibility; reliance on the Web for political information was a stronger predictor of perceived credibility than was amount of use.
In an effort to evaluate the dynamics of online source and content, researchers have attempted to assess whether individuals use heuristic cues when source credibility is limited or questioned within online chatrooms (Franke, 1996; Slater & Rouner, 1996). Results indicate that most such discussions are too brief or unusual to be considered usable dialogue, which suggests that actual Web pages provide a more productive platform from which to evaluate perceptions of message credibility.
More recently, Rieh and Belkin (1998) identified criteria used when evaluating online information. Specifically, using in-depth interviews with 14 scholars, they assessed perceptions of information quality and authority. For example, they found that: (1) institutional sites were seen as more credible than individual sites, and (2) accuracy of content was used to assess online information. Respondents used knowledge of citations within the content and the functionality of hyperlinks as cues to evaluate the information. Similar variables were indicated in a report focusing on assessing the quality of online health information (Agency for Health Care Policy and Research, 1999). In addition to source and link accuracy, that report also recommends that users consider peer evaluation, navigability, and feedback options (i.e., email, chat rooms, etc.).
While these Internet-based studies have helped to bridge the gap between traditional and online media research by offering insight into perceptions of the Internet, researchers have not addressed how people using the Internet to gather information assess the credibility of the content provided. Therefore, an area within the credibility literature that needs to be explored within an online context is content type.
Persuasion and Context
Persuasion scholars have shown how various elements of messages, such as language intensity, style, attractiveness, and quality, all affect message perceptions (Adoni et al., 1984; Chartprasert, 1993; Hamilton, 1998; Petty & Cacioppo, 1979; Slater & Rouner, 1996). In addition, knowledge of content has been identified as one of the more significant factors impacting the heuristic cues used when evaluating a message (Eagly & Chaiken, 1993; O'Keefe, 1990). Research on persuasive messages has found that knowledge mediates receivers' ability to evaluate messages critically (Eagly & Chaiken, 1993). When receivers are highly knowledgeable, their ability and motivation to process are greater, which heightens their critical interpretation.
In this context, the persuasion literature suggests that respondents processing heuristic cues establish expectancies about message validity. For example, Cozzens and Contractor (1987) examined how people evaluate message content when the subject matter is familiar. They demonstrated that active audience members evaluating a mediated message used outside understanding of the issue (i.e., knowledge from a personal source) to evaluate the believability of the message rather than of the source. This suggests that individuals assess the content of a message based on extrinsic information.
In addition to prior knowledge of content, limited knowledge of source competence and low involvement with the subject matter also cause respondents to seek message-inherent heuristic cues (i.e., presentation quality, language intensity or style, attractiveness, and subjectivity) apart from the source to evaluate the message. As with content knowledge, cues embedded in the message become characteristics with which to evaluate message validity. For example, as personal relevance increases or when knowledge of source credibility is limited, respondents become more motivated to process issue-relevant content (Eagly & Chaiken, 1993; Petty & Cacioppo, 1988). Together with the aforementioned findings, this underscores that while the source of a message is a commonly used attribute in assessing credibility, other content-driven variables can also affect message perceptions and thus should be considered.
This study builds on these findings by varying source expertise and content knowledge within an online environment. To gain needed understanding of how Internet users perceive message credibility, this study evaluated perceived credibility of online information, utilizing items found in Gaziano and McGrath (1986), Meyer (1988), and Johnson and Kaye (1998). As suggested, people publishing information online vary in expertise, and past credibility research suggests that expertise is a main predictor of perceived credibility (Newhagen & Nass, 1988; Slater & Rouner, 1996). Therefore, using types of sources commonly found on the Internet, this study manipulated expertise into three levels along a linear continuum: Level 1, a source rated high in expertise; Level 2, a moderately rated source; and Level 3, a source considered not at all an expert.
Many studies also suggest that content type can mediate source effects (Austin & Dong, 1994; Brady, 1996; Slater & Rouner, 1996). Therefore, borrowing from previous research (Cozzens & Contractor, 1987; Rieh & Belkin, 1998), this study manipulated: (1) information about one unknown topic; and (2) information about one known topic. The content-type manipulation helps to clarify the relative importance of Web page source and message content. Research suggests that when participants are knowledgeable about the subject matter, they are more apt to attend to the message rather than to the source (Adoni et al., 1984).
It therefore seems logical that when people have little knowledge of online content, they will perceive information attributed to a highly expert source as more credible than information from a less expert source. Likewise, information from a moderately expert source should be perceived as more credible than a source low in expertise. When people are highly knowledgeable about message content, the effects of source expertise will be mediated (i.e., attenuated). From this, it is hypothesized that:
H1 There will be a significant interaction between source expertise and knowledge of content. That is, for the topic where subject matter knowledge is low, information attributed to moderate and low expert sources will be perceived as less credible than identical information attributed to a highly expert source. For the topic where subject matter knowledge is high, source expertise will have no significant effect on perceived credibility.
Method

A total of 125 students from two introductory communication courses at a large midwestern university participated in a between-groups, 2 (message type) x 3 (source type) factorial experiment. Participants were randomly assigned to one of the six conditions. Upon assignment, participants were instructed to assess a health-related Web site they were told had been pre-selected from the Internet earlier that day (Appendix A). They were then told that this information would be used to write a report for class. While a student sample is convenient and non-representative, it was useful for this project because students represent the largest population of Internet users. However, it should be noted that the largest group of online health information seekers are adults (Pew Research Center, 2000).
Design and Procedure
Each participant viewed a Web site containing one of two types of information. Specifically, the Web-based information related to one of two topics: (1) an unknown health topic, syphilis, and (2) a known health topic, HIV. Using information obtained from the Centers for Disease Control and Prevention, a pretest (n = 78) identified twelve facts about syphilis that at least 90 percent of the population did not know, and twelve facts about HIV that at least 90 percent of the population were aware of. Prior research (Petty & Cacioppo, 1988) suggests that relevance can also affect message perception; therefore, a 10-point Likert-type item was used to assess the relevance of syphilis (M = 8.38) and HIV (M = 8.13) (t = .167, p = .868). These findings suggest that both topics were perceived as highly personally relevant.
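The relevance check above compares two sets of ratings with an independent-samples t test. As a rough sketch (the ratings and group sizes below are hypothetical illustrations, not the study's data), the statistic can be computed with the Python standard library alone:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic (robust to unequal variances).
    Returns (t, approximate degrees of freedom)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical 1-10 relevance ratings for the two topics
syphilis_relevance = [9, 8, 8, 9, 7, 9, 8]
hiv_relevance = [8, 9, 8, 7, 9, 8, 8]
t, df = welch_t(syphilis_relevance, hiv_relevance)
# t well under 2 indicates no reliable difference in perceived relevance
```

A t statistic near zero, as in the reported comparison (t = .167, p = .868), indicates the two topics did not differ in perceived relevance.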
Also manipulated was the putative author of each Web site. Using 10-point Likert-type items, a pretest using the HIV topic identified three significantly different sources consistent with those available online (F(2,171) = 1,261.93, p < .001). "Dr. William Blake, HIV specialist" was identified as an expert source (M = 9.74, SD = .26); "Esther Smith, widow of an AIDS victim" was identified as a moderately expert source (M = 5.62, SD = 1.22); and "Tim Alster, a high school freshman" was identified as a low expertise source (M = 1.72, SD = .73). Using the Scheffé test, all three means were found to be significantly different from one another at p < .001. The same sources were applied to the syphilis condition, and each source was prominently displayed on all Web pages. (Doctors and high school students have commonly been used as high and low credibility sources; O'Keefe, 1990.)
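The source pretest compares three groups of expertise ratings with a one-way ANOVA. A minimal sketch of the F statistic, using small hypothetical rating groups rather than the pretest data:

```python
from statistics import mean

def one_way_f(groups):
    """One-way ANOVA F: ratio of between-group to within-group mean squares."""
    grand = mean(x for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical 1-10 expertise ratings for high, moderate, and low sources
high = [9, 10, 9]
moderate = [5, 6, 5]
low = [2, 1, 2]
f = one_way_f([high, moderate, low])  # a very large F: the groups clearly differ
```

A very large F, as in the reported pretest (F(2,171) = 1,261.93), means the between-group differences dwarf the within-group variation; a follow-up test such as Scheffé's then locates which pairs of means differ.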
The dependent measure was perceived credibility of the content. Media credibility has been measured with several different indicators, most of which suggest credibility is a multidimensional construct (Gaziano & McGrath, 1986; Meyer, 1988). Moreover, Gaziano and McGrath (1986) suggest that the variables used to measure credibility influence perceptions; thus, what is being evaluated should drive the items used. This study measured perceived credibility of the message with three commonly identified items oriented toward the content of the information: accuracy, believability, and factualness (alpha = .89) (Gaziano & McGrath, 1986; Johnson & Kaye, 1998; Meyer, 1988). These indicators were measured with Likert-type items ranging from 1 (e.g., not at all believable) to 5 (e.g., very believable) (Appendix B).
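The reported reliability of the three-item index (alpha = .89) is Cronbach's alpha, which compares the sum of the item variances to the variance of the summed scale. A sketch with hypothetical respondent scores (not the study's data):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha: items[i] is the list of all respondents' scores on item i."""
    k = len(items)
    sum_item_var = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's summed scale score
    return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))

# Hypothetical 1-5 ratings from six respondents on the three credibility items
accuracy      = [4, 5, 3, 4, 2, 5]
believability = [4, 5, 3, 5, 2, 4]
factualness   = [3, 5, 3, 4, 2, 5]
alpha = cronbach_alpha([accuracy, believability, factualness])  # high: items move together
```

When the three items track each other across respondents, the total-score variance greatly exceeds the sum of the item variances, and alpha approaches 1, justifying summing the items into a single credibility index.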
Results

Of those who participated, 56 percent were male and 44 percent were female, and the mean age was 20 years. Seventy-three percent were Caucasian, 8 percent were African American, 13 percent were Asian, and 2 percent were Latino. The effects of source expertise and content knowledge on perceived credibility of the message were analyzed using a 2 x 3 factorial design.
Hypothesis 1 predicted a significant interaction between source expertise and knowledge of content; this interaction was not supported (F(2,124) = .624, p = .538) (see Table 1). However, Table 1 provides strong evidence that: (1) the source made a difference, in that the high expertise source was rated significantly more credible than the low expertise source, with the moderate source not differing from either; and (2) knowledge of content made a difference, in that message credibility was significantly higher for the topic about which subjects were knowledgeable than for the topic about which they were relatively ignorant. Thus, the two manipulated variables operated separately, as the literature suggested they would, but no significant interaction was obtained.
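This pattern of two main effects without an interaction can be read directly from the cell means: the content conditions differ, the source conditions differ, but the source effect is similar in both rows. A sketch using the unknown-topic means reported later in the text and hypothetical known-topic means (the known-topic cells are illustrative assumptions, not reported values):

```python
from statistics import mean

# Cell means on the 3-15 credibility index.
# Unknown-topic values are reported in the text; known-topic values are hypothetical.
cells = {
    ("known", "high"): 13.0,    ("known", "moderate"): 12.8,    ("known", "low"): 12.5,
    ("unknown", "high"): 12.10, ("unknown", "moderate"): 11.50, ("unknown", "low"): 10.59,
}

# Marginal means collapse over the other factor and reveal each main effect
source_means = {s: mean(v for (c, s2), v in cells.items() if s2 == s)
                for s in ("high", "moderate", "low")}
content_means = {c: mean(v for (c2, s2), v in cells.items() if c2 == c)
                 for c in ("known", "unknown")}
# source_means decline from high to low; content_means favor the known topic
```

An interaction would appear as non-parallel rows, e.g., a steep source effect for the unknown topic but a flat profile for the known topic; the F test on the interaction term asks whether that departure from parallelism exceeds chance.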
Table 1. Interaction of Perceived Message Credibility by Source Expertise and Content Type.

Testing for source differences for the two topics separately (Table 2) determined that source expertise made a significant difference for the unknown topic, F(2,60) = 3.370, p = .041, but not for the known topic, F(2,61) = .514, p = .601. This further clarifies the locus of the difference obtained in Table 1.
Table 2. Perceptions of Message Credibility of Unknown and Known content by Source Expertise.
*Using the Scheffé test, these means were found to be significantly different.
Discussion

While no significant interaction was found between knowledge of content and source expertise (Hypothesis 1), the findings did suggest that source expertise (and/or age and/or gender) and knowledge of content (and/or order effects) affect perceptions of message credibility. This suggests that when people evaluate online health information, the heuristic cues they attend to vary depending on the subject matter.
The fact that these findings resulted from an online study presents several interesting issues. First, knowing that people evaluating unfamiliar information use the source of the message to judge its credibility helps to alleviate concerns that users are not separating reliable from unreliable online sources. However, the concerns raised by Donald et al. (1998) regarding the ease of publishing information online become more salient: anyone can publish and thus disseminate online information under any name, reducing the reliability of the putative source as a usable heuristic cue.
This relationship presents a quandary. While this study suggests that the presumed source of a Web page is used to determine information credibility, the unregulated nature and ease of publishing on the Internet suggest this may be an unreliable heuristic (Agency for Health Care Policy and Research, 1999; Rieh & Belkin, 1998). In the study reported here, the Web sites created for each condition were attributed to fictitious authors, yet they were all published online and readily available to any user who came across them.
It should be noted that while the unknown information did differ in perceived credibility by source expertise, all of the information presented was perceived as more credible than not (i.e., a ceiling effect). Judgments of content attributed to the expert source (M = 12.10), the moderately expert source (M = 11.50), and the non-expert source (M = 10.59) were all above the midpoint of the credibility scale (9.00) (see Table 2). Given this, while this study furthers understanding of how people judge information online, there are perhaps other key variables that elevate perceptions of online message credibility. One variable often evaluated in credibility studies that could play a key role in perception of online content is dynamism, which has been found to influence perceptions of the message source when assessing credibility (Hamilton, 1998). When a message, or its presentation, is highly dynamic, perceptions of source credibility are elevated. While the sites presented in this study were consistent in style, it could be argued that a highly dynamic site would cause respondents to perceive the content as more credible. The format or layout of the site could also raise perceived credibility (Hamilton, 1998). These possible influences should be investigated by examining how dynamism and layout design affect perceived credibility of online information.
Moreover, the fact that all information was given some level of credibility, regardless of the putative author of the content, points to potential problems in managing online information. Meta-sites containing links to peer-evaluated and accredited sites would help to alleviate this problem, providing users with a central location to find various organizations and individuals without the worry of receiving false information.
Ideally, assessing two unknown topics and two known topics would help establish that the findings are robust to topic-specific effects; no main effect on perceived credibility between the two unknown topics, or between the two known topics, would be expected. Future research should address this limitation. Researchers should also extend these findings by testing attitude change, which would indicate whether online information is influential in changing attitudes and which variables affect this relationship. Furthermore, comparing findings across media would enable researchers to isolate the relative importance of each variable by medium. Finally, perceptions of information found online should be framed within the digital literacy literature. This would allow researchers to determine whether novice and experienced users identify, access, and use online information differently, perhaps affording researchers and policy makers an understanding of how best to educate users.
The effects of content knowledge and source expertise on perceived credibility are complex and indirect, and no single generalization can encompass their impact. However, the present study does represent a step toward understanding how people perceive online health information. The data suggest that people use both the source and the content to assess the credibility of online information. Moreover, the ceiling effect found in credibility perceptions suggests that further research is needed to understand how people process information found on the Internet. Internet use will continue to rise and the number of people obtaining health information online will also increase. Couple this with the lack of information structure currently found on the Internet, as well as findings from the current study, and the need to understand information seekers becomes clearer. While the Internet is undoubtedly a valuable source of information, without some form of structure or educational intervention, its potential to help inform could be greatly diminished.
References

Adoni, H., Cohen, A., & Mane, S. (1984). Social reality and television news: Perceptual dimensions of social conflicts in selected life areas. Journal of Broadcasting, 28, 33-49.
Agency for Health Care Policy and Research. (1999). Criteria for assessing the quality of health information on the Internet -- policy paper. Retrieved February, 2000, from the World Wide Web: http://www.ahcpr.gov/data/infoqual.htm
Andie, T. (1997). Why web warriors might worry. Columbia Journalism Review, 36(2), 35-39.
Austin, E. W., & Dong, Q. (1994). Source v. content effects on judgment of news believability. Journalism Quarterly, 71, 973-983.
Brady, D. (1996). Cyberdemocracy and perceptions of politics: An experimental analysis of political communication on the World Wide Web. Paper presented at the annual meeting of the Midwest Associates for Public Opinion Research, IL.
Chartprasert, D. (1993). How bureaucratic writing style affects source credibility. Journalism Quarterly, 70, 150-159.
Centers for Disease Control and Prevention. (2000). Retrieved November, 2000, from the World Wide Web: http://www.cdc.gov
Cozzens, M., & Contractor, N. (1987). The effects of conflicting information on media skepticism. Communication Research, 14, 437-451.
Culver, J. D., Gerr, R., & Frumkin, H. (1997). Medical information on the Internet: A study of an electronic bulletin board. Journal of General Internal Medicine, 12(8), 466-70.
Donald, A., Lindenberg, B., & Humphreys, L. (1998). Medicine and health on the Internet: The good, the bad, and the ugly. Journal of the American Medical Association, 280(15), 1303-1306.
Dudley, T. E., Falvo, D. R., Podell, R. N., & Renner, J. (1996). The informed patient poses a different challenge. Patient Care, 30(16), 128-138.
Eagly, A., & Chaiken, S. (1993). The psychology of attitudes. Orlando, FL: Harcourt Brace Jovanovich College Publishers.
Feldman, M. D. (1998). Chat room analysis: Virtual fictitious disorders. The Western Journal of Medicine, 168, 537-539.
Flanigan, A., & Metzger, M. (2000). Perceptions of Internet information credibility. Journalism & Mass Communication Quarterly, 77(3), 515-540.
Franke, G. (1996). Participatory political discussion on the Internet. Votes and Opinions, 2, 22-25.
Gaziano, C., & McGrath, K. (1986). Measuring the concept of credibility. Journalism Quarterly,63, 451-462.
Greenberg, B. S. (1966). Media use and believability: Some multiple correlates. Journalism Quarterly, 43(4), 667-670.
Hamilton, M. (1998). Message variables that mediate and moderate the effect of equivocal language on source credibility. Journal of Language and Social Psychology, 17, 109-143.
Hovland, C., & Weiss, W. (1951). The influence of source credibility on communication effectiveness. Public Opinion Quarterly, 15, 635-650.
Johnson, T. J., & Kaye, B. K. (1998). Cruising is believing?: Comparing Internet and traditional sources on media credibility measures. Journalism & Mass Communication Quarterly, 75(2), 325-341.
Meyer, P. (1988). Defining and measuring credibility of newspapers: Developing an index. Journalism Quarterly, 65, 567-574.
Newhagen, J., & Nass, C. (1988). Differential criteria for evaluating credibility of newspapers and TV news. Journalism Quarterly, 66, 277-284.
O'Keefe, D. (1990). Persuasion. Newbury Park, CA: Sage Publications.
Petty, R. E., & Cacioppo, J. (1988). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19, 123-203.
Petty, R. E., & Cacioppo, J. (1979). Issue involvement can increase or decrease persuasion by enhancing message-relevant cognitive responses. Journal of Personality and Social Psychology, 37, 1915-1926.
Pew Research Center. (2001). The changing online population: It's more and more like the general population. Retrieved May, 2001, from the World Wide Web: http://www.pewinternet.org
Pew Research Center. (2000). The online health care revolution: How the Web helps Americans take better care of themselves. Retrieved June 5, 2001, from the World Wide Web: http://www.pewinternet.org
Rieh, S. Y., & Belkin, N. J. (1998). Understanding judgment of information quality and cognitive authority in the WWW. Proceedings of the Annual Meeting of the American Society for Information Science, 35, 279-289.
Rubin, R., Palmgreen, P., & Sypher, H. (1994). Communication research measures: A source book. New York: The Guilford Press.
Slater, M. D., & Rouner, D. (1996). How message evaluation and source attributes may influence credibility assessment and belief change. Journalism & Mass Communication Quarterly, 73(4), 974-991.
Wanta, W., & Hu, Y. (1994). The effects of credibility, reliance, and exposure on media agenda setting: A path analysis model. Journalism Quarterly, 71(1), 90-98.
West, M. D. (1994). Validating a scale for the measurement of credibility: A covariance structure modeling approach. Journalism Quarterly, 71(1), 159-168.
Wright, J. B. (1998). Quality connection on the Internet. Healthcare Executive, 13, 44-45.
About the Author
Matthew S. Eastin will be an Assistant Professor in the School of Journalism & Communication at the Ohio State University beginning fall 2001. He has a Ph.D. in Mass Media from Michigan State University. His research focuses on the uses and social effects of information technologies.
©Copyright 2001 Journal of Computer-Mediated Communication