In the Web, making judgments of information quality and authority is a difficult task for most users because, overall, there is no quality control mechanism. This study examines the problem of the judgment of information quality and cognitive authority by observing people's searching behavior in the Web. Its purpose is to understand the various factors that influence people's judgments of quality and authority in the Web, and the effects of those judgments on selection behaviors. Fifteen scholars from diverse disciplines participated, and data were collected combining verbal protocols during the searches, search logs, and post-search interviews. It was found that the subjects made two distinct kinds of judgment: predictive judgment and evaluative judgment. The factors influencing each judgment of quality and authority were identified in terms of characteristics of information objects, characteristics of sources, knowledge, situation, ranking in search output, and general assumptions. Implications for Web design that will effectively support people's judgments of quality and authority are also discussed.
Introduction

One of the advantages of searching in the Web is its grant of access to a great amount and a wide variety of information. As a result, however, people need some way to reduce the large amount of information in order to select the information that they want. In traditional information retrieval, this problem has long been discussed within the context of "topical relevance"; that is, in terms of whether the topic of the query matches the topic of a document. However, a substantial number of empirical studies (e.g., Barry, 1994; Cool, Belkin, Frieder, & Kantor, 1993; Park, 1993; Schamber, 1991; Spink & Greisdorf, 2001; Wang & Soergel, 1999) have revealed that people use much more diverse criteria than mere topicality to make relevance judgments in the traditional information retrieval environment.
This study takes these findings a step further by focusing on two factors that appear consistently across the previous studies: quality and authority. These two factors were chosen because it is believed that they may be more important relevance criteria than any other criteria identified in the previous studies, especially in a large, uncontrolled environment such as the Web.

The concepts of quality and authority are not new. On the one hand, a number of studies of relevance criteria, particularly in the 1990s, identified various aspects of both concepts, including "goodness" (Cool et al.), "usefulness" (Cool et al.), "accuracy/validity" (Barry), "recency" (Barry; Wang & Soergel), "perceived quality" (Park), "actual quality" (Wang & Soergel), "expected quality" (Wang & Soergel), "authority" (Cool et al.; Wang & Soergel), and "reliability" (Schamber). On the other hand, in recent years, the notions of quality and authority have been discussed with respect to evaluation criteria for Web pages by examining different approaches and implementations. Librarians and researchers in library and information science (e....
Introduction

This chapter reviews the theoretical and empirical literature on the concept of credibility and its areas of application relevant to information science and technology, encompassing several disciplinary approaches. An information seeker's environment (the Internet, television, newspapers, schools, libraries, bookstores, and social networks) abounds with information resources that need to be evaluated for both their usefulness and their likely level of accuracy. As people gain access to a wider variety of information resources, they face greater uncertainty regarding who and what can be believed and, indeed, who or what is responsible for the information they encounter. Moreover, they have to develop new skills and strategies for determining how to assess the credibility of an information source. Historically, the credibility of information has been maintained largely by professional knowledge workers such as editors, reviewers, publishers, news reporters, and librarians. Today, quality control mechanisms are evolving in such a way that a vast amount of information accessed through a wide variety of systems and resources is out of date, incomplete, poorly organized, or simply inaccurate (Janes & Rosenfeld, 1996).

Credibility has been examined across a number of fields, ranging from communication, information science, psychology, marketing, and the management sciences to interdisciplinary efforts in human-computer interaction (HCI). Each field has examined the construct and its practical significance using fundamentally different approaches, goals, and presuppositions, resulting in conflicting views of credibility and its effects. The notion of credibility has been discussed at least since Aristotle's examination of ethos and his observations of speakers' relative abilities to persuade listeners. Systematic disciplinary approaches to investigating credibility developed only in the last century, beginning within the field of communication.