A new approach to the solicitation and measurement of relevance judgments is presented, which attempts to resolve some of the difficulties inherent in the nature of relevance and human judgment, and which further seeks to examine how users' judgments of document representations change as more information about documents is revealed to them. Subjects (university faculty and doctoral students) viewed three incremental versions of documents and recorded ratio-level relevance judgments for each version. These judgments were analyzed by a variety of methods, including graphical inspection and examination of the number and degree of changes in judgments as new information was seen. A post-questionnaire was also administered to obtain subjects' perceptions of the process and of the individual fields of information presented. A consistent pattern of perception and importance of these fields emerged: abstracts are by far the most important field and have the greatest impact, followed by titles, bibliographic information, and indexing.
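The analysis described above, counting the number and degree of changes in a subject's judgments as successive versions of a document are revealed, can be sketched as follows. The data and field layout are illustrative assumptions, not taken from the study itself.

```python
# Hypothetical sketch: tally how ratio-level relevance judgments change as a
# subject sees incremental versions of a document (e.g., title, then added
# bibliographic information, then the abstract). All values are made up.

def judgment_changes(judgments):
    """Given one subject's judgments for successive versions of a document,
    return (number_of_changes, total_absolute_change)."""
    deltas = [b - a for a, b in zip(judgments, judgments[1:])]
    num_changes = sum(1 for d in deltas if d != 0)
    total_magnitude = sum(abs(d) for d in deltas)
    return num_changes, total_magnitude

# One subject, one document: a judgment after each of three versions.
ratings = [30.0, 30.0, 75.0]      # illustrative ratio-scale scores
print(judgment_changes(ratings))  # -> (1, 45.0)
```

Aggregating these per-document tallies across subjects would yield the kind of change counts the study examines alongside graphical inspection.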
This article discusses the history and emergence of nonlibrary commercial and noncommercial information services on the World Wide Web. These services are referred to as "expert services," while the term "digital reference" is reserved for library-related on-line information services. Following suggestions in the library and information literature regarding quality standards for digital reference, the researchers make clear the importance of developing a practicable methodology for the critical examination of expert services, and for consideration of their relevance to library and other professional information services. A methodology for research in this area and initial data are described. Two hundred forty questions were asked of 20 expert service sites. Findings include performance measures such as response rate, response time, and verifiable answers. Sites responded to 70% of all questions, and gave verifiable answers to 69% of factual questions. Performance was generally highest for factual-type questions. Because expert services are likely to continue to fill a niche for factual questions in the digital reference environment, further research and the development of digital reference services may appropriately turn to source questions. This is contrary to current practice and to the emergence of digital reference services reported in the related literature thus far.
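The performance measures reported above (response rate and the rate of verifiable answers to factual questions) can be sketched over a question log as follows. The record structure and sample values are illustrative assumptions, not the study's actual data.

```python
# Hypothetical sketch of two of the performance measures mentioned above:
# overall response rate, and the verifiable-answer rate among answered
# factual questions. Field names and records are illustrative.

def performance(log):
    """log: list of dicts with keys 'answered' (bool), 'type'
    ('factual' or 'source'), and 'verifiable' (bool)."""
    response_rate = sum(q["answered"] for q in log) / len(log)
    factual = [q for q in log if q["type"] == "factual" and q["answered"]]
    verifiable_rate = sum(q["verifiable"] for q in factual) / len(factual)
    return response_rate, verifiable_rate

sample = [
    {"answered": True,  "type": "factual", "verifiable": True},
    {"answered": True,  "type": "factual", "verifiable": False},
    {"answered": False, "type": "source",  "verifiable": False},
    {"answered": True,  "type": "source",  "verifiable": False},
]
print(performance(sample))  # -> (0.75, 0.5)
```

Response time could be added analogously by recording timestamps per question and averaging the elapsed intervals for answered questions.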
The emerging user‐centric model of relevance proposes that the only valid measure of the relevance of a document to a user's information need is the one made by that user. If we accept this proposition, it raises an interesting question: how well do other people, especially those involved in information work who make such judgments as part of their training and work, perform as judges of documents for information needs they did not originate? This question was empirically tested using three groups of subjects: incoming students to a school of information/library science, continuing students in that school, and academic librarians (holders of the MLS degree). These subjects made judgments of either "relevance," "utility," or "topicality" of two document sets against the original users' stated information needs. These judgments were then compared to those of the users to see what patterns emerged, and to see what could be learned not only about secondary judgments in general, but also about the ways in which information and library professionals make such judgments. The results are interesting in their own right (subjects' judgments compared reasonably well to those of users, looked more like users' judgments after more training and experience in library work, and fell into interesting patterns), but they also lead to some provocative questions about the nature of judgment and evaluation of information items. © 1994 John Wiley & Sons, Inc.
This column continues a series on topics in research methodology, statistics and data analysis techniques for the library and information sciences. It discusses surveys, how to write good survey questions, questionnaire design and construction, including the order of questions, instructions, design and layout, and gives suggested readings and references.