One cannot manage information quality (IQ) without first measuring it meaningfully and establishing a causal connection between the sources of IQ change, the types of IQ problems, the types of activities affected, and their implications. In this article we propose a general IQ assessment framework. In contrast to context-specific IQ assessment models, which usually focus on a few variables determined by local needs, our framework consists of comprehensive typologies of IQ problems and related activities, and a taxonomy of IQ dimensions, organized systematically on the basis of sound theories and practices. The framework can be used as a knowledge resource and as a guide for developing IQ measurement models for many different settings. The framework was validated and refined by developing specific IQ measurement models for two large-scale collections representing two broad classes of information objects: Simple Dublin Core records and online encyclopedia articles.
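To make the idea of a context-specific measurement model concrete, the sketch below scores the "completeness" dimension of a Simple Dublin Core record as the fraction of the standard's 15 elements that carry a value. The element set is fixed by the Dublin Core Metadata Element Set; the scoring function itself is a hypothetical illustration, not the measurement model the article develops.

```python
# Illustrative completeness metric for a Simple Dublin Core record.
# The 15 element names come from the DCMES standard; the scoring
# rule (fraction of non-empty elements) is an assumed example.

DC_ELEMENTS = {
    "title", "creator", "subject", "description", "publisher",
    "contributor", "date", "type", "format", "identifier",
    "source", "language", "relation", "coverage", "rights",
}

def completeness(record: dict) -> float:
    """Fraction of Dublin Core elements with a non-empty value."""
    filled = sum(
        1 for element in DC_ELEMENTS
        if record.get(element) not in (None, "", [])
    )
    return filled / len(DC_ELEMENTS)

record = {"title": "Example", "creator": "Doe, J.", "date": "2008"}
print(round(completeness(record), 2))  # 3 of 15 elements filled -> 0.2
```

A full model would combine several such dimension scores (accuracy, currency, and so on), weighted according to the needs of the particular collection and its users.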
The classic problem within the information quality (IQ) research and practice community has been the problem of defining IQ. It has been found repeatedly that IQ is context sensitive and cannot be described, measured, and assured with a single model. There is a need for empirical case studies of IQ work in different systems to develop systematic knowledge that can then inform and guide the construction of context-specific IQ models. This article analyzes the organization of IQ assurance work in a large-scale, open, collaborative encyclopedia, Wikipedia. What is special about Wikipedia as a resource is that the quality discussions and processes are strongly connected to the data itself and are accessible to the general public. This openness makes it particularly easy for researchers to study a kind of collaborative work that is highly distributed and that focuses substantially not just on error detection but also on error correction. We believe that the study of those evolving debates and processes, and of the IQ assurance model as a whole, has useful implications for the improvement of quality in other, more conventional databases.

Introduction

Large-scale, continuously evolving, open collaborative content creation systems such as Wikipedia have become increasingly popular. At the same time, in an attempt to reduce costs, many traditional publishers and information-intensive organizations have opened their content creation processes to the general public by adding wikis and blogs to their regular channels of information creation and distribution. We are witnessing the establishment of a dynamic grid of large-scale, open information systems fueled by active participation from the general public in content creation and quality assurance activities.
Although it provides valuable information services to users, the new information grid also poses new and significant challenges in many areas of information organization, including information quality (IQ). These new systems have complex, dynamic workflows that need to react successfully to changes in both their communities and the environment, including identifying the most effective and efficient IQ assurance interventions for different circumstances. Furthermore, the concept of IQ itself is context sensitive (Wang & Strong, 1996). The same information can be judged as being of different quality depending on the context of a particular use and the individual or community value structures for quality. Hence, no one fixed model of IQ assurance can be applied to all these systems. There is a need for empirical studies of existing IQ assurance models, with the goal of developing a knowledge base of conceptual models of IQ, taxonomies of quality problems and activities, metrics, trade-offs, strategies, policies, and reference sources. The knowledge base can then be reused for constructing context-specific IQ assurance models faster, cheaper, and with less effort. The English Wikipedia, with its large-scale, complex, and collaborative information...
Open source communities have successfully developed a great deal of software, although most computer users use only proprietary applications. The usability of open source software is often cited as one reason for this limited adoption. In this paper we review the existing evidence on the usability of open source software and discuss how the characteristics of open source development influence usability. We describe how existing human-computer interaction techniques can be used to leverage distributed, networked communities of developers and users to address issues of usability.
We explore how some open source projects address issues of usability. We describe the mechanisms, techniques, and technologies used by open source communities to design and refine the interfaces to their programs. In particular, we consider how these developers cope with their distributed community, lack of domain expertise, limited resources, and separation from their users. We also discuss how bug reporting and discussion systems can be improved to better support bug reporters and open source developers. Copyright © 2006 John Wiley & Sons, Ltd.
This paper describes ethnomethodologically informed ethnography (EM) as a methodology for information science research, illustrating the approach with the results of a study in a university library. We elucidate the major differences between the practical orientation of EM and the theoretical orientation of other ethnographic approaches in information science research. We address ways in which EM may be used to inform systems design and consider the issues that arise in coordinating the results of this research with the needs of information systems designers. We outline our approach to the "ethnographically informed" development of information systems, addressing some of the major problems of interdisciplinary work between systems designers and EM researchers.
The paper reviews work on informal technical help giving between colleagues. It concentrates on the process by which colleagues help each other use a computer application to achieve a specific work task, contrasting this with the focus of much prior work on surrounding issues such as the choice of whom to ask, information re-use, and whether the larger work context encourages such learning. Through an analysis of the literature and a study of office activity, some strengths and weaknesses of informal help giving are identified. The difficulties of talking about the process of performing graphical user interface actions are explored. Various design implications for functionalities to improve the efficiency of informal help giving are discussed. A consideration of informal learning can help in designing more effective, learnable, robust, and acceptable CSCW systems. It also provides a different perspective on interface design as an exploration of features to support human-human interaction, using the computer screen as a shared resource to support this. In this way CSCW research may contribute to HCI research, since during such help giving, all computer systems are at least temporarily collaborative applications.