Many of the concepts and procedures of product quality control can be applied to the problem of producing better quality information outputs. From this perspective, information outputs can be viewed as information products, and many information systems can be modeled as information manufacturing systems. The use of information products is becoming increasingly prevalent both within and across organizational boundaries. This paper presents a set of ideas, concepts, models, and procedures appropriate to information manufacturing systems that can be used to determine the quality of information products delivered, or transferred, to information customers. These systems produce information products on a regular or as-requested basis. The model systematically tracks relevant attributes of the information product such as timeliness, accuracy, and cost. This is facilitated through an information manufacturing analysis matrix that relates data units and various system components. Measures of these attributes can then be used to analyze potential improvements to the information manufacturing system under consideration. An illustrative example is given to demonstrate the various features of the information manufacturing system and show how it can be used to analyze and improve the system. Following that is an actual application, which, although not as involved as the illustrative example, does demonstrate the applicability of the model and its associated concepts and procedures. Keywords: Data Quality, Timeliness of Information, Information Product, Information Systems, Critical Path
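The analysis-matrix idea described above can be illustrated with a small sketch. Rows represent data units, columns represent the system components each unit passes through, and each cell records the attributes the model tracks (delay, error rate, cost), which are then rolled up into product-level measures. All names, numbers, and roll-up rules below are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    """One cell of the analysis matrix: a data unit at one component."""
    delay_hours: float   # time this component holds the data unit
    error_rate: float    # fraction of values corrupted at this step
    cost: float          # processing cost incurred at this step

# Hypothetical matrix: rows are data units, the list is the ordered
# sequence of system components the unit flows through.
matrix = {
    "customer_record": [Cell(2.0, 0.01, 5.0), Cell(1.0, 0.02, 3.0)],
    "order_record":    [Cell(0.5, 0.00, 1.0), Cell(4.0, 0.05, 8.0)],
}

def product_attributes(unit):
    """Roll up timeliness, accuracy, and cost along the unit's path.

    Assumes delays and costs add, and step errors compound
    multiplicatively -- simplifying assumptions for illustration.
    """
    cells = matrix[unit]
    timeliness = sum(c.delay_hours for c in cells)
    accuracy = 1.0
    for c in cells:
        accuracy *= (1.0 - c.error_rate)
    cost = sum(c.cost for c in cells)
    return timeliness, accuracy, cost

t, a, c = product_attributes("order_record")
```

With measures like these per output, candidate improvements (e.g., a faster but costlier component) can be compared quantitatively before changing the system.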
This paper presents a general model to assess the impact of data and process quality upon the outputs of multi-user information-decision systems. The data flow/data processing quality control model is designed to address several dimensions of data quality at the collection, input, processing and output stages. Starting from a data flow diagram of the type used in structured analysis, the model yields a representation of possible errors in multiple intermediate and final outputs in terms of input and process error functions. The model generates expressions for the possible magnitudes of errors in selected outputs. This is accomplished using a recursive-type algorithm which traces systematically the propagation and alteration of various errors. These error expressions can be used to analyze the impact that alternative quality control procedures would have on the selected outputs. The paper concludes with a discussion of the tractability of the model for various types of information systems as well as an application to a representative scenario. Keywords: information systems: management; reliability: quality control; computers: systems design
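The recursive error-tracing idea can be sketched as a walk over a data flow graph: each node is either a source with an input error bound or a process that combines its inputs and may add error of its own. The node names, the additive error model, and the worst-input rule below are illustrative assumptions standing in for the paper's error functions.

```python
# Hypothetical data flow graph: sources have no inputs; processes
# inherit error from their inputs and may contribute their own.
graph = {
    "sensor_a": {"inputs": [], "own_error": 0.02},
    "sensor_b": {"inputs": [], "own_error": 0.01},
    "merge":    {"inputs": ["sensor_a", "sensor_b"], "own_error": 0.005},
    "report":   {"inputs": ["merge"], "own_error": 0.0},
}

def error_bound(node, memo=None):
    """Recursively trace error propagation from sources to `node`.

    Simplifying assumptions: a process passes through its worst
    input error, and its own error adds to that.
    """
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    spec = graph[node]
    inherited = max(
        (error_bound(i, memo) for i in spec["inputs"]), default=0.0
    )
    memo[node] = inherited + spec["own_error"]
    return memo[node]

e = error_bound("report")
```

Running the same trace with a modified graph (say, a validation step that lowers a source's error) shows how a candidate quality control procedure changes the error bound on each selected output.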
This paper describes an experiment that explores the consequences of providing information regarding the quality of data used in decision making. The subjects in the study were given three types of information about the data's quality: none, two-point ordinal, and interval scale. This information was made available to the subjects along with the actual data. Two decision strategies were explored: conjunctive and weighted linear additive. Two decision environments were used: a simple environment and a relatively complex environment. Various combinations of these factors were employed to explore several issues, including complacency, consensus, and consistency. The paper provides preliminary insights into which type of data-quality information is most effective and the circumstances in which data-quality information is most effective. Such knowledge would be of value to those responsible for designing databases that support decision makers. Overall, we find that in a situation where subjects are confronted with clearly differentiated alternatives, the inclusion of data-quality information affected the selection of the preferred alternative while maintaining group consensus.
It is well known, of course, that the assessment of this month's economic activity will improve with the passage of time. The same situation exists for many of the inputs to managerial and strategic decision processes. Information regarding some situation or activity at a fixed point in time becomes better with the passage of time. However, as a consequence of the dynamic nature of many environments, the information also becomes less relevant over time. We call this balance between using current but inaccurate information and accurate but outdated information the accuracy-timeliness tradeoff. Through analysis of a generic family of environments, procedures are suggested for reducing the negative consequences of this tradeoff. In many of these situations, rather general knowledge concerning the relative weights and shapes of the underlying functions is sufficient to determine optimizing strategies.
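The tradeoff can be made concrete with a toy model: accuracy of the assessment rises the longer we wait, while relevance of the information decays, and the best delay maximizes their product. The exponential functional forms and parameter values below are assumptions chosen purely for illustration; the abstract notes that only general knowledge of the weights and shapes of such functions is needed.

```python
import math

def accuracy(t, gain=0.8):
    """Assessment accuracy improves the longer we wait (assumed form)."""
    return 1.0 - math.exp(-gain * t)

def relevance(t, decay=0.3):
    """Information about a fixed point in time grows stale (assumed form)."""
    return math.exp(-decay * t)

def value(t):
    """Decision value of acting after delay t: accurate AND relevant."""
    return accuracy(t) * relevance(t)

# Simple grid search over delays 0.01..20.0 for the balancing point.
best_t = max((i * 0.01 for i in range(1, 2001)), key=value)
```

Acting too early yields relevant but inaccurate information; waiting too long yields accurate but outdated information. The interior maximum is the optimizing strategy the tradeoff analysis seeks.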