In a recent article published in this journal, Lombard, Snyder-Duch, and Bracken (2002) surveyed 200 content analyses for their reporting of reliability tests; compared the virtues and drawbacks of five popular reliability measures; and proposed guidelines and standards for their use. Their discussion revealed that numerous misconceptions circulate in the content analysis literature regarding how these measures behave and can aid or deceive content analysts in their effort to ensure the reliability of their data. This paper proposes three conditions for statistical measures to serve as indices of the reliability of data and examines the mathematical structure and the behavior of the five coefficients discussed by the authors, plus two others. It compares common beliefs about these coefficients with what they actually do and concludes with alternative recommendations for testing reliability in content analysis and similar data-making efforts.
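The coefficient at the center of this discussion, Krippendorff's alpha, can be made concrete with a short sketch. The following is a minimal, illustrative implementation for nominal data with complete codings, not the authors' own computation; the function name and data layout are assumptions introduced for the example.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    units: a list of units, each a list of the values that the coders
    assigned to that unit. Assumes at least two distinct values occur
    overall (otherwise expected disagreement is zero and alpha is undefined).
    """
    o = Counter()    # coincidence counts o[(c, k)] for ordered value pairs
    n_c = Counter()  # marginal totals per value
    for values in units:
        m = len(values)
        if m < 2:
            continue  # units coded by fewer than two coders are not pairable
        for c, k in permutations(values, 2):
            o[(c, k)] += 1 / (m - 1)
        n_c.update(values)
    n = sum(n_c.values())
    # Observed and expected disagreement (nominal metric: disagree iff c != k).
    D_o = sum(cnt for (c, k), cnt in o.items() if c != k) / n
    D_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1 - D_o / D_e

print(krippendorff_alpha_nominal([[0, 0], [1, 1], [0, 1]]))  # ≈ 0.444
```

The coincidence-matrix formulation handles any number of coders per unit, which is one reason alpha generalizes where two-coder coefficients do not.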
The MIT Press is collaborating with JSTOR to digitize, preserve and extend access to Design Issues.

Introduction

1) Part of this work was supported by the design firm RichardsonSmith, Worthington, Ohio, and Ohio State University, Columbus, while on sabbatical leave in 1986-87 from the University of Pennsylvania, Philadelphia.

The etymology of design goes back to the Latin de + signare and means making something, distinguishing it by a sign, giving it significance, designating its relation to other things, owners, users, or gods. Based on this original meaning, one could say: design is making sense (of things).

The phrase is conveniently ambiguous. It could be read as "design is a sense-creating activity" that can claim perception, experience, and, perhaps, esthetics as its fundamental concern, and this idea is quite intentional. Or it can be read as meaning that "the products of design are to be understandable or meaningful to someone," and this interpretation is even more desirable. The phrase "of things" is in parentheses to cast doubt on a third interpretation, that "design is concerned with the subjective meanings of 'objectively existing' objects."
The parentheses suggest that we cannot talk about things that make no sense at all, that the recognition of something as a thing is already a sense-derived distinction, and that the division of the world into a subjective and an objective realm is therefore quite untenable.

However, making sense always entails a bit of a paradox between the aim of making something new and different from what was there before, and the desire to have it make sense, to be recognizable and understandable. The former calls for innovation, while the latter calls for the reproduction of historical continuities. In the past, sense was provided by alchemy, mythology, and theology. Now we speak less globally of a symbolic ordering that is constitutive of cognition, culture, and reality. Somehow, the word design has not remained in this creative state of paradox, but has shifted to one side. Its current meaning amplifies the aspect of making or, more specifically, of applying a technical-functional rationality to the material world at the expense of the sense that was to be achieved thereby. Perhaps, the pendulum has swung too far. Perhaps, technology has moved too fast for culture to keep up with it. Whatever the explanation, the current concern with
Coefficients that assess the reliability of data-making processes, such as coding text, transcribing interviews, or categorizing observations into analyzable terms, are mostly conceptualized in terms of the agreement that a set of coders, observers, judges, or measuring instruments exhibits. When variation is low, reliability coefficients reveal their dependency on an often neglected phenomenon: the amount of information that reliability data provide about the reliability of the coding process or the data it generates. This paper explores the concept of reliability, simple agreement, four conceptions of chance to correct that agreement, and sources of information deficiency, and develops two measures of information about reliability, akin to the power of a statistical test, intended as companions to traditional reliability coefficients, especially Krippendorff's (2004, pp. 221-250; Hayes & Krippendorff, 2007) alpha.
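The dependency on variation described above can be illustrated with a small sketch. The two-coder data below are hypothetical, not drawn from the paper: when one category dominates, raw percent agreement looks reassuring while chance-corrected agreement collapses.

```python
from collections import Counter
from itertools import permutations

# Two coders, 10 units: both say "A" on nine units; they split on the tenth.
coder1 = ["A"] * 10
coder2 = ["A"] * 9 + ["B"]
units = list(zip(coder1, coder2))

# Raw percent agreement is high:
percent_agreement = sum(a == b for a, b in units) / len(units)  # 0.9

# Krippendorff's alpha (nominal), computed via the coincidence matrix:
o = Counter()    # coincidence counts for ordered value pairs
n_c = Counter()  # marginal totals per value
for values in units:
    for c, k in permutations(values, 2):
        o[(c, k)] += 1 / (len(values) - 1)
    n_c.update(values)
n = sum(n_c.values())
D_o = sum(v for (c, k), v in o.items() if c != k) / n
D_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
alpha = 1 - D_o / D_e  # 0.0: agreement is no better than chance

print(percent_agreement, alpha)
```

With almost no variation in the data, the single disagreement is exactly what chance would produce, so alpha drops to zero despite 90% agreement. This is the information-deficiency problem in miniature: such reliability data say very little about the coding process.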
This paper reports a new tool for assessing the reliability of text interpretations heretofore unavailable to qualitative research. It responds to a combination of two challenges: the problem of assessing the reliability of multiple interpretations, a solution to which was anticipated earlier (Krippendorff, 1992) but not fully developed, and the problem of identifying units of analysis within a continuum of text and similar representations (Krippendorff, 1995). The paper sketches the family of α-coefficients, which this paper extends, and then describes the new arrival. A computational example is included in the Appendix.