Acordo inter-juízes: O caso do coeficiente kappa
[Interrater agreement: The case of the kappa coefficient]

Ricardo Fonseca, Pedro Silva, Rita Silva
Instituto Superior de Psicologia Aplicada, Portugal

Laboratório de Psicologia, 5(1): 81-90 (2007)

Abstract

Whenever one needs to classify a set of data into a given number of categories, several types of bias can occur. To minimize them, it is common to resort to more than one judge to categorize the same data, afterwards analyzing their degree of agreement and, consequently, the reliability of the classification. Among the several interrater agreement indexes mentioned in the literature, the kappa coefficient (Cohen, 1960) is reported as the one most frequently used when the variables under study are nominal. Finally, some criticisms of this interrater agreement coefficient are briefly discussed.

Keywords: Interrater agreement, Kappa coefficient, Weighted kappa (Cohen, 1968).

Correspondence concerning this article should be addressed to: Ricardo Fonseca, Instituto Superior de Psicologia Aplicada, Rua Jardim do Tabaco,
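As the abstract notes, the kappa coefficient measures agreement between two judges over nominal categories, correcting the observed agreement for the agreement expected by chance. A minimal sketch of that computation in Python (the function name and example labels are illustrative, not taken from the article):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's (1960) kappa for two raters classifying the same items
    into nominal categories: (p_o - p_e) / (1 - p_e)."""
    assert len(rater1) == len(rater2) and rater1, "equal-length, non-empty ratings"
    n = len(rater1)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: sum over categories of the product of marginals
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

# Example: two judges agree on 3 of 4 items
kappa = cohen_kappa(["a", "a", "b", "b"], ["a", "a", "b", "a"])
```

With these example ratings, p_o = 0.75 and p_e = 0.5, so kappa = 0.5; perfect agreement yields kappa = 1, and agreement at chance level yields 0. (The division fails when p_e = 1, i.e. when both raters use a single category for every item.)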