1999
DOI: 10.1093/bjps/50.2.283

How to Weight Scientists' Probabilities Is Not a Big Problem: Comment on Barnes

Abstract: Assuming it rational to treat other persons' probabilities as epistemically significant, how shall their judgements be weighted (Barnes [1998])? Several plausible methods exist, but theorems in classical psychometrics greatly reduce the importance of the problem. If scientists' judgements tend to be positively correlated, the difference between two randomly weighted composites shrinks as the number of judges rises. Since, for reasons such as representative coverage, minimizing bias, and avoiding elitism, we wo…
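To make the composite-convergence claim concrete, here is a minimal simulation sketch (the function name, the pairwise correlation of .30, and the panel sizes are illustrative assumptions, not values from the paper): each judge's probability judgment is modeled as a shared component plus independent noise so that any two judges correlate at roughly .30, two composites are formed from the same judgments with independently drawn random nonnegative weights, and their agreement is tracked as the panel grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_weight_composite_agreement(n_judges, n_items=1000, rho=0.30):
    """Correlation between two randomly weighted composites of the same
    positively intercorrelated judgments (illustrative sketch only)."""
    # Each judge's judgment of an item = shared signal + idiosyncratic noise,
    # scaled so that any pair of judges correlates at about rho.
    signal = rng.standard_normal(n_items)
    noise = rng.standard_normal((n_items, n_judges))
    judgments = np.sqrt(rho) * signal[:, None] + np.sqrt(1.0 - rho) * noise
    # Two independently drawn sets of random nonnegative weights.
    w1 = rng.uniform(size=n_judges)
    w2 = rng.uniform(size=n_judges)
    return np.corrcoef(judgments @ w1, judgments @ w2)[0, 1]

for k in (2, 5, 10, 25, 50):
    print(f"{k:>2} judges: r = {random_weight_composite_agreement(k):.3f}")
```

Under these assumptions the correlation between the two differently weighted composites climbs toward 1 as the number of judges rises, which is the sense in which the choice of weights is said to matter less and less.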

Cited by 8 publications (5 citation statements)
References 38 publications
“…Concerns about interrater reliability may naturally arise when considering this manner of coding. However, we note that such content-coding approaches have proven relatively reliable in previous projects (e.g., Amendola & Wixted, 2015; Gould et al, 2014; Horry et al, 2014); further, even relatively poor pairwise interrater reliability can be overcome by modestly increasing the pool of raters (viz., Spearman–Brown prophecy formula; see Meehl, 1999b). Encouragingly, empirical research suggests that aggregating judgments of forensic evidence (even from novices) may be more accurate than the individual judgments of expert evaluators (Tangen et al, 2020).…”
Section: Sketch of a Taxometric Program to Estimate Real-World Guilty…
Citation type: mentioning, confidence: 99%
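The Spearman–Brown prophecy formula mentioned in the excerpt projects the reliability of a pooled judgment as r_kk = k·r̄ / (1 + (k − 1)·r̄), where r̄ is the average pairwise interrater reliability and k is the number of raters. A small sketch, with a pairwise reliability of .30 assumed only for illustration:

```python
def spearman_brown(r_pairwise: float, k: int) -> float:
    """Projected reliability of a composite of k raters, given the average
    pairwise interrater reliability (Spearman-Brown prophecy formula)."""
    return k * r_pairwise / (1 + (k - 1) * r_pairwise)

# Illustrative values only: a modest pairwise reliability of .30 rises to a
# composite reliability of about .81 once ten raters are pooled.
print(round(spearman_brown(0.30, 10), 2))  # 0.81
```

This is the arithmetic behind the excerpt's point that even relatively poor pairwise interrater reliability can be offset by modestly enlarging the pool of raters.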
“…However, such problems are often expected in a new area, essential methodological challenges seem surmountable, and, at least at first, there are good reasons to assume that even relatively unrefined or crude approaches will produce gains (for a fuller discussion of some of these issues, see Faust, 1984; Faust & Meehl, 2002; Meehl, 1992, 1999). For example, although one might think that the problems inherent in gathering some of the needed data bases are…” [Footnote 2 in the citing paper: “The boundaries between ‘regular’ science and meta-science can be fuzzy or overlapping, and the second example I will discuss of meta-scientific study might seem to be more regular science than meta-science.”]
Section: Two Examples of Meta-science Studies
Citation type: mentioning, confidence: 99%
“…Absent various meta-analyses across diverse literatures, there would be no informed, evidence-based way to establish proportionality in the importance of the risk factors listed in Table 3. Everyone would be prey to scientific salespersons, zealots, and pressure groups of various persuasions and motivations (see Meehl, 1999). R. W. Heinrichs (2001) has shown a way out of the wilderness by conducting meta-analyses of 54 English-language literatures reporting the research with many of the risk factors alleged to be important in the etiology of, or correlated to, schizophrenia.…”
Section: Conceptual Weights for Risk Factors in Liability to Schizoph…
Citation type: mentioning, confidence: 99%