2001
DOI: 10.1353/hrq.2001.0041

How are These Pictures Different? A Quantitative Comparison of the US State Department and Amnesty International Human Rights Reports, 1976-1995


Cited by 138 publications (116 citation statements)
References 30 publications
“…16. There is no evidence of systematic bias in the reports for the period we examine (Poe et al 2001). 17.…”
Section: Notes (mentioning)
confidence: 86%
“…Two such measures, the Political Terror Scale (PTS) (Gibney and Dalton 1996) and the Cingranelli-Richards Index (CIRI) (1999b), score countries' human rights practices according to the content of annual State Department (Innes 1992) and Amnesty International (Ron, Ramos and Rodgers 2005) country reports. Poe, Carey and Vazquez (2001) find that, in the vast majority of cases, State Department and Amnesty International scores are equal, suggesting these measures are an unbiased assessment of human rights practices around the world. Cross-national surveys have shown that people accurately perceive their government's use of domestic repression.…”
Section: Domestic Repression (mentioning)
confidence: 91%
“…While most of the providers of original in-house coding (LIED, Polity, V-Dem) use multiple sources (which are generally unspecified), only CIRI makes use of the Country Reports on Human Rights Practices issued by the US State Department. 8 This means that validity depends to a very high degree on the representativeness and impartiality of a single source, one that has been accused of bias, especially in its early releases (see Innes, 1992; Poe, Carey, & Vazquez, 2001; Qian & Yanagizawa, 2009). 9 In the next step, raters can introduce random and systematic measurement errors by interpreting the sources differently, either because they base their evaluations on different pieces of relevant or irrelevant information, because they weight the same evidence differently, or because they have different understandings of the concepts and scales guiding the coding process.…”
Section: In-house Coded Data (mentioning)
confidence: 99%