2011
DOI: 10.1007/s00181-011-0481-0

Propensity score matching and variations on the balancing test

Cited by 158 publications (91 citation statements)
References 39 publications
“…The off support observations are discarded for having poor or no matches. A standardized difference of 20 is considered to be large (Lee 2013; Rosenbaum and Rubin 1985). This means that the unmatched sample has large differences between the treated and untreated groups, and a simple comparison between the two may be inadequate.…”
Section: Results (mentioning)
confidence: 99%
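The standardized difference referred to in this excerpt is conventionally computed as the difference in covariate means between treated and untreated units, scaled by the pooled standard deviation and expressed in percent. The sketch below shows that calculation only as an illustration; the function name and the NumPy-based implementation are assumptions, not code from the cited papers.

```python
import numpy as np

def standardized_difference(x_treated, x_control):
    """Standardized difference (in percent) for one covariate,
    following the Rosenbaum and Rubin (1985) convention."""
    mean_t, mean_c = np.mean(x_treated), np.mean(x_control)
    var_t, var_c = np.var(x_treated, ddof=1), np.var(x_control, ddof=1)
    pooled_sd = np.sqrt((var_t + var_c) / 2.0)
    return 100.0 * (mean_t - mean_c) / pooled_sd

# Hypothetical data: a result above 20 is usually read as a large imbalance.
rng = np.random.default_rng(0)
treated = rng.normal(1.0, 2.0, size=200)   # covariate values, treated group
control = rng.normal(0.5, 2.0, size=400)   # covariate values, untreated group
print(standardized_difference(treated, control))
```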
“…We therefore examine the quality of the matching using four tests commonly implemented in the evaluation literature (see, e.g., Smith and Todd, 2005b; Lee, 2006; Girma and Görg, 2007; Caliendo and Kopeinig, 2008; Arnold and Javorcik, 2009). …”
Section: Appendix Assistance Effects With Continuous Outcome Variables (mentioning)
confidence: 99%
“…The average reduction ranges from 76.7% to 87.0%, depending on the estimator used. Further, even though there is no formal criterion for identifying a standardized bias as "large", following Rosenbaum and Rubin (1985) the usual practice is to consider biases above 20% as large (see, e.g., Smith and Todd, 2005b; Lee, 2006; Girma and Görg, 2007). As shown in the first panel of Table A1, the standardized differences after matching do not exceed this value for any variable.…”
Section: Appendix Assistance Effects With Continuous Outcome Variables (mentioning)
confidence: 99%
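The "average reduction" reported in this excerpt is typically the percentage drop in the absolute standardized bias when moving from the unmatched to the matched sample. A minimal sketch of that bookkeeping, reusing the hypothetical standardized_difference helper defined above (assumed names, not the authors' code):

```python
def bias_reduction(bias_before, bias_after):
    """Percentage reduction in absolute standardized bias after matching."""
    return 100.0 * (1.0 - abs(bias_after) / abs(bias_before))

# e.g. a drop from a standardized bias of 25 to 4 is an 84% reduction,
# in the same range as the 76.7%-87.0% reported in the excerpt above.
print(bias_reduction(25.0, 4.0))   # 84.0
```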
“…Additionally, Smith and Todd (2005b) criticize the use of balancing tests per se because they lack formal criteria for determining when the balance is sufficient. In line with this argument, Lee (2013) demonstrated that balancing tests display size problems. For the DW algorithm, he found that the t-test for balance led to rejection in 23.8% of tested cases, instead of the conventional 5%.…”
Section: The Dehejia and Wahba (2002) Algorithm For Reducing Misspecification (mentioning)
confidence: 65%
“…For the DW algorithm, he found that the t-test for balance led to rejection in 23.8% of tested cases, instead of the conventional 5%. To alleviate these high rejection rates, Lee (2013) developed a permutation version of the traditional t-test. This updated test leads to test sizes of 3.5% for the DW algorithm; thus, it is rather conservative.…”
Section: The Dehejia and Wahba (2002) Algorithm For Reducing Misspecification (mentioning)
confidence: 99%
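Lee's (2013) permutation variant of the balancing t-test is, in essence, a randomization test: the t-statistic for a covariate is recomputed under many random reassignments of the treatment label, and the p-value is the share of permuted statistics at least as extreme as the observed one. The sketch below illustrates only that generic idea under simplifying assumptions; it is not Lee's exact procedure.

```python
import numpy as np

def permutation_t_test(x_treated, x_control, n_perm=5000, seed=0):
    """Two-sample permutation test using the t-statistic as test statistic."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x_treated, x_control])
    n_t = len(x_treated)

    def t_stat(a, b):
        # Welch-style t-statistic for the difference in means
        return (np.mean(a) - np.mean(b)) / np.sqrt(
            np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b)
        )

    observed = t_stat(x_treated, x_control)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # reshuffle treatment labels
        if abs(t_stat(perm[:n_t], perm[n_t:])) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)           # two-sided permutation p-value
```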