Though commanding a prominent role in political theory, deliberative democracy has also become a mainstay of myriad other research traditions in recent years. This diffusion has been propelled by the notion that deliberation, properly conceived and enacted, generates many beneficial outcomes. This article has three goals geared toward understanding whether these instrumental benefits provide us with good reasons, beyond intrinsic ones, to be deliberative democrats. First, the proclaimed instrumental benefits are systematized in terms of micro, meso, and macro outcomes. Second, relevant literatures are canvassed to critically assess what we know, and what we do not know, about deliberation's effects. Finally, the instrumental benefits of deliberation are recast in light of the ongoing systemic turn in deliberative theory. This article adds to our theoretical understanding of deliberation's promises and pitfalls, and helps practitioners identify gaps in our knowledge concerning how deliberation works and what its wider societal implications might be.
Quantitative comparative social scientists have long worried about the performance of multilevel models when the number of upper-level units is small. Adding to these concerns, an influential Monte Carlo study by Stegmueller (2013) suggests that standard maximum-likelihood (ML) methods yield biased point estimates and severely anti-conservative inference with few upper-level units. In this article, the authors seek to rectify this negative assessment. First, they show that ML estimators of coefficients are unbiased in linear multilevel models. The apparent bias in coefficient estimates found by Stegmueller can be attributed to Monte Carlo error and a flaw in the design of his simulation study. Second, they demonstrate how inferential problems can be overcome by using restricted ML estimators for variance parameters and a t-distribution with appropriate degrees of freedom for statistical inference. Thus, accurate multilevel analysis is possible within the framework that most practitioners are familiar with, even if there are only a few upper-level units.
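A minimal Python sketch of the recommended workflow, using statsmodels' MixedLM: restricted ML for the variance parameters, then t-based inference with a conservative degrees-of-freedom rule. The simulated data, variable names, and the simple "groups minus one" df rule are illustrative assumptions, not the authors' exact procedure; Satterthwaite or Kenward-Roger approximations would be common alternatives, which statsmodels does not implement.

```python
# Sketch: REML estimation plus t-based inference for a two-level
# linear model with few upper-level units. Data setup and the df
# rule below are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 15 upper-level units ("countries") with 50 observations each.
m, n_per = 15, 50
country = np.repeat(np.arange(m), n_per)
u = rng.normal(0, 1, m)                 # random intercepts
x = rng.normal(0, 1, m * n_per)         # lower-level predictor
y = 0.5 * x + u[country] + rng.normal(0, 1, m * n_per)
data = pd.DataFrame({"y": y, "x": x, "country": country})

# Restricted ML (the statsmodels default) for variance parameters.
fit = smf.mixedlm("y ~ x", data, groups="country").fit(reml=True)

# t-based inference with a conservative df choice: number of
# upper-level units minus one (no upper-level predictors here).
df = m - 1
for name in ["Intercept", "x"]:
    t_stat = fit.params[name] / fit.bse[name]
    p = 2 * stats.t.sf(abs(t_stat), df)
    print(f"{name}: b = {fit.params[name]:.3f}, t = {t_stat:.2f}, p = {p:.3f}")
```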
How should small-n researchers aggregate the information collected during their research in an effort to measure the relevant theoretical concepts with high levels of validity and reliability? This article focuses on the method of triangulation, which is frequently used in process-tracing approaches. We introduce and theorise different aggregation strategies commonly used in triangulation, such as weighted and simple averages or the 'winner takes it all' strategy. We then use computer simulations to evaluate how prone each strategy is to measurement error. Our simulation results show that averaging different information sources generally outperforms the other aggregation strategies. However, this is not the case if poorly informed sources are biased in a similar direction; in these situations the 'winner takes it all' strategy performs best.
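A rough Python sketch of the kind of simulation described here. The error distributions, the shared-bias scenario, and the precision-based weights are illustrative assumptions, not the authors' exact design:

```python
# Sketch of a triangulation simulation: several "sources" measure the
# same latent quantity with differing noise and bias, and aggregation
# strategies are compared by RMSE. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_sims, truth = 10_000, 1.0

# Three sources: one well-informed, two noisy; the noisy sources
# share a common bias (the scenario where averaging fails). Set
# bias to zeros for the unbiased case.
sd = np.array([0.2, 1.0, 1.0])
bias = np.array([0.0, 0.5, 0.5])

draws = truth + bias + rng.normal(0, sd, size=(n_sims, 3))

simple_avg = draws.mean(axis=1)
weights = (1 / sd**2) / (1 / sd**2).sum()   # precision weighting
weighted_avg = draws @ weights
winner = draws[:, np.argmin(sd)]            # trust only the best source

for name, est in [("simple average", simple_avg),
                  ("weighted average", weighted_avg),
                  ("winner takes it all", winner)]:
    rmse = np.sqrt(np.mean((est - truth) ** 2))
    print(f"{name}: RMSE = {rmse:.3f}")
```

Running this with the shared bias switched on reproduces the abstract's qualitative pattern: the 'winner takes it all' strategy beats both averages, while with unbiased sources the averages win.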
Comparative political science has long worried about the performance of multilevel models when the number of upper-level units is small. Exacerbating these concerns, an influential Monte Carlo study by Stegmueller (2013) suggests that frequentist methods yield biased estimates and severely anti-conservative inference with small upper-level samples. Stegmueller recommends Bayesian techniques, which he claims to be superior in terms of both bias and inferential accuracy. In this paper, we reassess and refute these results. First, we formally prove that frequentist maximum likelihood estimators of coefficients are unbiased. The apparent bias found by Stegmueller is simply a manifestation of Monte Carlo error. Second, we show how inferential problems can be overcome by using restricted maximum likelihood estimators for variance parameters and a t-distribution with appropriate degrees of freedom for statistical inference. Thus, accurate multilevel analysis is possible without turning to Bayesian methods, even if the number of upper-level units is small.
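To illustrate the Monte Carlo error point, a small hedged sketch (the simulation sizes are arbitrary assumptions): even a perfectly unbiased estimator yields a mean across a finite number of replications that deviates from the truth, and the plausible size of that deviation is the Monte Carlo standard error, the standard deviation of the estimates divided by the square root of the number of replications.

```python
# Sketch: apparent "bias" from finite simulation runs. The sample
# mean is an unbiased estimator, yet its average over a limited
# number of Monte Carlo replications rarely equals the truth exactly.
import numpy as np

rng = np.random.default_rng(1)
truth, n_sims, n_obs = 0.5, 1_000, 30

# One estimate per replication: the sample mean of n_obs draws.
estimates = rng.normal(truth, 1.0, size=(n_sims, n_obs)).mean(axis=1)

apparent_bias = estimates.mean() - truth
mc_error = estimates.std(ddof=1) / np.sqrt(n_sims)

print(f"apparent bias: {apparent_bias:+.4f}")
print(f"Monte Carlo standard error: {mc_error:.4f}")
# The apparent bias typically falls within ~2 Monte Carlo standard
# errors of zero, so it is simulation noise, not systematic bias.
```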