Modern studies of legislative behavior focus on the relationship among the policy preferences of legislators, institutional arrangements, and legislative outcomes. In spatial models of legislatures, policies are represented geometrically, as points in a low-dimensional Euclidean space. Each legislator has a most preferred policy, or ideal point, in this space, and his or her utility for a policy declines with the distance of the policy from that ideal point; see Davis, Hinich, and Ordeshook (1970) for an early survey. The primary use of roll call data (the recorded votes of deliberative bodies) is the estimation of ideal points. The appeal and importance of ideal point estimation arise in two ways. First, ideal point estimates let us describe legislators and legislatures. The distribution of ideal point estimates reveals how cleavages between legislators reflect partisan affiliation or region, or become more polarized over time (e.g., McCarty, Poole, and Rosenthal 2001). Interest groups such as Americans for Democratic Action, the National Taxpayers Union, and the Sierra Club use roll call data for similar purposes, producing "ratings" of legislators along different policy dimensions. Second, estimates from roll call analysis can be used to test theories of legislative behavior (e.g., Voeten 2000). In short, roll call analysis makes conjectures about legislative behavior amenable to quantitative analysis, helping make the study of legislative politics an empirically grounded, cumulative body of scientific knowledge. Current methods of estimating ideal points in political science suffer from both statistical and theoretical deficiencies. First, any method of ideal point estimation embodies an explicit or implicit model of legislative behavior.
Generally, it is inappropriate to use ideal points estimated under one set of assumptions (such as sincere voting over a unidimensional policy space) to test a different behavioral model (such as log-rolling). Second, the computations required for estimating even the simplest roll call model are very difficult, and extending these models to incorporate more realistic behavioral assumptions is nearly impossible with extant methods. Finally, the statistical basis of current methods for ideal point estimation is, to be polite, questionable. Roll call analysis involves very large numbers of parameters, since each legislator has an ideal point and each bill has a policy location that must be estimated. Popular methods of roll call analysis compute standard errors that are admittedly invalid (Poole and Rosenthal 1997, 246), and one cannot appeal to standard statistical theory to ensure the consistency and other properties of estimators (we revisit this point below). In this paper we develop and illustrate Bayesian methods for ideal point estimation and the analysis of...
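The spatial voting model underlying roll call analysis is commonly formalized as a two-parameter probit item-response model, in which the probability that legislator i votes yea on bill j depends on the legislator's ideal point and the bill's difficulty and discrimination parameters. The sketch below illustrates that setup; the particular parameterization (beta * x - alpha) and all data values are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def vote_prob(x, alpha, beta):
    """P(yea) under a two-parameter probit item-response model:
    ideal point x, bill difficulty alpha, bill discrimination beta."""
    return norm.cdf(beta * x - alpha)

def log_likelihood(votes, x, alpha, beta):
    """Log-likelihood of an n-legislator by m-bill roll call matrix
    (1 = yea, 0 = nay), assuming votes are independent given parameters."""
    p = vote_prob(x[:, None], alpha[None, :], beta[None, :])
    return np.sum(votes * np.log(p) + (1 - votes) * np.log1p(-p))

# Invented example: 3 legislators, 2 bills
x = np.array([-1.0, 0.0, 1.0])            # ideal points
alpha = np.array([0.0, 0.5])              # bill difficulty parameters
beta = np.array([1.0, -1.0])              # bill discrimination parameters
votes = np.array([[0, 1], [1, 0], [1, 0]])
print(log_likelihood(votes, x, alpha, beta))
```

Because every legislator and every bill contributes parameters, the parameter count grows with both dimensions of the vote matrix, which is the source of the statistical difficulties noted above.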
A two-step maximum likelihood procedure is proposed for estimating simultaneous probit models and is compared to alternative limited-information estimators. Conditions under which these estimators attain the Cramér-Rao lower bound are stated. Simple tests of exogeneity are proposed and are shown to be asymptotically equivalent to one another and to have the same local asymptotic power as classical tests based on the limited-information maximum likelihood estimator.
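A stylized two-step estimator of this flavor can be sketched as a control-function procedure: regress the endogenous regressor on the instrument, then include the first-stage residuals in the probit, where a nonzero residual coefficient signals endogeneity. This is only an illustration under invented simulated data, not the paper's exact estimator or test statistics.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated system (all values invented): y2 is endogenous in the probit for y1
rng = np.random.default_rng(2)
n = 4000
z = rng.normal(size=n)                                  # instrument
v = rng.normal(size=n)                                  # first-stage error
u = 0.5 * v + rng.normal(scale=np.sqrt(0.75), size=n)   # correlated structural error
y2 = z + v                                              # endogenous regressor
y1 = (0.5 * y2 + u > 0).astype(float)                   # observed binary outcome

# Step 1: reduced-form OLS of y2 on the instrument; keep the residuals
Z = np.column_stack([np.ones(n), z])
pi, *_ = np.linalg.lstsq(Z, y2, rcond=None)
vhat = y2 - Z @ pi

# Step 2: probit of y1 on y2 and the first-stage residuals. A nonzero
# coefficient on vhat signals endogeneity, yielding a simple exogeneity test.
def negll(b):
    p = np.clip(norm.cdf(b[0] + b[1] * y2 + b[2] * vhat), 1e-10, 1 - 1e-10)
    return -np.sum(y1 * np.log(p) + (1 - y1) * np.log(1 - p))

res = minimize(negll, np.zeros(3), method="BFGS")
print(res.x)   # the coefficient on vhat is clearly nonzero in this design
```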
This paper generalizes Vuong's (1989) asymptotically normal tests for model selection in several important directions. First, it allows for incompletely parametrized models, such as econometric models defined by moment conditions. Second, it allows for a broad class of estimation methods that includes most estimators currently used in practice. Third, it considers model selection criteria other than the models' likelihoods, such as the mean squared errors of prediction. Fourth, the proposed tests are applicable to possibly misspecified nonlinear dynamic models with weakly dependent heterogeneous data. Cases where the estimation methods optimize the model selection criteria are distinguished from cases where they do not. We also consider the estimation of the asymptotic variance of the difference between the competing models' selection criteria, which our tests require. Finally, we discuss conditions under which our tests are valid; it is seen that the competing models must be essentially nonnested.
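In its simplest likelihood-based form, a Vuong-type statistic standardizes the sum of pointwise log-likelihood differences between two nonnested models and is asymptotically standard normal under the null that both fit equally well. The sketch below shows that baseline computation only; the per-observation log-likelihoods are invented, and the paper's generalizations (moment conditions, other selection criteria, dependent data) require different variance estimators.

```python
import numpy as np

def vuong_statistic(ll1, ll2):
    """Simple Vuong-type statistic from the pointwise log-likelihoods of two
    nonnested models; approximately N(0, 1) under the null of equal fit."""
    d = np.asarray(ll1) - np.asarray(ll2)
    n = d.size
    return d.sum() / (np.sqrt(n) * d.std(ddof=1))

# Invented per-observation log-likelihoods for two competing models
rng = np.random.default_rng(0)
ll1 = rng.normal(-1.0, 0.3, size=200)
ll2 = rng.normal(-1.2, 0.3, size=200)
print(vuong_statistic(ll1, ll2))   # large positive value favors model 1
```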
The General Agreement on Tariffs and Trade (GATT) and the World Trade Organization (WTO) have been touted as premier examples of international institutions, but few studies have offered empirical proof. This article comprehensively evaluates the effects of the GATT/WTO and other trade agreements since World War II. Our analysis is organized around two factors: institutional standing and institutional embeddedness. We show that many countries had rights and obligations, or institutional standing, in the GATT/WTO even though they were not formal members of the agreement. We also expand the analysis to include a range of other commercial agreements that were embedded with the GATT/WTO. Using data on dyadic trade since 1946, we demonstrate that the GATT/WTO substantially increased trade for countries with institutional standing, and that other embedded agreements had similarly positive effects. Moreover, our evidence suggests that international trade agreements have complemented, rather than undercut, each other. When and how do international institutions promote cooperation? Few questions are as fundamental to international relations or as salient for world leaders. Due to the contributions of Keohane and others, we now have sophisticated theories about the emergence and effects of international institutions, but empirical research has not proceeded apace.
ABSTRACT In 2006 Polimetrix, Inc. of Palo Alto, CA, fielded the Cooperative Congressional Election Study (CCES), the largest study of Congressional elections ever fielded in the US. The project was a joint venture of 38 universities and over 100 political scientists. In this paper, we detail the design and execution of the project, with special attention to the method by which the sample was generated. We show that the estimates from the Common Content of the CCES outperform conventional estimates based on RDD phone surveys. We also argue that opt-in panels, internet surveys, and cooperative ventures like the CCES provide cost-effective alternatives for social scientists under certain conditions. These types of surveys can provide reductions in RMSE over conventional methods when sample matching is used to ameliorate the biases that come with sampling from an opt-in panel.
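The core of sample matching can be sketched as nearest-neighbor matching: for each unit drawn from a target sampling frame, select the closest opt-in panelist on a set of covariates. The covariates, distance metric, and data below are invented for illustration; operational implementations add refinements such as matching without replacement and post-matching weighting.

```python
import numpy as np

def sample_match(target, panel):
    """For each row of the target frame, return the index of the nearest
    opt-in panelist by Euclidean distance on standardized covariates."""
    mu, sd = panel.mean(axis=0), panel.std(axis=0)
    t = (target - mu) / sd
    p = (panel - mu) / sd
    d = ((t[:, None, :] - p[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Invented covariates: 100 target-frame units, 1000 opt-in panelists
rng = np.random.default_rng(3)
target = rng.normal(size=(100, 3))
panel = rng.normal(size=(1000, 3))
idx = sample_match(target, panel)
print(idx[:10])
```

The matched panelists then serve as the interviewed sample, mimicking the covariate distribution of the target frame.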
SELECTION BIAS IN LINEAR REGRESSION, LOGIT AND PROBIT MODELS
Missing data are common in observational studies due to self-selection of subjects. Missing data can bias estimates of linear regression and related models. The nature of selection bias and econometric methods for correcting it are described. The econometric approach relies upon a specification of the selection mechanism. We extend this approach to binary logit and probit models and provide a simple test for selection bias in these models. An analysis of candidate preference in the 1984 U.S. presidential election illustrates the technique.
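For the linear-regression case, the classic two-step correction adds the inverse Mills ratio from a selection equation as an extra regressor. The sketch below uses invented simulated data and, for brevity, plugs in the true selection-equation coefficients in step 1 (in practice they are estimated by probit maximum likelihood); it illustrates the mechanics, not this paper's logit/probit extension.

```python
import numpy as np
from scipy.stats import norm

def inverse_mills(z):
    """Inverse Mills ratio phi(z) / Phi(z), the correction term in the
    two-step selection estimator."""
    return norm.pdf(z) / norm.cdf(z)

# Simulated data (all values invented): y is observed only when s is True
rng = np.random.default_rng(1)
n = 5000
w = rng.normal(size=n)                          # selection-equation covariate
x = rng.normal(size=n)                          # outcome-equation covariate
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
s = 0.5 + w + u > 0                             # who is observed
y = 1.0 + 2.0 * x + e                           # outcome of interest

# Step 1: selection index (true coefficients used here for brevity)
lam = inverse_mills(0.5 + w)

# Step 2: OLS of y on x and the inverse Mills ratio, selected sample only
X = np.column_stack([np.ones(n), x, lam])[s]
beta, *_ = np.linalg.lstsq(X, y[s], rcond=None)
print(beta)   # slope on x near 2; coefficient on lam near cov(u, e) = 0.6
```

Omitting the inverse Mills ratio term from step 2 leaves the estimates subject to the selection bias the correction is designed to remove.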