Shannon, in his 1938 master's thesis, demonstrated that any Boolean function can be realized by a switching relay circuit, laying the groundwork for deterministic digital logic. Here, we replace each classical switch with a probabilistic switch (pswitch). We present algorithms for synthesizing circuits that close with a desired probability, including an algorithm that generates circuits of optimal size for any binary fraction. We also introduce a new duality property for series-parallel stochastic switching circuits. Finally, we construct a universal probability generator that maps deterministic inputs to arbitrary probabilistic outputs. Potential applications include the analysis and design of stochastic networks in biology and engineering.
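The series-parallel composition rules underlying such constructions can be sketched in a few lines (this shows only the composition algebra, not the paper's synthesis algorithm; the 3/8 example is our own illustration):

```python
from fractions import Fraction

def series(p, q):
    # a series connection is closed only if both switches are closed
    return p * q

def parallel(p, q):
    # a parallel connection is open only if both switches are open
    return 1 - (1 - p) * (1 - q)

half = Fraction(1, 2)
# realize closure probability 3/8 with three fair pswitches:
# a parallel pair (closed w.p. 3/4) in series with a third switch
prob = series(parallel(half, half), half)
print(prob)  # 3/8
```

Working over `Fraction` keeps the closure probabilities exact, which is natural here since every series-parallel circuit of fair pswitches closes with some binary-fraction probability.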
The ill‐posedness of the nonparametric instrumental variable (NPIV) model leads to estimators that may suffer from poor statistical performance. In this paper, we explore the possibility of imposing shape restrictions to improve the performance of the NPIV estimators. We assume that the function to be estimated is monotone and consider a sieve estimator that enforces this monotonicity constraint. We define a constrained measure of ill‐posedness that is relevant for the constrained estimator and show that, under a monotone IV assumption and certain other mild regularity conditions, this measure is bounded uniformly over the dimension of the sieve space. This finding is in stark contrast to the well‐known result that the unconstrained sieve measure of ill‐posedness that is relevant for the unconstrained estimator grows to infinity with the dimension of the sieve space. Based on this result, we derive a novel non‐asymptotic error bound for the constrained estimator. The bound gives a set of data‐generating processes for which the monotonicity constraint has a particularly strong regularization effect and considerably improves the performance of the estimator. The form of the bound implies that the regularization effect can be strong even in large samples and even if the function to be estimated is steep, particularly so if the NPIV model is severely ill‐posed. Our simulation study confirms these findings and reveals the potential for large performance gains from imposing the monotonicity constraint.
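A minimal sketch of a monotonicity-constrained sieve NPIV estimator in the spirit of the abstract (the simulation design, the increasing-step sieve, and the use of `scipy.optimize.lsq_linear` are all illustrative assumptions, not the paper's specification). With a basis of increasing step functions, monotonicity reduces to nonnegative step heights, so the constrained 2SLS problem becomes a bound-constrained least squares:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n = 2000
w = rng.uniform(0, 1, n)                 # instrument
v = rng.normal(0, 1, n)                  # common shock -> endogeneity
x = np.clip(w + 0.3 * v, 0, 1)           # endogenous regressor
y = np.sin(1.5 * x) + 0.5 * v + rng.normal(0, 0.3, n)  # g increasing on [0, 1]

knots = np.linspace(0.1, 0.9, 8)
B = np.column_stack([np.ones(n)] + [(x >= t).astype(float) for t in knots])
W = np.column_stack([np.ones(n)] + [(w >= t).astype(float) for t in knots])

# project onto the instrument space, then solve constrained least squares;
# nonnegative step heights enforce monotonicity of the fitted function
Pw = W @ np.linalg.pinv(W)
lb = np.r_[-np.inf, np.zeros(len(knots))]
res = lsq_linear(Pw @ B, Pw @ y, bounds=(lb, np.inf))
beta = res.x
```

The unconstrained version of this problem is where the ill-posedness bites; the bounds are the entire difference between the two estimators in this sketch.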
This paper provides a constructive argument for identification of nonparametric panel data models with measurement error in a continuous explanatory variable. The approach point identifies all structural elements of the model using only observations of the outcome and the mismeasured explanatory variable; no further external variables such as instruments are required. In the case of two time periods, restricting either the structural or the measurement error to be independent over time allows past explanatory variables or outcomes to serve as instruments. Time periods have to be linked through serial dependence in the latent explanatory variable, but the transition process is left nonparametric. The paper discusses the general identification result in the context of a nonlinear panel data regression model with additively separable fixed effects. It provides a nonparametric plug-in estimator, derives its uniform rate of convergence, and presents simulation evidence for good performance in finite samples.

* First version: January 31, 2011. I am particularly indebted to Susanne Schennach, Chris Hansen, Alan Bester, and Azeem Shaikh, and thank them for their advice, encouragement and comments. I also thank Stephane Bonhomme, Federico Bugni, Jean-Marie Dufour, Kirill Evdokimov, Jean-Pierre Florens, Xavier D'Haultfoeuille, James Heckman, Roger Koenker, Arthur Lewbel, Elie Tamer and participants of various conferences and seminars for helpful discussions. Remaining errors are my own. I gratefully acknowledge financial support from the ESRC Centre for Microdata Methods and Practice at IFS (ES/I0334021/1).
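In the simplest linear special case, the abstract's identification logic reduces to classical instrumental variables: with measurement errors independent across the two periods and serial dependence in the latent regressor, the period-1 measurement is a valid instrument for the period-2 one. A self-contained simulation (all parameter values are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
xstar1 = rng.normal(size=n)                            # latent regressor, period 1
xstar2 = 0.8 * xstar1 + rng.normal(scale=0.6, size=n)  # serial dependence in x*
x1 = xstar1 + rng.normal(size=n)                       # measurement errors drawn
x2 = xstar2 + rng.normal(size=n)                       #   independently across periods
y2 = 1.0 * xstar2 + rng.normal(size=n)                 # true slope is 1

# OLS on the mismeasured regressor is attenuated toward zero
beta_ols = np.cov(x2, y2)[0, 1] / np.var(x2)
# the lagged measurement is a valid instrument and recovers the slope
beta_iv = np.cov(x1, y2)[0, 1] / np.cov(x1, x2)[0, 1]
```

Here the population values are 0.5 for OLS and 1.0 for IV, matching the attenuation-bias formula; the paper's contribution is making this kind of argument work nonparametrically.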
We propose a simple model selection test for choosing between two parametric likelihoods which can be applied in the most general setting without any assumptions on the relation between the candidate models and the true distribution. That is, both, one, or neither candidate is allowed to be correctly specified or misspecified, and the models may be nested, non-nested, strictly non-nested, or overlapping. Unlike in previous testing approaches, no pre-testing is needed, since in each case the same test statistic together with a standard normal critical value can be used. The new procedure controls asymptotic size uniformly over a large class of data-generating processes. We demonstrate its finite-sample properties in a Monte Carlo experiment and its practical relevance in an empirical application comparing Keynesian versus new classical macroeconomic models.
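For intuition, the classical Vuong likelihood-ratio statistic that this line of work refines can be sketched as follows (this is the textbook statistic, not the paper's uniformly valid procedure; the normal-vs-Laplace comparison is our own example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 5000
data = rng.normal(size=n)   # truth: model 1 (normal) is correctly specified

# MLE fits for each candidate model
mu, sigma = data.mean(), data.std()
loc, b = np.median(data), np.mean(np.abs(data - np.median(data)))

# pointwise log-likelihood differences, studentized Vuong-style
d = stats.norm.logpdf(data, mu, sigma) - stats.laplace.logpdf(data, loc, b)
T = np.sqrt(n) * d.mean() / d.std()   # large positive T favors the normal model
```

The classical statistic needs pre-testing when the models overlap (the denominator can degenerate); the abstract's point is that its modified statistic avoids that step while keeping a standard normal critical value.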
Significance: Biological organisms exhibit sophisticated control over the stochastic states of individual cells, but the understanding of the underlying molecular mechanisms remains incomplete. It has been argued that unbiased choices are easy to achieve, but choices biased with specific probabilities are much harder. These natural phenomena raise an engineering challenge: Does there exist a simple method to program molecular systems that control arbitrary probabilities for individual molecular events? Here we present a molecular circuit architecture, built from a simple DNA strand-displacement building block that functions as an unbiased switch, for creating a circuit output with any desired probability. We constructed several DNA circuits with multiple layers and feedback, demonstrating complex molecular information processing that exploits the inherent stochasticity of molecular interactions.
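In software, the same idea — arbitrary output probabilities from a single unbiased switch — is the classic fair-bits construction: compare a stream of fair bits (a uniform number in [0,1)) against the binary expansion of the target probability and let the first disagreement decide. A sketch of that construction (our illustration, not the paper's DNA implementation):

```python
import random

def bernoulli_from_fair_switches(p, flip=lambda: random.getrandbits(1)):
    """Return 1 with probability p using only unbiased 0/1 flips.

    The flips spell out a uniform U in [0,1); we return 1 exactly when
    U < p, which is decided by the first bit where U and p disagree.
    """
    while True:
        p *= 2
        bit, p = (1, p - 1.0) if p >= 1.0 else (0, p)
        if flip() != bit:
            # flip < bit means U < p (return 1); flip > bit means U > p (return 0)
            return bit
```

Each call consumes about two fair bits on average regardless of p, which is why a single unbiased building block suffices for arbitrary probabilities.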
It is often desired to rank different populations according to the value of some feature of each population. For example, it may be desired to rank neighborhoods according to some measure of intergenerational mobility or countries according to some measure of academic achievement. These rankings are invariably computed using estimates rather than the true values of these features. As a result, there may be considerable uncertainty concerning the rank of each population. In this paper, we consider the problem of accounting for such uncertainty by constructing confidence sets for the rank of each population. We consider both the problem of constructing marginal confidence sets for the rank of a particular population as well as simultaneous confidence sets for the ranks of all populations. We show how to construct such confidence sets under weak assumptions. An important feature of all of our constructions is that they remain computationally feasible even when the number of populations is very large. We apply our theoretical results to re-examine the rankings of both neighborhoods in the United States in terms of intergenerational mobility and developed countries in terms of academic achievement. The conclusions about which countries do best and worst at reading, math, and science are fairly robust to accounting for uncertainty. By comparison, several celebrated findings about intergenerational mobility in the United States are not robust to taking uncertainty into account.
It is often desired to rank different populations according to the value of some feature of each population. For example, it may be desired to rank neighborhoods according to some measure of intergenerational mobility or countries according to some measure of academic achievement. These rankings are invariably computed using estimates rather than the true values of these features. As a result, there may be considerable uncertainty concerning the rank of each population. In this paper, we consider the problem of accounting for such uncertainty by constructing confidence sets for the rank of each population. We consider both the problem of constructing marginal confidence sets for the rank of a particular population as well as simultaneous confidence sets for the ranks of all populations. We show how to construct such confidence sets under weak assumptions. An important feature of all of our constructions is that they remain computationally feasible even when the number of populations is very large. We apply our theoretical results to re-examine the rankings of both neighborhoods in the United States in terms of intergenerational mobility and developed countries in terms of academic achievement. The conclusions about which countries do best and worst at reading, math, and science are fairly robust to accounting for uncertainty. The confidence sets for the ranking of the 50 most populous commuting zones by measures of mobility are also found to be small. These rankings, however, become much less informative if one includes all commuting zones, if one considers neighborhoods at a more granular level (counties, Census tracts), or if one uses movers across areas to address concerns about selection.
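A simplified version of a marginal confidence set for a single rank can be built from Bonferroni-corrected pairwise comparisons (a deliberately conservative sketch of the general idea; the constructions described in the abstract are tighter and also cover simultaneous sets):

```python
import numpy as np
from scipy.stats import norm

def rank_confidence_set(theta, se, j, alpha=0.05):
    """Marginal confidence set for the rank of population j (rank 1 = largest).

    Counts populations whose estimated feature is significantly above or
    below population j, with a Bonferroni correction over the p-1 pairwise
    comparisons; everything not clearly separated stays in the set.
    """
    theta, se = np.asarray(theta, float), np.asarray(se, float)
    p = len(theta)
    z = norm.ppf(1 - alpha / (2 * (p - 1)))
    diff = theta - theta[j]
    sed = np.sqrt(se**2 + se[j]**2)       # std. error of each pairwise difference
    n_above = np.sum(diff - z * sed > 0)  # clearly larger than population j
    n_below = np.sum(diff + z * sed < 0)  # clearly smaller than population j
    return (1 + n_above, p - n_below)
```

With well-separated estimates the set collapses to a single rank; with noisy estimates it widens toward the trivial set {1, ..., p}, which is exactly the informativeness question the empirical applications turn on.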
This paper proposes a simple nonparametric test of the hypothesis of no measurement error in explanatory variables and of the hypothesis that measurement error, if there is any, does not distort a given object of interest. We show that, under weak assumptions, both of these hypotheses are equivalent to certain restrictions on the joint distribution of an observable outcome and two observable variables that are related to the latent explanatory variable. Existing nonparametric tests for conditional independence can be used to directly test these restrictions without having to solve for the distribution of unobservables. In consequence, the test controls size under weak conditions and possesses power against a large class of nonclassical measurement error models, including many that are not identified. If the test detects measurement error, a multiple hypothesis testing procedure allows the researcher to recover subpopulations that are free from measurement error. Finally, we use the proposed methodology to study the reliability of administrative earnings records in the U.S., finding evidence for the presence of measurement error originating from young individuals with high earnings growth (in absolute terms).
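A linear caricature of the testable restriction: if the explanatory variable is measured without error, the outcome is independent of a second error-laden measurement given the first, so the second measurement receives a zero coefficient in a regression on both; under measurement error it does not. A simulation sketch (the setup and all numbers are our own assumptions, and the paper's actual test is nonparametric):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
xstar = rng.normal(size=n)            # latent explanatory variable
x2 = xstar + rng.normal(size=n)       # second, error-laden observable
y = xstar + 0.5 * rng.normal(size=n)  # outcome depends only on the latent variable

def coef_on_x2(x1):
    # OLS of y on (1, x1, x2); under H0 (x1 measured without error),
    # y is independent of x2 given x1, so this coefficient is ~0
    X = np.column_stack([np.ones(n), x1, x2])
    return np.linalg.lstsq(X, y, rcond=None)[0][2]

b_clean = coef_on_x2(xstar)                       # no measurement error in x1
b_noisy = coef_on_x2(xstar + rng.normal(size=n))  # mismeasured x1
```

In the mismeasured case the second measurement carries extra information about the latent variable, so its coefficient is bounded away from zero (population value 1/3 here), which is what a conditional independence test detects.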