Background—
The implementation of Target: Stroke Phase I, the first stage of the American Heart Association’s national quality improvement initiative to accelerate door-to-needle (DTN) times, was associated with an average 15-minute reduction in DTN times. Target: Stroke Phase II was launched in April 2014 with the goal of promoting further reductions in treatment times for tissue-type plasminogen activator (tPA) administration.
Methods and Results—
We conducted a second survey of Get With The Guidelines-Stroke hospitals regarding strategies used to reduce delays after Target: Stroke and quantified their association with DTN times. A total of 16 901 ischemic stroke patients were treated with intravenous tPA within 4.5 hours of symptom onset at 888 surveyed hospitals between June 2014 and April 2015. The patient-level median DTN time was 56 minutes (interquartile range, 42–75), with 59.3% of patients receiving intravenous tPA within 60 minutes and 30.4% within 45 minutes after hospital arrival. Most hospitals reported routinely using a majority of Target: Stroke key practice strategies, although direct transport of patients to the computed tomographic/magnetic resonance imaging scanner, premixing tPA ahead of time, initiation of tPA in the brain imaging suite, and prompt data feedback to emergency medical services providers were used less frequently. Overall, we identified 16 strategies associated with significant reductions in DTN times. Combined, a total of 20 minutes (95% confidence interval, 15–25 minutes) could be saved if all strategies were implemented.
Conclusions—
Get With The Guidelines-Stroke hospitals have initiated a majority of the Target: Stroke–recommended strategies to reduce DTN times in acute ischemic stroke. Nevertheless, certain strategies were infrequently practiced and represent immediate targets for further improvement.
This paper establishes conditions for nonparametric identification of dynamic optimization models in which agents make both discrete and continuous choices. We consider identification of both the payoff function and the distribution of unobservables. Models of this kind are prevalent in applied microeconomics and many of the required conditions are standard assumptions currently used in empirical work. We focus on conditions on the model that can be implied by economic theory and assumptions about the data generating process that are likely to be satisfied in a typical application. Our analysis is intended to highlight the identifying power of each assumption individually, where possible, and our proofs are constructive in nature.
In this paper, non-linear least squares (NLLS) estimators are proposed for semiparametric binary response models under conditional median restrictions. The estimators can be identical to NLLS procedures for parametric binary response models (e.g., probit) and consequently have the advantage of being easily implementable using standard software packages such as Stata. This is in contrast to existing estimators for the model, such as the maximum score estimator and the smoothed maximum score (SMS) estimator. Two simple bias correction methods, a proposed jackknife method and an alternative non-linear regression function, result in the same rate of convergence as SMS. The results from a Monte Carlo study show that the new estimators perform well in finite samples.
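To illustrate the implementability claim, the following is a minimal Python sketch of a probit-link NLLS estimator for a binary response model. It is a sketch under stated assumptions, not the paper's exact procedure: the simulated design, the scale normalization, and the Nelder-Mead optimizer are illustrative choices, and the paper's bias corrections are omitted.

```python
# Minimal sketch: non-linear least squares with a probit link for a
# binary response model. Illustrative only; the design and optimizer
# are assumptions, and the bias corrections are not implemented.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def nlls_binary(y, X):
    """Minimize sum_i (y_i - Phi(x_i'b))^2 over b."""
    def objective(b):
        return np.sum((y - norm.cdf(X @ b)) ** 2)
    res = minimize(objective, np.zeros(X.shape[1]), method="Nelder-Mead")
    return res.x

# Simulated example with median-zero but non-normal errors.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = rng.standard_t(df=3, size=n)            # symmetric, median zero
y = (X @ np.array([0.5, 1.0]) + u > 0).astype(float)
print(nlls_binary(y, X))
```

Under a conditional median restriction only the direction of the coefficient vector is identified, so in practice a scale normalization would be imposed on the estimate.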
This paper develops estimators for dynamic microeconomic models with serially correlated unobserved state variables using sequential Monte Carlo methods to estimate the parameters and the distribution of the unobservables. If persistent unobservables are ignored, the estimates can be subject to a dynamic form of sample selection bias. We focus on single-agent dynamic discrete-choice models and dynamic games of incomplete information. We propose a full-solution maximum likelihood procedure and a two-step method and use them to estimate an extended version of the capital replacement model of Rust with the original data and in a Monte Carlo study.
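As a concrete illustration of the sequential Monte Carlo step, here is a minimal bootstrap particle filter in Python for a toy model with a serially correlated (AR(1)) unobserved state observed with Gaussian noise. The state equation, densities, and parameter names are illustrative assumptions, not the specification estimated in the paper.

```python
# Minimal bootstrap particle filter sketch: latent AR(1) state x_t,
# observation y_t = x_t + noise. Toy model for illustration only.
import numpy as np

def particle_filter_loglik(y, rho, sig_x, sig_y, n_particles=500, seed=0):
    """Approximate log-likelihood of y via sequential importance resampling."""
    rng = np.random.default_rng(seed)
    # Start particles from the stationary distribution of the AR(1) state.
    x = rng.normal(0.0, sig_x / np.sqrt(1.0 - rho**2), n_particles)
    loglik = 0.0
    for yt in y:
        # Propagate each particle through the state transition.
        x = rho * x + rng.normal(0.0, sig_x, n_particles)
        # Importance weights from the observation density p(y_t | x_t).
        w = np.exp(-0.5 * ((yt - x) / sig_y) ** 2) / (sig_y * np.sqrt(2 * np.pi))
        loglik += np.log(w.mean())
        # Resample particles to avoid weight degeneracy.
        x = rng.choice(x, size=n_particles, p=w / w.sum())
    return loglik
```

Nesting this filtered likelihood inside an outer optimizer over the model parameters corresponds, loosely, to the full-solution maximum likelihood idea described above.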
We develop and estimate a dynamic game of strategic firm expansion and contraction decisions to study the role of firm size in future profitability and market dominance. Modeling firm size is important because retail chain dynamics are driven more by expansion and contraction than by de novo entry or permanent exit. Additionally, anticipated size spillovers may influence the strategies of forward-looking firms, making it difficult to analyze the effects of size without explicitly accounting for these in the expectations, and hence the decisions, of firms. Expansion may also be profitable for some firms while detrimental to others. Thus, we explicitly model and allow for heterogeneity in the dynamic link between firm size and profits, as well as the potential for persistent brand effects, through a firm-specific unobservable. As a methodological contribution, we surmount the hurdle of estimating the model by extending the Bajari, Benkard, and Levin (2007) two-step procedure that circumvents solving the game. The first stage combines semi-parametric conditional choice probability estimation with a particle filter to integrate out the serially correlated unobservables. The second stage uses a forward simulation approach to estimate the payoff parameters. Data on Canadian hamburger chains from their inception in 1970 to 2005 provide evidence of firm-specific heterogeneity in brand effects, size spillovers, and persistence in profitability. This heterogeneous dynamic linkage shows how McDonald's becomes dominant and other chains falter as they evolve, thus affecting market structure and industry concentration.
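The forward-simulation second stage can be sketched in a few lines, assuming generic finite state and action spaces. The array layout, discount factor, and function names below are placeholders for exposition, not the authors' extended procedure; in particular, the particle-filter first stage is taken as given here.

```python
# Minimal forward-simulation sketch: average discounted payoffs under
# first-stage choice probabilities. All objects are illustrative.
import numpy as np

def forward_simulate_value(policy, transition, payoff, s0, beta=0.95,
                           horizon=200, n_sims=1000, seed=0):
    """Average discounted payoff from state s0 under `policy`.
    policy[s] is the choice distribution in state s;
    transition[s, a] is the distribution over next states;
    payoff[s, a] is the per-period payoff."""
    rng = np.random.default_rng(seed)
    n_states, n_actions = policy.shape
    values = np.zeros(n_sims)
    for i in range(n_sims):
        s, disc = s0, 1.0
        for _ in range(horizon):
            a = rng.choice(n_actions, p=policy[s])       # simulate a choice
            values[i] += disc * payoff[s, a]             # accumulate payoff
            s = rng.choice(n_states, p=transition[s, a]) # draw next state
            disc *= beta
    return values.mean()
```

In a two-step procedure of this kind, simulated values like these are compared across alternative policies to recover the payoff parameters without solving the game.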
This paper provides a method for estimating large-scale dynamic discrete choice models (in both single- and multi-agent settings) within a continuous time framework. The advantage of working in continuous time is that state changes occur sequentially, rather than simultaneously, avoiding a substantial curse of dimensionality that arises in multi-agent settings. Eliminating this computational bottleneck is the key to providing a seamless link between estimating the model and performing post-estimation counterfactuals. While recently developed two-step estimation techniques have made it possible to estimate large-scale problems, solving for equilibria remains computationally challenging. In many cases, the models that applied researchers estimate do not match the models that are then used to perform counterfactuals. By modeling decisions in continuous time, we are able to take advantage of the recent advances in estimation while preserving a tight link between estimation and policy experiments. We also consider estimation in situations with imperfectly sampled data, such as when we do not observe the decision not to move, or when data are aggregated over time, such as when only discrete-time data are available at regularly spaced intervals. We illustrate the power of our framework using several large-scale Monte Carlo experiments.
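To see why continuous time removes simultaneous moves, consider this minimal Python sketch in which each agent's decision opportunities arrive via an independent Poisson clock, so state changes occur one at a time. Constant arrival rates and the external `decide` rule are simplifying assumptions for illustration, not the paper's model.

```python
# Minimal sketch: competing Poisson clocks generate sequential moves.
# Constant rates and the `decide` callback are illustrative.
import numpy as np

def simulate_events(rates, decide, state, t_max, seed=0):
    """Simulate the event history [(time, agent, new_state), ...] up to t_max.
    rates[i] is agent i's move-arrival rate; decide(i, state) returns the
    state after agent i moves."""
    rng = np.random.default_rng(seed)
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    t, history = 0.0, []
    while True:
        t += rng.exponential(1.0 / total)            # waiting time to next event
        if t > t_max:
            return history
        i = rng.choice(len(rates), p=rates / total)  # which agent's clock rings
        state = decide(i, state)                     # one state change at a time
        history.append((t, i, state))

# Usage: two agents, the second moving half as often, each move adding 1.
print(simulate_events([1.0, 0.5], lambda i, s: s + 1, 0, t_max=5.0))
```

Because only one clock rings at a time (simultaneous arrivals have probability zero), each agent's best response conditions on a single mover, which is the source of the dimensionality savings described above.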