2018
DOI: 10.48550/arxiv.1808.02569
Preprint
Machine Learning for Dynamic Discrete Choice

Abstract: Dynamic discrete choice models often discretize the state vector and restrict its dimension in order to achieve valid inference. I propose a novel two-stage estimator for the set-identified structural parameter that incorporates a high-dimensional state space into the dynamic model of imperfect competition. In the first stage, I estimate the state variable's law of motion and the equilibrium policy function using machine learning tools. In the second stage, I plug the first-stage estimates into a moment inequal…

Cited by 3 publications (6 citation statements). References 20 publications.
“…Second, if the required state space is indeed large as suggested by applied work, then applications of machine learning tools to conduct inference on dynamic games will be a promising direction. For example, Semenova (2018) extends the two-step set inference approach in Bajari et al. (2007), where the state space (and thus the dimension of p) is high-dimensional, by applying the Neyman-orthogonalized moment function and cross-fitting to deal with the bias in the first-stage estimation of p using machine learning methods (e.g., Chernozhukov et al., 2018). This methodology may be extended to other estimation strategies and accommodate unobserved heterogeneity and multiple equilibria.…”
Section: Discussion
confidence: 99%
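The excerpt above points to the Neyman-orthogonal moment and cross-fitting machinery of Chernozhukov et al. (2018). The sketch below illustrates only that generic machinery on a partially linear model; it is not the moment-inequality criterion of Semenova (2018), and the data-generating process, the choice of random forests, and all variable names are assumptions made for illustration.

```python
# Minimal cross-fitting sketch with a Neyman-orthogonal (partialled-out) score,
# in the spirit of Chernozhukov et al. (2018). Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, theta0 = 2_000, 50, 1.0
X = rng.normal(size=(n, p))                      # high-dimensional state
D = X[:, 0] + rng.normal(size=n)                 # variable whose coefficient we want
Y = theta0 * D + X[:, 0] ** 2 + rng.normal(size=n)

num, den = 0.0, 0.0
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Fit the nuisance functions on the training folds only (cross-fitting).
    m_hat = RandomForestRegressor().fit(X[train], D[train])   # E[D | X]
    l_hat = RandomForestRegressor().fit(X[train], Y[train])   # E[Y | X]
    # Evaluate the orthogonal score on the held-out fold.
    v = D[test] - m_hat.predict(X[test])
    u = Y[test] - l_hat.predict(X[test])
    num += v @ u
    den += v @ v

theta_hat = num / den                            # debiased estimate of theta0
print(theta_hat)
```

Because the score is orthogonal to the nuisance functions and each fold's nuisances are fit on the complementary data, first-stage machine learning bias enters the estimate only at second order; this is the mechanism the cited excerpt refers to.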
“…Norets (2012) uses an artificial neural network to estimate the value function, taking state variables and parameters of interest as inputs. Semenova (2018) offers a simulation-based method using machine learning. She estimates the state transition and decision probabilities with machine learning models in the first stage and uses them to find the underlying decision parameters in the second stage.…”
Section: Related Literature
confidence: 99%
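As a rough illustration of the first stage described in this excerpt, one could fit the decision probabilities with a classifier and the state transition with a regressor, then hand the fitted objects to a second-stage routine. Everything below, including the simulated data, the choice of gradient boosting, and the placeholder second-stage comment, is an assumption for exposition rather than the paper's implementation.

```python
# First-stage sketch: machine learning estimates of the choice probabilities
# and (one coordinate of) the state transition. Illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(1)
n, p = 3_000, 100
S = rng.normal(size=(n, p))                               # observed states
A = rng.binomial(1, 1 / (1 + np.exp(-S[:, 0])))           # observed binary actions
S1_next = 0.8 * S[:, 0] + 0.2 * A + rng.normal(scale=0.1, size=n)  # next-period coordinate

ccp_hat = GradientBoostingClassifier().fit(S, A)                      # decision probabilities
transition_hat = GradientBoostingRegressor().fit(np.column_stack([S, A]), S1_next)

# Second stage (schematic): simulate forward paths and value functions from
# ccp_hat and transition_hat, then search for the structural decision
# parameters that rationalize the observed choices.
```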
“…In addition to the accuracy concerns, model specification choices such as selecting the appropriate covariates and discretizing the state space are challenging, especially in complex and high-dimensional settings. Researchers usually have little intuition about which covariates to select or how to discretize the state space (Semenova, 2018). As a result, many estimation approaches avoid this trade-off by proposing value function estimation procedures that do not require solving a dynamic programming problem (Hotz and Miller, 1993; Hotz et al., 1994; Keane and Wolpin, 1994; Norets, 2012; Arcidiacono et al., 2013; Semenova, 2018).…”
Section: Introduction
confidence: 99%
“…Most of those approaches are related to the literature on dynamic discrete choice models (see Aguirregabiria and Mira (2010) for a survey, or Semenova (2018) for connections with machine learning tools). In those models, there is a finite set of possible actions A, as also assumed in the previous descriptions, and they focus on the conditional choice probability, i.e., the probability that choosing a ∈ A is optimal in state s ∈ S: ccp(a|s) = P[a is optimal in state s] = P{Q(a, s) ≥ Q(a′, s), ∀a′ ∈ A}.…”
Section: Inverse Reinforcement Learning
confidence: 99%
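To make the conditional choice probability in the excerpt concrete: under the common assumption (not stated in the excerpt itself) that each action's payoff receives an additive i.i.d. type-1 extreme value shock, the probability that action a maximizes Q(a′, s) plus its shock is the softmax of the Q values. The numbers below are made up; the Monte Carlo simply checks the closed form against simulated argmax choices.

```python
# Conditional choice probabilities under additive Gumbel (type-1 EV) shocks:
# closed-form softmax vs. a Monte Carlo simulation of the argmax rule.
import numpy as np

rng = np.random.default_rng(2)
Q = np.array([1.0, 0.3, -0.5])                       # Q(a, s) for a fixed state s

ccp_closed_form = np.exp(Q) / np.exp(Q).sum()        # logit / softmax formula

utilities = Q + rng.gumbel(size=(200_000, Q.size))   # Q(a, s) + eps_a, iid Gumbel shocks
choices = utilities.argmax(axis=1)                   # a is chosen when it attains the max
ccp_simulated = np.bincount(choices, minlength=Q.size) / utilities.shape[0]

print(ccp_closed_form)
print(ccp_simulated)                                  # should agree closely
```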