2017
DOI: 10.1016/j.ecosta.2016.11.008

Neural nets for indirect inference

Abstract: This paper shows how neural networks may be used to approximate the limited-information posterior mean, E(θ|Zn), where θ is the parameter vector of a simulable model and Zn is a vector of statistics. Because the model is simulable, training and testing samples of arbitrary size may be generated, large enough to train a net that is sufficiently large, in terms of the number of hidden layers and neurons, to learn E(θ|Zn) with good accuracy. The output of the net can be used as an estimator of the parameter, or, following Ji…
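To make the abstract's idea concrete, here is a minimal sketch of the approach: simulate (θ, Zn) pairs from a toy model and fit a net to regress θ on Zn, so the fitted net approximates E(θ|Zn). The Gaussian location-scale simulator and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the paper's actual model or code.

```python
# Minimal sketch (assumptions: toy Gaussian model, scikit-learn MLP).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def statistics(theta, n=200):
    """Simulable toy model (hypothetical): y ~ N(theta[0], theta[1]^2).
    Z_n collects a few sample moments of the simulated data."""
    y = theta[0] + theta[1] * rng.standard_normal(n)
    return np.array([y.mean(), y.std(), np.abs(y - y.mean()).mean()])

# Generate a large training sample of (theta, Z_n) pairs: draw theta from
# a (hypothetical) uniform prior, then simulate statistics at each draw.
S = 20_000
thetas = np.column_stack([rng.uniform(-2, 2, S), rng.uniform(0.5, 3, S)])
Z = np.array([statistics(t) for t in thetas])

# Fit a net to regress theta on Z_n; the fitted mapping approximates the
# limited-information posterior mean E(theta | Z_n).
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(Z, thetas)

# Evaluated at the observed data's statistics, the net's output is an
# estimator of theta (here we fake "observed" data at theta = (1, 2)).
z_obs = statistics(np.array([1.0, 2.0]))
print(net.predict(z_obs.reshape(1, -1)))
```

Evaluated at the statistics of the observed data, the net's output serves as the point estimator described in the abstract.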

Cited by 12 publications (18 citation statements). References 22 publications (24 reference statements).
“…The dimension of the statistics used for estimation, Z, can be made minimal (equal to the dimension of the parameter to estimate, θ) by filtering an initial set of statistics, say W, through a trained neural net. Details of this process are explained in Creel (2017) and the references cited therein, and the process is made explicit in the code which accompanies this paper.³ A summary of this process is: suppose that W is a p-vector of statistics, W = W(Y), with p ≥ k, where k = dim θ.…”
Section: Neural Moments
confidence: 99%
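A rough sketch of this filtering step, under the same illustrative assumptions as above (toy simulator, scikit-learn; this is not the code that accompanies the cited paper): the net maps the p-vector W to a k-vector prediction of θ, and that output Z = net(W) is the minimal, just-identifying statistic.

```python
# Sketch of dimension reduction W (p-vector) -> Z (k-vector), p >= k.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def simulate_W(theta, n=200):
    """Hypothetical initial statistics W = W(Y): a redundant set of
    moments and quantiles of data simulated from a toy model."""
    y = theta[0] + theta[1] * rng.standard_normal(n)
    qs = np.quantile(y, [0.1, 0.25, 0.5, 0.75, 0.9])
    return np.concatenate([[y.mean(), y.std(), y.min(), y.max(),
                            np.abs(y).mean()], qs])      # p = 10

S = 20_000
thetas = np.column_stack([rng.uniform(-2, 2, S),
                          rng.uniform(0.5, 3, S)])       # k = 2
W = np.array([simulate_W(t) for t in thetas])

# Train a net to predict theta from W; its k-dimensional output, applied
# to any W, is the minimal statistic Z used for estimation.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(W, thetas)
Z = net.predict(W)                 # dim(Z) = k = dim(theta)
```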
“…This paper provides experimental evidence that confidence intervals derived from such estimators may have poor coverage when the moments over-identify the parameters, a result that parallels the above-cited results for classical GMM estimators. It goes on to provide evidence that the simulated neural moments introduced in Creel (2017), which are just-identifying, make inferences much more reliable when used with MSM-MCMC techniques, especially when the continuously updating version of the GMM criterion is used. This paper is a continuation of the line of research in Creel (2017), its main new contribution being the experimental confirmation that inferences based upon simulated neural moments are reliable.…”
Section: Introduction
confidence: 99%
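For concreteness, here is a hedged sketch of the continuously updating GMM criterion with simulated moments, as it would plug into an MSM-MCMC sampler. The toy neural_statistic stand-in and all names are illustrative assumptions, not the cited papers' code.

```python
# Sketch of the continuously updating (CU) GMM criterion with simulated
# (neural) moments; the weight matrix is re-estimated at each theta.
import numpy as np

rng = np.random.default_rng(2)

def neural_statistic(theta, n=200):
    """Stand-in for the trained net's output Z(theta) (hypothetical):
    sample mean and std of toy Gaussian data simulated at theta."""
    y = theta[0] + theta[1] * rng.standard_normal(n)
    return np.array([y.mean(), y.std()])

def cu_gmm_criterion(theta, z_obs, R=100):
    """CU criterion: moments are z_obs minus the average of R simulated
    statistics; the weight is the inverse covariance of the simulations."""
    sims = np.array([neural_statistic(theta) for _ in range(R)])
    m_bar = z_obs - sims.mean(axis=0)
    # (1 + 1/R) inflates the variance to account for simulation noise.
    sigma = np.cov(sims, rowvar=False) * (1.0 + 1.0 / R)
    return float(m_bar @ np.linalg.solve(sigma, m_bar))

# In MSM-MCMC, -0.5 * cu_gmm_criterion(theta, z_obs) plays the role of a
# log-likelihood (up to scaling) in a Metropolis-Hastings accept step.
z_obs = neural_statistic(np.array([1.0, 2.0]))
print(cu_gmm_criterion(np.array([1.0, 2.0]), z_obs))
```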
“…They attempted to use regularization methods for training the neural network but did not obtain significant improvement. Creel [31] also used a DNN to find the posterior mean based on a subset of predefined summary statistics rather than using the full dataset. However, the tuning of a large number of free parameters is still an issue in DNN-based ABC.…”
Section: Related Work
confidence: 99%
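As a final illustration of how a net-learned summary slots into ABC, here is a tiny rejection-ABC sketch. The simulator, the hand-coded summary standing in for a DNN's output, and the 1% tolerance are all assumptions made for illustration.

```python
# Rejection ABC using a (stand-in) learned summary statistic.
import numpy as np

rng = np.random.default_rng(3)

def summary(theta, n=200):
    """Hypothetical stand-in for a DNN-learned summary: the statistic a
    trained net would output for data simulated at theta."""
    y = theta[0] + theta[1] * rng.standard_normal(n)
    return np.array([y.mean(), y.std()])

s_obs = summary(np.array([1.0, 2.0]))   # summary of the "observed" data

# Keep the prior draws whose simulated summaries land closest to s_obs.
draws = np.column_stack([rng.uniform(-2, 2, 20_000),
                         rng.uniform(0.5, 3, 20_000)])
dists = np.array([np.linalg.norm(summary(t) - s_obs) for t in draws])
accepted = draws[dists < np.quantile(dists, 0.01)]   # closest 1%
print(accepted.mean(axis=0))   # ABC estimate of the posterior mean
```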