This paper shows how neural networks may be used to approximate the limited information posterior mean, E(θ|Zn), where θ is the parameter vector of a simulable model, and Zn is a vector of statistics. Because the model is simulable, training and testing samples may be generated at sizes large enough to train well a net that is itself large enough, in terms of the number of hidden layers and neurons, to learn E(θ|Zn) with good accuracy. The output of the net can be used as an estimator of the parameter, or, following Jiang et al. (2015), as an input to subsequent classical or Bayesian indirect inference estimation. Targeting E(θ|Zn) using neural nets is simpler, faster, and more successful than targeting the full information posterior mean, E(θ|Yn), where Yn is the sample data. Code to replicate the examples and to use the methods for other models is available at https://github.com/mcreel/NeuralNetsForIndirectInference.jl. This code uses the Mocha.jl package for the Julia language, which provides easy access to GPU computing; GPU use greatly accelerates training of the net.
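To make the idea concrete, the following is a minimal, self-contained Julia sketch of the approach, not the paper's Mocha.jl implementation. It assumes an illustrative AR(1) model, a hypothetical two-element statistic vector Zn (sample variance and first-order autocorrelation), and a small one-hidden-layer net trained by plain batch gradient descent; all names, sizes, and tuning values are illustrative assumptions.

```julia
# Minimal sketch (not the paper's Mocha.jl code): approximate E(θ|Zn) for an
# illustrative AR(1) model with a one-hidden-layer net trained by gradient
# descent on squared error. Sizes and tuning values are assumptions.
using Random, Statistics, LinearAlgebra

# Simulable model: y_t = θ y_{t-1} + ε_t, ε_t ~ N(0,1), prior θ ~ U(0, 0.9)
function simulate_ar1(θ, n)
    y = zeros(n)
    for t in 2:n
        y[t] = θ * y[t-1] + randn()
    end
    return y
end

# The vector of statistics Zn: sample variance and first-order autocorrelation
statistics(y) = [var(y), cor(y[1:end-1], y[2:end])]

# Generate S training pairs (Zn, θ) by drawing θ from the prior and simulating
function make_training_set(S, n)
    Z = zeros(2, S)
    θs = zeros(1, S)
    for s in 1:S
        θ = 0.9 * rand()
        θs[1, s] = θ
        Z[:, s] = statistics(simulate_ar1(θ, n))
    end
    return Z, θs
end

# Train a net with one tanh hidden layer to map Zn to θ by minimizing squared
# error over the simulated pairs; the fitted net approximates E(θ|Zn)
function train_net(Z, θs; hidden=20, epochs=5_000, lr=0.1)
    μ, σ = vec(mean(Z, dims=2)), vec(std(Z, dims=2))  # standardize inputs
    Zs = (Z .- μ) ./ σ
    k, S = size(Zs)
    W1 = 0.1 * randn(hidden, k); b1 = zeros(hidden)
    W2 = 0.1 * randn(1, hidden); b2 = zeros(1)
    for _ in 1:epochs
        H = tanh.(W1 * Zs .+ b1)           # hidden activations, hidden × S
        E = (W2 * H .+ b2) .- θs           # residuals, 1 × S
        D = (W2' * E) .* (1 .- H .^ 2)     # backpropagation through tanh
        W2 -= lr * (E * H') / S;  b2 -= lr * vec(sum(E, dims=2)) / S
        W1 -= lr * (D * Zs') / S; b1 -= lr * vec(sum(D, dims=2)) / S
    end
    return z -> W2 * tanh.(W1 * ((z .- μ) ./ σ) .+ b1) .+ b2
end

# Usage: train on simulated pairs, then apply the net to "observed" statistics
Random.seed!(1)
net = train_net(make_training_set(10_000, 200)...)
y = simulate_ar1(0.5, 200)   # pretend-observed data, true θ = 0.5
θ̂ = net(statistics(y))      # net output approximates E(θ|Zn)
println("estimate: ", θ̂)
```

The sketch illustrates the key point of the abstract: because the model is simulable, arbitrarily many (Zn, θ) training pairs can be generated, so net size and training-set size can both grow until E(θ|Zn) is learned with good accuracy. The net output θ̂ can then be used directly as an estimator, or as an input to subsequent indirect inference estimation.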