approximation of (1) involves a kernel density estimation of the intractable likelihood based on ‖s − s_obs‖, the distance between simulated and observed summary statistics. As a result, the quality of the approximation decreases rapidly as the dimension q of the summary statistic increases, since the distance between s and s_obs necessarily grows with their dimension, even setting aside the approximations involved in the choice of an informative S(y).

Several authors (e.g. Blum 2010; Barber et al. 2015) have given results which illuminate the way that the dimension q of the summary statistic impacts the performance of standard ABC methods. For example, Blum (2010) obtains the result that the minimal mean squared error of certain kernel ABC density estimators is of order N^(−4/(q+5)), where N is the number of Monte Carlo samples in the kernel approximation. Barber et al. (2015) consider a simple rejection ABC algorithm in which the kernel K_h is uniform, and obtain a similar result concerning optimal estimation of posterior expectations. Biau et al. (2015) extend the analysis of Blum (2010) from a nearest-neighbour perspective, which accounts for the common ABC practice of choosing h adaptively based on a large pool of samples (e.g. Blum et al. 2013).

Regression adjustments (e.g. Beaumont et al. 2002; Blum 2010; Blum and François 2010; Blum et al. 2013) are extremely valuable in practice for extending the applicability of ABC approximations to higher dimensions, since the regression model has some ability to compensate for the mismatch between the simulated summary statistics s and the observed value s_obs. However, except when the true relationship between θ and s is known precisely (allowing for a perfect adjustment), these approaches may only extend ABC applicability to moderately higher dimensions. For example, Nott et al.
(2014) demonstrated a rough doubling of the number of acceptably estimated parameters for a fixed computational cost when regression adjustment is used in place of rejection sampling alone, for a simple toy model. Nonparametric regression approaches are also subject to the curse of dimensionality, and the results of Blum (2010) apply as well to certain density estimators that incorporate nonparametric regression adjustments. Nevertheless, it has been observed that these theoretical results may
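As a concrete illustration of the rejection-plus-adjustment scheme discussed above, the following is a minimal sketch of rejection ABC followed by the linear regression adjustment of Beaumont et al. (2002). The toy model, prior, acceptance fraction, and all variable names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an assumption for illustration): theta ~ N(0, 1) prior,
# y | theta is 20 i.i.d. draws from N(theta, 1); summary s is the sample mean.
def simulate_summary(theta, n_obs=20):
    return rng.normal(theta, 1.0, size=n_obs).mean()

theta_true = 1.5
s_obs = simulate_summary(theta_true)

# Rejection ABC: draw from the prior, simulate summaries, and keep the
# draws whose summaries lie closest to s_obs (adaptive tolerance chosen
# as a nearest-neighbour fraction of a large pool of samples).
N = 20000
theta = rng.normal(0.0, 1.0, size=N)
s = np.array([simulate_summary(t) for t in theta])
dist = np.abs(s - s_obs)
accept = np.argsort(dist)[: N // 100]  # keep the closest 1%
theta_acc, s_acc = theta[accept], s[accept]

# Linear regression adjustment (Beaumont et al. 2002): regress theta on s
# over the accepted draws, then shift each draw to the observed summary,
# compensating for the mismatch between s_acc and s_obs.
X = np.column_stack([np.ones_like(s_acc), s_acc])
beta = np.linalg.lstsq(X, theta_acc, rcond=None)[0]
theta_adj = theta_acc - beta[1] * (s_acc - s_obs)
```

For this conjugate toy model the exact posterior mean is available in closed form (n·s_obs/(n+1) with n = 20), so the adjusted sample can be checked against it; the adjustment typically also tightens the accepted sample around the posterior.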