A common challenge in computer experiments and related fields is to efficiently explore the input space using a small number of samples, i.e., the experimental design problem. Much of the recent focus in the computer experiment literature, where modeling is often via Gaussian process (GP) surrogates, has been on space-filling designs, via maximin distance, Latin hypercube, etc. However, it is easy to demonstrate empirically that such designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. This is true even when the performance metric is prediction-based, or when the target of interest is inherently or eventually sequential in nature, such as in blackbox (Bayesian) optimization. Here we expose such inefficiencies, showing that in many cases a purely random design is superior to higher-powered alternatives. We then propose a family of new schemes by reverse engineering the qualities of the random designs that give the best estimates of GP lengthscales. Specifically, we study the distribution of pairwise distances between design elements, and develop a numerical scheme to optimize those distances for a given sample size and dimension. We illustrate how our distance-based designs, and their hybrids with more conventional space-filling schemes, outperform in both static (one-shot design) and sequential settings.

GP surrogates have become a canonical choice for many meta-modeling purposes. They are fundamentally the same as kriging from the spatial statistics literature (Matheron, 1963), but are generally applied in higher-dimensional (i.e., > 2d) settings. They are preferred for their simple, partially analytic, nonparametric structure. GPs' out-of-sample predictive accuracy and coverage properties are integral to diverse applications such as Bayesian optimization (BO; Jones et al., 1998), calibration (Kennedy and O'Hagan, 2001; Higdon et al., 2004), and input sensitivity analysis (Saltelli et al., 2008).
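The abstract's central object of study, the distribution of pairwise distances between design elements, is easy to compute for any candidate design. The following sketch (an illustration only, not the numerical optimization scheme proposed in the paper) tabulates the pairwise Euclidean distances of a purely random design in the unit hypercube, the baseline that the paper reverse engineers:

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_distances(X):
    """Return the n*(n-1)/2 pairwise Euclidean distances of a design X (n x d)."""
    diff = X[:, None, :] - X[None, :, :]       # (n, n, d) coordinate differences
    D = np.sqrt((diff ** 2).sum(axis=-1))      # full (n, n) distance matrix
    iu = np.triu_indices(len(X), k=1)          # strict upper triangle: each pair once
    return D[iu]

# A purely random design of n points in [0,1]^d.
n, d = 20, 2
X_rand = rng.uniform(size=(n, d))
dists = pairwise_distances(X_rand)

# Summarize the distance distribution that distance-based design criteria target.
print(f"min={dists.min():.3f}  median={np.median(dists):.3f}  max={dists.max():.3f}")
```

Comparing such summaries across design families (random, Latin hypercube, maximin) is one way to see how differently they populate the space of pairwise distances, which in turn affects GP lengthscale estimation.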
Although there are many variations on GP specification, Chen et al. (2016) nicely summarize how such nuances often have little impact in practice. On the other hand, Chen et al. cite experimental design as playing an out-sized role. Despite GPs' elevation to "canonical" status as surrogates, there has not been quite the same degree of confluence in how to design a computer experiment for the purpose of such modeling. In part this is simply a consequence of different goals giving rise to different criteria for valuing, and thus selecting, inputs. An exception may be the general agreement that it is sensible, if possible, to proceed sequentially, either one point at a time or in batches. An underlying theme for static (all-at-once) design, or for seeding a sequential design, has been to seek space-fillingness, where the selected inputs are spread out across the study space. For a nice review, see Pronzato and Müller (2011). There are many ways in which a design might be considered space-filling. Maximin-distance and minimax-distance design (Johnson et al., 1990) are two common approaches based o...
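The maximin-distance idea mentioned above, choosing a design whose smallest pairwise distance is as large as possible, can be sketched with a crude random search. This is a toy illustration of the criterion only (real maximin designs use exchange algorithms or other dedicated optimizers), and the function names here are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def min_pairwise_dist(X):
    """Smallest pairwise Euclidean distance in a design X (n x d)."""
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))
    return D[np.triu_indices(len(X), k=1)].min()

def maximin_random_search(n, d, n_restarts=200):
    """Keep the random candidate design maximizing the minimum pairwise distance."""
    best, best_score = None, -np.inf
    for _ in range(n_restarts):
        X = rng.uniform(size=(n, d))
        score = min_pairwise_dist(X)
        if score > best_score:
            best, best_score = X, score
    return best, best_score

X_mm, score = maximin_random_search(10, 2)
```

Minimax-distance design flips the perspective: instead of maximizing the closest pair within the design, it minimizes the distance from the worst-covered point of the study region to its nearest design site.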