Consider a variant of the Mastermind game in which queries are ℓp distances rather than the usual Hamming distance. That is, a codemaker chooses a hidden vector y ∈ {−k, −k+1, …, k−1, k}^n and answers queries of the form ‖y − x‖_p, where x ∈ {−k, −k+1, …, k−1, k}^n. The goal is to minimize the number of queries needed to correctly guess y. Motivated by this question, in this work we develop a nonadaptive polynomial-time algorithm that works for a natural class of separable distance measures, i.e., coordinate-wise sums of functions of the absolute value. This class in particular includes distances such as the smooth max (LogSumExp) as well as many widely studied M-estimator losses, such as ℓp norms, the ℓ1-ℓ2 loss, the Huber loss, and the Fair estimator loss. Applying this result to ℓp queries, we obtain an upper bound of O(min{n, n log k / log n}) queries for any real 1 ≤ p < ∞. We also show matching lower bounds up to constant factors for the ℓp problem, even for adaptive algorithms and even for the approximation version of the problem, in which the goal is to output y′ such that ‖y′ − y‖_p ≤ R for any R ≤ k^{1−ε} n^{1/p} with constant ε > 0. Thus, essentially any approximation of this problem is as hard as finding the hidden vector exactly, up to constant factors. Finally, we show that for the noisy version of the problem, i.e., the setting in which the codemaker may answer a query with any q = (1 ± ε)‖y − x‖_p, there is no query-efficient algorithm.
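To make the query model concrete, the following is a minimal sketch (not the paper's algorithm) of the ℓ1 special case: with 2n + 1 nonadaptive queries one can always recover y exactly, illustrating the trivial O(n) side of the O(min{n, n log k / log n}) bound. The oracle `ell1` and the helper `recover` are hypothetical names introduced here for illustration.

```python
import random

def ell1(y, x):
    """Oracle: the codemaker answers the ℓ1 distance ‖y − x‖_1."""
    return sum(abs(a - b) for a, b in zip(y, x))

def recover(query, n, k):
    """Recover y ∈ {−k,…,k}^n using 2n + 1 nonadaptive ℓ1 queries.

    Comparing queries at x = ±k·e_i against x = 0 isolates coordinate i:
      ‖y − k·e_i‖_1 − ‖y‖_1 = k − 2·y_i  if y_i ≥ 0, and = k if y_i < 0,
      ‖y + k·e_i‖_1 − ‖y‖_1 = k + 2·y_i  if y_i ≤ 0, and = k if y_i > 0.
    """
    base = query([0] * n)                   # ‖y‖_1
    y = []
    for i in range(n):
        plus = [0] * n
        plus[i] = k                         # x = k·e_i
        minus = [0] * n
        minus[i] = -k                       # x = −k·e_i
        d_plus = query(plus) - base         # k − 2·y_i when y_i ≥ 0, else k
        d_minus = query(minus) - base       # k + 2·y_i when y_i ≤ 0, else k
        if d_plus < k:                      # strictly below k iff y_i > 0
            y.append((k - d_plus) // 2)
        else:                               # y_i ≤ 0
            y.append((d_minus - k) // 2)
    return y

# Usage: the codemaker fixes a hidden y; the guesser only sees oracle answers.
random.seed(0)
n, k = 8, 5
hidden = [random.randint(-k, k) for _ in range(n)]
print(recover(lambda x: ell1(hidden, x), n, k) == hidden)
```

All 2n + 1 query points are fixed in advance, so the scheme is nonadaptive; for general p and for the sublinear O(n log k / log n) regime the construction in the paper is more involved.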