We study the problem of detecting the presence of an underlying high-dimensional geometric structure in a random graph. Under the null hypothesis, the observed graph is a realization of an Erdős–Rényi random graph G(n, p). Under the alternative, the graph is generated from the G(n, p, d) model, where each vertex corresponds to a latent independent random vector uniformly distributed on the sphere S^{d−1}, and two vertices are connected if the corresponding latent vectors are close enough. In the dense regime (i.e., p is a constant), we propose a near-optimal and computationally efficient testing procedure based on a new quantity which we call signed triangles. The proof of the detection lower bound is based on a new bound on the total variation distance between a Wishart matrix and an appropriately normalized GOE matrix. In the sparse regime, we make a conjecture for the optimal detection boundary. We conclude the paper with some preliminary steps on the problem of estimating the dimension in G(n, p, d). © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 503–532, 2016
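The signed-triangle statistic mentioned above sums, over all unordered vertex triples {i, j, k}, the product (A_ij − p)(A_jk − p)(A_ki − p) of centered adjacency entries. A minimal sketch of this computation (the function name is ours, not the paper's): since the centered adjacency matrix B is symmetric with zero diagonal, each triple is counted 3! = 6 times in tr(B³).

```python
import numpy as np

def signed_triangles(A, p):
    """Signed-triangle statistic: sum over unordered triples {i, j, k} of
    (A_ij - p)(A_jk - p)(A_ki - p), via the trace of the cubed centered
    adjacency matrix."""
    B = A - p
    np.fill_diagonal(B, 0.0)          # center only the off-diagonal (edge) entries
    return np.trace(B @ B @ B) / 6.0  # each unordered triple appears 3! = 6 times

# Sanity check on a single triangle: with p = 0 the statistic is exactly 1,
# and with p = 1/2 each centered edge is 1/2, giving (1/2)^3 = 0.125.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
print(signed_triangles(A, 0.0))  # 1.0
print(signed_triangles(A, 0.5))  # 0.125
```

Under the null G(n, p), each centered factor has mean zero, so the statistic concentrates around zero; latent geometry inflates it, which is what the test exploits.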
We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n)√T-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves O(n^{9.5}√T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T))-time per step at the cost of an additional poly(n)T^{o(1)} factor in the regret. These results improve upon the O(n^{11}√T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)^{poly(n)}√T-regret and log(T)^{poly(n)}-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve O(n^{1.5}√T)-regret, and moreover that this regret is unimprovable (the current best lower bound being Ω(n√T), achieved with linear functions). For the simpler situation of zeroth-order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order n^3/ε^2.
We prove structure theorems for measures on the discrete cube and on Gaussian space, which provide sufficient conditions for mean-field behavior. These conditions rely on a new notion of complexity for such measures, namely the Gaussian-width of the gradient of the log-density. On the cube {−1, 1}^n, we show that a measure ν which exhibits low complexity can be written as a mixture of measures {ν_θ}_{θ∈I} such that: (i) for each θ, the measure ν_θ is a small perturbation of ν such that log(dν_θ/dν) is a linear function whose gradient is small, and (ii) ν_θ is close to some product measure, in Wasserstein distance, for most θ. Thus, our framework can be used to study the behavior of low-complexity measures beyond approximation of the partition function, showing that those measures are roughly mixtures of product measures whose entropy is close to that of the original measure. In particular, as a corollary of our theorems, we derive a bound for the naïve mean-field approximation of the log-partition function which improves the nonlinear large deviation framework of Chatterjee and Dembo [2016] in several ways: 1. It does not require any bounds on second derivatives. 2. The covering number is replaced by the weaker notion of Gaussian-width. 3. We obtain stronger asymptotics with respect to the dimension. Two other corollaries are decomposition theorems for exponential random graphs and large-degree Ising models. In the Gaussian case, we show that measures of low complexity exhibit an almost-tight reverse Log-Sobolev inequality.
We extend the Langevin Monte Carlo (LMC) algorithm to compactly supported measures via a projection step, akin to projected Stochastic Gradient Descent (SGD). We show that (projected) LMC allows one to sample in polynomial time from a log-concave distribution with smooth potential. This gives a new Markov chain to sample from a log-concave distribution. Our main result shows in particular that when the target distribution is uniform, LMC mixes in O(n^7) steps (where n is the dimension). We also provide preliminary experimental evidence that LMC performs at least as well as hit-and-run, for which a better mixing time of O(n^4) was proved.
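The projected LMC iteration takes the form x ← Proj_K(x − η∇f(x) + √(2η)ξ) with ξ standard Gaussian. A minimal sketch for the uniform-on-the-unit-ball case from the abstract, where the potential is constant so the drift term vanishes (function name and step parameters are ours, chosen for illustration):

```python
import numpy as np

def projected_lmc_uniform_ball(n, eta, steps, rng):
    """One chain of projected LMC targeting the uniform distribution on the
    Euclidean unit ball: the potential is constant, so each step adds
    Gaussian noise scaled by sqrt(2*eta) and projects back onto the ball."""
    x = np.zeros(n)
    for _ in range(steps):
        x = x + np.sqrt(2.0 * eta) * rng.standard_normal(n)
        norm = np.linalg.norm(x)
        if norm > 1.0:
            x = x / norm  # Euclidean projection onto the unit ball
    return x

rng = np.random.default_rng(0)
samples = np.array([projected_lmc_uniform_ball(3, 1e-2, 200, rng)
                    for _ in range(100)])
# Every iterate is projected, so all samples lie inside the ball.
print(np.max(np.linalg.norm(samples, axis=1)))
```

For a general log-concave target with smooth potential f, one would add the drift term −η∇f(x) before the projection; the step size η governs the bias–mixing trade-off behind the O(n^7) bound.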
We consider the isoperimetric inequality on the class of high-dimensional isotropic convex bodies. We establish quantitative connections between two well-known open problems related to this inequality, namely, the thin shell conjecture, and the conjecture by Kannan, Lovász, and Simonovits, showing that the corresponding optimal bounds are equivalent up to logarithmic factors. In particular we prove that, up to logarithmic factors, the minimal possible ratio between surface area and volume is attained on ellipsoids. We also show that a positive answer to the thin shell conjecture would imply an optimal dependence on the dimension in a certain formulation of the Brunn-Minkowski inequality. Our results rely on the construction of a stochastic localization scheme for log-concave measures.
The Gaussian noise-stability of a set A ⊂ R^n is defined by S_ρ(A) = P(X ∈ A, Y ∈ A), where X, Y are standard jointly Gaussian vectors satisfying E[X_i Y_j] = δ_{ij}ρ. Borell's inequality states that for all 0 < ρ < 1, among all sets A ⊂ R^n with a given Gaussian measure, the quantity S_ρ(A) is maximized when A is a half-space. We give a novel short proof of this fact, based on stochastic calculus. Moreover, we prove an almost tight, two-sided, dimension-free robustness estimate for this inequality: by introducing a new metric to measure the distance between the set A and its corresponding half-space H (namely the distance between the two centroids), we show that the deficit S_ρ(H) − S_ρ(A) can be controlled from both below and above by essentially the same function of the distance, up to logarithmic factors. As a consequence, we also establish the conjectured exponent in the robustness estimate proven by Mossel and Neeman, which uses the total-variation distance as a metric. In the limit ρ → 1, we obtain an improved dimension-free robustness bound for the Gaussian isoperimetric inequality. Our estimates are also valid for a generalized version of stability where more than two correlated vectors are considered.