The paper studies the convergence properties of continuous best-response dynamics from game theory. Despite their fundamental role, best-response dynamics are poorly understood in many games of interest because the best-response map is discontinuous and set-valued. The paper elucidates several important properties of best-response dynamics in potential games, a class of multi-agent games of fundamental importance in multi-agent systems and distributed control. It is shown that in almost every potential game and from almost every initial condition, the best-response dynamics (i) have a unique solution, (ii) converge to pure-strategy Nash equilibria, and (iii) converge at an exponential rate.
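To illustrate the setting, a minimal discrete-time sketch (the paper treats continuous-time dynamics): in an identical-interest game, a special case of a potential game, the shared payoff matrix is itself a potential function, so sequential best-response updates ascend it and terminate at a pure-strategy Nash equilibrium. The matrix P below is a hypothetical example, not from the paper.

```python
import numpy as np

# Hypothetical 2x2 identical-interest game: both players receive P[a0, a1],
# so P is a potential function and best-response updates ascend it.
P = np.array([[3.0, 0.0],
              [1.0, 2.0]])

def best_response(P, other_action, player):
    # Player 0 picks a row given the column; player 1 picks a column given the row.
    if player == 0:
        return int(np.argmax(P[:, other_action]))
    return int(np.argmax(P[other_action, :]))

def br_iteration(P, a=(0, 1), max_steps=50):
    a = list(a)
    for t in range(max_steps):
        player = t % 2                       # players update in alternation
        a[player] = best_response(P, a[1 - player], player)
        # stop once neither player can improve: a pure-strategy Nash equilibrium
        if all(best_response(P, a[1 - p], p) == a[p] for p in (0, 1)):
            return tuple(a)
    return tuple(a)

print(br_iteration(P))  # → (1, 1), a pure Nash equilibrium
```

Note that the game above has two pure equilibria, (0, 0) and (1, 1); which one the dynamics reach depends on the initial condition, consistent with the "almost every initial condition" qualifier in the result.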
The goal of this paper is to solve a long-standing open problem: the asymptotic development of order 2, by Γ-convergence, of the mass-constrained Cahn-Hilliard functional.
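For context, the objects in question can be sketched as follows, in one standard normalization that may differ from the paper's:

```latex
% Mass-constrained Cahn-Hilliard functional (one common normalization;
% W is a double-well potential, e.g. W(u) = (1 - u^2)^2):
E_\varepsilon(u) = \int_\Omega \left( \frac{1}{\varepsilon}\, W(u)
  + \varepsilon\, |\nabla u|^2 \right) dx,
\qquad \int_\Omega u \, dx = m.

% Asymptotic development of order 2 by Gamma-convergence: find
% functionals E^{(0)}, E^{(1)}, E^{(2)} such that
E_\varepsilon = E^{(0)} + \varepsilon\, E^{(1)}
  + \varepsilon^2 E^{(2)} + o(\varepsilon^2),

% meaning that E^{(0)} = \Gamma\text{-}\lim E_\varepsilon and, recursively,
E^{(k)} = \Gamma\text{-}\lim_{\varepsilon \to 0}
  \frac{E_\varepsilon - \sum_{j<k} \varepsilon^{j} \inf E^{(j)}}{\varepsilon^{k}}.
```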
The note considers normalized gradient descent (NGD), a natural modification of classical gradient descent (GD) in optimization problems. A serious shortcoming of GD in nonconvex problems is that it may take arbitrarily long to escape from the neighborhood of a saddle point. This issue can make the convergence of GD arbitrarily slow, particularly in high-dimensional nonconvex problems where the relative number of saddle points is often large. The paper focuses on continuous-time descent. It is shown that, contrary to standard GD, NGD escapes saddle points "quickly." In particular, it is shown that (i) NGD "almost never" converges to saddle points and (ii) the time required for NGD to escape from a ball of radius r about a saddle point x* is at most 5√κ r, where κ is the condition number of the Hessian of f at x*. As an application of this result, a global convergence-time bound is established for NGD under mild assumptions. *These authors contributed equally.
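The contrast between GD and NGD near a saddle can be illustrated with a small Euler-discretized sketch (the function f and all numerical parameters below are illustrative, not from the paper). Near the saddle of f(x) = (x₁² − x₂²)/2, GD's velocity vanishes with the gradient, so escape is exponentially slow in the distance to the unstable manifold, while NGD moves at unit speed.

```python
import numpy as np

# Hypothetical saddle: f(x) = 0.5 * (x[0]**2 - x[1]**2), saddle at the origin.
grad = lambda x: np.array([x[0], -x[1]])

def escape_time(x0, r=1.0, normalized=True, dt=1e-3, t_max=50.0):
    """Euler discretization of continuous-time descent; returns the approximate
    time to exit the ball of radius r around the saddle at the origin."""
    x, t = np.array(x0, float), 0.0
    while np.linalg.norm(x) < r and t < t_max:
        g = grad(x)
        if normalized:
            g = g / max(np.linalg.norm(g), 1e-12)  # NGD: unit-speed descent
        x, t = x - dt * g, t + dt
    return t

x0 = [0.0, 1e-6]                    # start very close to the saddle
t_ngd = escape_time(x0, normalized=True)   # ≈ 1.0 (linear in r)
t_gd = escape_time(x0, normalized=False)   # ≈ ln(1e6) ≈ 13.8 (exponentially slow)
print(t_ngd, t_gd)
```

Here the Hessian has κ = 1 and r = 1, so NGD's escape time (about 1) is well within the stated bound of 5√κ r = 5, while GD's grows without bound as the initial point approaches the saddle.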
In centralized settings, it is well known that stochastic gradient descent (SGD) avoids saddle points. However, similar guarantees are lacking for distributed first-order algorithms in nonconvex optimization. The paper studies distributed stochastic gradient descent (D-SGD), a simple network-based implementation of SGD, and the conditions under which it converges to local minima. In particular, it is shown that, for each fixed initialization, with probability 1: (i) D-SGD converges to critical points of the objective and (ii) D-SGD avoids nondegenerate saddle points. To prove these results, we use ODE-based stochastic approximation techniques: the algorithm is approximated by a continuous-time ODE that is easier to study than the discrete-time algorithm, results are first derived for the continuous-time process, and these are then extended to the discrete-time algorithm. Consequently, the paper studies continuous-time distributed gradient descent (DGD) alongside D-SGD. Because the continuous-time process is easier to study, this approach allows for simplified proof techniques and builds important intuition that is obscured when studying the discrete-time process alone.
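A minimal sketch of a consensus-based D-SGD update of the kind studied (the local objectives, mixing matrix, network size, and step-size schedule below are hypothetical): at each step, every agent averages its iterate with its neighbors' via a doubly stochastic mixing matrix, then takes a noisy local gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 agents, each holding a local quadratic
# f_i(x) = 0.5 * (x - c[i])**2; the global objective is their average,
# minimized at mean(c) = 3.0.
c = np.array([0.0, 3.0, 6.0])
W = np.array([[0.50, 0.25, 0.25],   # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],   # (agents weight themselves and neighbors)
              [0.25, 0.25, 0.50]])

x = rng.standard_normal(3)          # each agent's local iterate
for k in range(1, 5001):
    alpha = 1.0 / k                 # diminishing step size
    noisy_grad = (x - c) + 0.1 * rng.standard_normal(3)
    x = W @ x - alpha * noisy_grad  # mix with neighbors, then descend
print(x)  # all agents near the global minimizer 3.0, and near consensus
```

The minimizer mean(c) here is a local (in fact global) minimum of a convex objective, so this sketch only illustrates the update's structure; the paper's results concern nonconvex objectives, where the interesting question is whether the iterates can get stuck at saddle points.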