Improving the efficiency of algorithms for fundamental computations can have a widespread impact, as it can affect the overall speed of a large body of computations. Matrix multiplication is one such primitive task, occurring in many systems—from neural networks to scientific computing routines. The automatic discovery of algorithms using machine learning offers the prospect of reaching beyond human intuition and outperforming the current best human-designed algorithms. However, automating the algorithm discovery procedure is intricate, as the space of possible algorithms is enormous. Here we report a deep reinforcement learning approach based on AlphaZero [1] for discovering efficient and provably correct algorithms for the multiplication of arbitrary matrices. Our agent, AlphaTensor, is trained to play a single-player game in which the objective is to find tensor decompositions within a finite factor space. AlphaTensor discovered algorithms that outperform the state-of-the-art complexity for many matrix sizes. Particularly relevant is the case of 4 × 4 matrices in a finite field, where AlphaTensor’s algorithm improves on Strassen’s two-level algorithm for the first time, to our knowledge, since its discovery 50 years ago [2]. We further showcase the flexibility of AlphaTensor through different use cases: algorithms with state-of-the-art complexity for structured matrix multiplication and improved practical efficiency by optimizing matrix multiplication for runtime on specific hardware. Our results highlight AlphaTensor’s ability to accelerate the process of algorithmic discovery on a range of problems, and to optimize for different criteria.
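For orientation, the tensor-decomposition view can be made concrete with Strassen's classical 2 × 2 scheme, which multiplies two 2 × 2 matrices with seven scalar products instead of eight; each product is one rank-1 term of a rank-7 decomposition of the matrix-multiplication tensor. The sketch below illustrates only that idea (the function name strassen_2x2 is illustrative; this is not AlphaTensor's discovered 4 × 4 algorithm):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications (Strassen).

    Each product m1..m7 is one rank-1 term in a rank-7 decomposition of the
    2x2 matrix-multiplication tensor; AlphaTensor searches for decompositions
    of this kind at larger matrix sizes.
    """
    a11, a12, a21, a22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    b11, b12, b21, b22 = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

# Sanity check against the naive product on a random instance.
A = np.random.randn(2, 2)
B = np.random.randn(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)
```

Applied recursively to 2 × 2 blocks, the seven-product scheme is what yields sub-cubic asymptotic complexity; a lower-rank decomposition at a given size directly translates into fewer multiplications at every level of the recursion.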
In this paper we present, for the first time, examples of algebraic limit cycles and saddle loops of degree greater than 4 for planar quadratic systems. In particular, we give examples of algebraic limit cycles of degree 5 and 6, and of algebraic saddle loops of degree 3 and 5 surrounding a strong focus. We also give an example of an invariant algebraic curve of degree 12 for which the quadratic system has no Darboux integrating factor or first integral.
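For context, the standard Darboux-theory notions referred to above can be sketched as follows (generic definitions only; the degree-5, 6 and 12 curves are the paper's own constructions and are not reproduced here):

```latex
% Planar quadratic system:
\[
\dot{x} = P(x,y), \qquad \dot{y} = Q(x,y), \qquad \deg P,\ \deg Q \le 2 .
\]
% An algebraic curve f(x,y) = 0, with f a polynomial, is invariant for the
% system if there is a polynomial cofactor K of degree at most 1 such that
\[
P\,\frac{\partial f}{\partial x} + Q\,\frac{\partial f}{\partial y} = K f .
\]
% An algebraic limit cycle (or saddle loop) is one contained in such an
% invariant curve, and a Darboux integrating factor or first integral is
% built from invariant curves (possibly with exponential factors), typically
\[
R(x,y) = \prod_i f_i(x,y)^{\lambda_i}, \qquad \lambda_i \in \mathbb{C}.
\]
```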
Training a neural network is often formulated as the task of finding a "good" minimum of an error surface, that is, the graph of the loss expressed as a function of the network's weights. With the growing popularity of deep learning, the classical problem of studying the error surfaces of neural networks is again the focus of many researchers. This interest stems from a long-standing question: given that deep networks are highly nonlinear systems optimized by local gradient methods, why do they not seem to be affected by bad local minima? Although it is routinely observed in practice that training deep models with gradient methods works well, little is understood about why this is the case. A great deal of recent research effort has been devoted to proving that training neural networks behaves well. In this paper we adopt the complementary approach of studying the possible obstacles. We present several concrete examples of datasets that cause the error surface to have a strongly suboptimal local minimum.
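To make the setting concrete, here is a minimal sketch of the kind of experiment this approach implies: fix a tiny one-parameter model and a toy dataset (both hypothetical, not taken from the paper), sample the error surface over a grid of weights, and flag grid points that are local minima yet far from the global optimum.

```python
import numpy as np

# Hypothetical toy dataset (not one of the paper's constructions): two points
# with very different input scales and conflicting targets, fit by the
# one-parameter model y_hat = tanh(w * x) under squared error.
X = np.array([10.0, 0.1])
Y = np.array([0.9, -0.9])

def loss(w):
    """Error-surface value at weight w: mean squared error of tanh(w * x)."""
    return np.mean((np.tanh(w * X) - Y) ** 2)

# Sample the one-dimensional error surface on a dense grid of weights.
ws = np.linspace(-30.0, 5.0, 7001)
ls = np.array([loss(w) for w in ws])

# Interior grid points whose loss is below both neighbours are candidate
# local minima of the sampled surface. This setup is chosen so the scan
# should report two: the global minimum at a small positive w and a
# markedly worse local minimum near w = -14.7.
is_min = (ls[1:-1] < ls[:-2]) & (ls[1:-1] < ls[2:])
for w, l in zip(ws[1:-1][is_min], ls[1:-1][is_min]):
    print(f"candidate local minimum: w = {w:.3f}, loss = {l:.3f}")
```

The suboptimal basin arises because the fast-saturating data point pulls the weight toward a plateau from which the slow-varying point can still carve out a shallow minimum, a one-dimensional caricature of the higher-dimensional constructions the paper studies.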