New battery technology will be crucial to the electrification of transportation and aviation [1, 2], but battery innovations can take years to deliver. For battery electrolytes, the many design variables present in selecting multiple solvents, salts, and their relative ratios [3-7] mean that optimization studies are slow and laborious, even those restricted to small search spaces. The key challenge is to lower the number and time-cost of experiments needed to formulate an electrolyte for a given objective.
Bayesian Optimisation (BO) refers to a suite of techniques for global optimisation of expensive black-box functions, which use introspective Bayesian models of the function to efficiently find the optimum. While BO has been applied successfully in many applications, modern optimisation tasks usher in new challenges where conventional methods fail spectacularly. In this work, we present Dragonfly, an open-source Python library for scalable and robust BO. Dragonfly incorporates multiple recently developed methods that allow BO to be applied in challenging real-world settings; these include better methods for handling higher-dimensional domains; methods for handling multi-fidelity evaluations when cheap approximations of an expensive function are available; methods for optimising over structured combinatorial spaces, such as the space of neural network architectures; and methods for handling parallel evaluations. Additionally, we develop new methodological improvements in BO for selecting the Bayesian model, selecting the acquisition function, and optimising over complex domains with different variable types and additional constraints. We compare Dragonfly to a suite of other packages and algorithms for global optimisation and demonstrate that, when the above methods are integrated, they enable significant improvements in the performance of BO. The Dragonfly library is available at dragonfly.github.io.
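To make the core idea concrete, the sketch below implements a minimal BO loop of the kind Dragonfly builds on: a Gaussian-process surrogate with a lower-confidence-bound acquisition, minimising a function over [0, 1]. This is a generic illustration, not Dragonfly's own API; all function names, the kernel length-scale, and the candidate-pool acquisition optimiser are our assumptions.

```python
import numpy as np

def rbf_kernel(A, B, ls=0.2):
    # Squared-exponential kernel between the row vectors of A and B.
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1) / ls ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # GP posterior mean and variance at query points Xq given data (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    alpha = np.linalg.solve(K, y)
    mu = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)            # K^{-1} Ks^T, shape (n, q)
    var = 1.0 - np.sum(Ks * v.T, axis=1)    # prior variance k(x, x) = 1
    return mu, np.maximum(var, 1e-12)

def bayes_opt(f, n_init=5, n_iter=25, seed=0):
    # Minimal BO loop on [0, 1]: fit the GP, minimise a lower confidence
    # bound over a random candidate pool, evaluate, and repeat.
    rng = np.random.default_rng(seed)
    X = rng.random((n_init, 1))
    y = np.array([f(x[0]) for x in X])
    for _ in range(n_iter):
        Xq = rng.random((256, 1))           # candidate pool
        mu, var = gp_posterior(X, y, Xq)
        lcb = mu - 2.0 * np.sqrt(var)       # lower confidence bound
        x_next = Xq[np.argmin(lcb)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    best = np.argmin(y)
    return X[best, 0], y[best]
```

The methods the abstract lists (multi-fidelity evaluations, combinatorial domains, parallel workers) each replace or extend one piece of this loop: the surrogate, the acquisition, or the evaluation step.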
Large pre-trained language models are well-established for their ability to generate text seemingly indistinguishable from that of humans. In this work, we study the problem of constrained sampling from such language models: that is, generating text that satisfies user-defined constraints. Typical decoding strategies, which generate samples left-to-right, are not always conducive to imposing such constraints globally. Instead, we propose MUCOLA, a sampling procedure that combines the log-likelihood of the language model with arbitrary differentiable constraints into a single energy function, and generates samples by initializing the entire output sequence with noise and following a Markov chain defined by Langevin Dynamics using the gradients of this energy. We evaluate our approach on text generation with soft and hard constraints, as well as their combinations, with competitive results for toxicity avoidance, sentiment control, and keyword-guided generation.
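The sampling procedure described above can be illustrated on a toy scalar problem. The sketch below runs unadjusted Langevin dynamics on an energy that sums a quadratic "model" term and a quadratic "constraint" term; the two terms, the step size, and the function names are our stand-ins for the paper's language-model log-likelihood and differentiable constraints.

```python
import numpy as np

def langevin_sample(grad_energy, x0, step=0.05, n_steps=5000, seed=0):
    # Unadjusted Langevin dynamics: descend the energy gradient while
    # injecting Gaussian noise; the chain's stationary distribution
    # is proportional to exp(-E(x)).
    rng = np.random.default_rng(seed)
    x = float(x0)
    samples = []
    for _ in range(n_steps):
        x = x - step * grad_energy(x) + np.sqrt(2 * step) * rng.standard_normal()
        samples.append(x)
    return np.array(samples)

# Toy energy: a "model" term pulling x toward 1 plus a "constraint"
# term pulling x toward 3; the combined energy is minimised at x = 2.
grad = lambda x: (x - 1.0) + (x - 3.0)
samples = langevin_sample(grad, x0=0.0)
```

In the paper's setting the state is the whole output sequence (in embedding space) rather than a scalar, but the update rule has the same shape: a gradient step on the combined energy plus noise.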
<div> <div> <div> <p>Innovations in batteries take years to formulate, requiring extensive experimentation during the design and optimization phases. We approach the design of a battery electrolyte as a black-box optimization problem. We report here the discovery of a novel battery electrolyte by a robotic electrolyte experiment guided by machine-learning software. Motivated by the recent trend toward super-concentrated aqueous electrolytes for high-performance batteries, we utilize Dragonfly, a Bayesian machine-learning software package, to search mixtures of commonly used lithium and sodium salts for super-concentrated aqueous electrolytes with wide electrochemical stability windows. Dragonfly autonomously managed the robotic test-stand, recommending electrolyte designs to test and receiving experimental feedback in real time. Within 40 hours of continuous experimentation, Dragonfly discovered a novel, high-performing aqueous sodium electrolyte that a human-guided design process may have missed. This result demonstrates the possibility of integrating robotics with machine learning to rapidly and autonomously discover novel battery materials.</p></div></div></div>
Many real-world applications can be framed as multi-objective optimization problems, where we wish to simultaneously optimize for multiple criteria. Bayesian optimization techniques for the multi-objective setting are pertinent when evaluations of the functions in question are expensive. Traditional methods for multi-objective optimization, both Bayesian and otherwise, aim to recover the Pareto front of these objectives. However, in certain cases a practitioner might desire to identify Pareto optimal points only in a particular region of the Pareto front due to external considerations. In this work, we propose a strategy based on random scalarizations of the objectives that addresses this problem. While being computationally similar to or cheaper than other approaches, our approach is flexible enough to sample from specified subsets of the Pareto front or the whole of it. We also introduce a novel notion of regret in the multi-objective setting and show that our strategy achieves sublinear regret. We experiment with both synthetic and real-life problems, and demonstrate superior performance of our proposed algorithm in terms of flexibility, scalability and regret.
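The random-scalarization idea can be sketched on a toy bi-objective problem: draw a random weight vector, collapse the objectives into one scalar function, and pick the candidate minimising it; repeating with fresh weights traces out different points on the Pareto front. The Chebyshev scalarization and Dirichlet weight draw below are common choices for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def random_scalarization_step(candidates, objectives, rng):
    # Draw a random weight vector on the simplex, form a Chebyshev
    # scalarization of the objectives, and return the best candidate.
    w = rng.dirichlet(np.ones(len(objectives)))
    F = np.stack([f(candidates) for f in objectives], axis=1)  # (n, k)
    cheb = np.max(w * F, axis=1)        # Chebyshev scalarization
    return candidates[np.argmin(cheb)]

# Toy bi-objective problem: f1 and f2 trade off on x in [0, 1], so
# every x in [0, 1] is Pareto optimal.
f1 = lambda x: x ** 2
f2 = lambda x: (x - 1.0) ** 2
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 201)
picks = np.array([random_scalarization_step(xs, [f1, f2], rng)
                  for _ in range(50)])
```

Restricting the distribution the weights are drawn from is what lets the method target a specified subset of the Pareto front rather than the whole of it.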
In this paper we extend the known results on analytic connectivity to non-uniform hypergraphs. We prove a modified Cheeger inequality and also give a bound on analytic connectivity in terms of the degree sequence and diameter of a hypergraph.
Abstract: Multiview-assisted learning has gained significant attention in recent years in supervised learning. The availability of high-performance computing devices enables learning algorithms to search simultaneously over multiple views or feature spaces to obtain optimum classification performance. This paper is a pioneering attempt at formulating a mathematical foundation for a multiview-aided collaborative boosting architecture for multiclass classification. Most present algorithms apply multiview learning heuristically, without exploring the fundamental mathematical changes imposed on traditional boosting; moreover, most are restricted to two-class or two-view settings. Our proposed mathematical framework enables collaborative boosting across any finite-dimensional view spaces for multiclass learning. The boosting framework is based on a forward stagewise additive model that minimizes a novel exponential loss function. We show that the exponential loss function essentially captures the difficulty of a training sample, in contrast to the traditional 0/1 loss. The new algorithm restricts a weak view from over-learning, thereby preventing overfitting. The model is inspired by our earlier attempt [1] at collaborative boosting, which was devoid of mathematical justification. The proposed algorithm is shown to converge much nearer to the global minimum in the exponential loss space and thus supersedes our previous algorithm. The paper also presents analytical and numerical analyses of convergence and margin bounds for multiview boosting algorithms, and we show that our proposed ensemble learning manifests a lower error bound and a higher margin than our previous model. The proposed model is also compared with traditional boosting and recent multiview boosting algorithms. In the majority of instances, the new algorithm manifests a faster rate of convergence of the training-set error while simultaneously offering better generalization performance.
Kappa-error diagram analysis reveals the robustness of the proposed boosting framework to labeling noise.
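For context, the forward stagewise additive model with exponential loss that the framework above generalizes can be sketched as standard single-view boosting with decision stumps. This is plain AdaBoost, not the paper's multiview variant; all names below are ours.

```python
import numpy as np

def stump_fit(X, y, w):
    # Find the best decision stump (feature, threshold, polarity)
    # under sample weights w; labels y are in {-1, +1}.
    best_err, best_stump = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] <= t, pol, -pol)
                err = np.sum(w * (pred != y))
                if err < best_err:
                    best_err, best_stump = err, (j, t, pol)
    return best_err, best_stump

def stump_predict(stump, X):
    j, t, pol = stump
    return np.where(X[:, j] <= t, pol, -pol)

def adaboost(X, y, n_rounds=10):
    # Forward stagewise additive modelling with exponential loss:
    # each round fits a weak learner to re-weighted data and adds it
    # to the ensemble with coefficient alpha.
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        err, stump = stump_fit(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(stump, X)
        w = w * np.exp(-alpha * y * pred)   # up-weight hard samples
        w = w / w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.sign(score)
```

The multiview framework in the abstract modifies the loss so that the re-weighting step reflects sample difficulty across views; the stagewise structure of the loop stays the same.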