This paper introduces a novel optimization method for differentiable neural architecture search, based on the theory of prediction with expert advice. Its optimization criterion is well suited to architecture selection: it minimizes the regret incurred by a sub-optimal choice of operations. Unlike previous search relaxations, which require hard pruning of architectures, our method is designed to dynamically wipe out inferior architectures and enhance superior ones. It achieves an optimal worst-case regret bound and motivates the use of multiple learning rates, based on the amount of information carried by the backward gradients. Experiments show that our algorithm achieves strong performance on several image classification datasets. Specifically, it obtains an error rate of 1.6% on CIFAR-10 and 24% on ImageNet under mobile settings, and achieves state-of-the-art results on three additional datasets.
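The abstract does not spell out the update rule, but the prediction-with-expert-advice view of operation selection can be illustrated with a standard Hedge (exponentiated-gradient) update over candidate operations. This is a generic sketch of that classical scheme, not the paper's exact algorithm; the function name and the toy losses are illustrative assumptions.

```python
import numpy as np

def hedge_update(weights, losses, lr):
    """One Hedge (exponentiated-gradient) step: operations with larger
    losses are exponentially down-weighted, then weights are renormalized."""
    w = weights * np.exp(-lr * losses)
    return w / w.sum()

# Toy example: three candidate operations with fixed per-step losses.
weights = np.full(3, 1.0 / 3.0)
losses = np.array([0.9, 0.1, 0.5])
for _ in range(20):
    weights = hedge_update(weights, losses, lr=1.0)
# Mass concentrates on the lowest-loss operation (index 1), illustrating
# how inferior candidates are driven to zero weight without hard pruning.
```

Because the update is multiplicative rather than a hard cut, every operation keeps a (vanishing) weight, which is what allows the soft "wipe out" behavior described above.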
Bayesian optimization is a popular method for optimizing expensive black-box functions. The objective functions of hard real-world problems are often characterized by a fluctuating landscape with many local optima. Bayesian optimization risks over-exploiting such traps, leaving an insufficient query budget for exploring the global landscape. We introduce Coordinate Backoff Bayesian Optimization (CobBO) to alleviate these challenges. CobBO captures a smooth approximation of the global landscape by interpolating the values of queried points projected onto randomly selected promising coordinate subspaces. As a result, the Gaussian process regressions, applied over the lower-dimensional subspaces, require a smaller query budget. This approach can be viewed as a variant of coordinate ascent tailored for Bayesian optimization, with a stopping rule for backing off from a given subspace and switching to another coordinate subset. Additionally, adaptive trust regions are formed dynamically to expedite convergence, and stagnant local optima are escaped by switching trust regions. Further smoothness and acceleration are achieved by filtering out clustered queried points. Through comprehensive evaluations over a wide spectrum of benchmarks, CobBO consistently finds comparable or better solutions with reduced trial complexity relative to state-of-the-art methods in both low and high dimensions.
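CobBO's full pipeline (Gaussian process regression, interpolation, adaptive trust regions) is beyond an abstract-sized sketch, but the coordinate-ascent-with-backoff idea it builds on can be illustrated in isolation: optimize within a random coordinate subspace until progress stalls, then switch subspaces. Plain random perturbations stand in for the GP-based acquisition here; all names, parameters, and the toy objective are illustrative assumptions, not CobBO itself.

```python
import numpy as np

def coordinate_backoff_ascent(f, x0, subspace_size=2, patience=3,
                              iters=200, step=0.3, seed=0):
    """Maximize f by perturbing a random coordinate subspace; after
    `patience` consecutive failures, back off to a fresh subspace."""
    rng = np.random.default_rng(seed)
    dim = x0.size
    x, best = x0.copy(), f(x0)
    coords = rng.choice(dim, size=subspace_size, replace=False)
    stall = 0
    for _ in range(iters):
        cand = x.copy()
        cand[coords] += rng.normal(0.0, step, size=subspace_size)
        val = f(cand)
        if val > best:
            x, best, stall = cand, val, 0
        else:
            stall += 1
            if stall >= patience:  # stopping rule: switch coordinate subset
                coords = rng.choice(dim, size=subspace_size, replace=False)
                stall = 0
    return x, best

# Maximize a smooth toy objective (negative squared norm) in 5 dimensions.
x_opt, best_val = coordinate_backoff_ascent(lambda x: -np.sum(x**2),
                                            np.full(5, 2.0))
```

The backoff rule is what prevents the search from over-exploiting a single subspace, mirroring the exploration concern the abstract raises for full-dimensional Bayesian optimization.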
In this work, we deal with market frictions given by fixed transaction costs independent of the trade volume. The main question we study is the minimization of shortfall risk in the Black–Scholes (BS) model under constraints on the initial capital. This problem has no analytical solution, so numerical schemes come into the picture. The Cox–Ross–Rubinstein (CRR) binomial models are an efficient tool for approximating the BS model. In this paper, we study the CRR models with fixed transaction costs in detail. In particular, we construct an augmented state-action space forming a Markov decision process (MDP) and prove the existence of an optimal control/policy. We further propose a dynamic programming algorithm for calculating the optimal hedging strategy and its corresponding shortfall risk. In the absence of transaction costs, there is an analytical solution in both the CRR and BS models, which we use to test our algorithm and its convergence. Moreover, we point out various insights provided by our numerical results, for example regarding the change in the investor's activity in the presence of friction.
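The frictionless CRR baseline referred to above has a closed-form BS limit, and its backward-induction structure is the same dynamic-programming pattern the paper extends to the transaction-cost MDP. A minimal sketch of standard CRR European-call pricing (textbook material, not the paper's shortfall-risk algorithm) follows; all parameter values are illustrative.

```python
import math

def crr_price(S0, K, r, sigma, T, n):
    """European call price in the CRR binomial model via backward induction."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs over the n+1 end nodes of the tree.
    vals = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # Roll back: each node is the discounted risk-neutral expectation.
    for step in range(n, 0, -1):
        vals = [disc * (q * vals[j + 1] + (1 - q) * vals[j])
                for j in range(step)]
    return vals[0]

price = crr_price(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=500)
```

As n grows, this price converges to the Black–Scholes value (about 10.45 for these parameters), which is the kind of convergence check the abstract describes using in the zero-cost case.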