2020 57th ACM/IEEE Design Automation Conference (DAC)
DOI: 10.1109/dac18072.2020.9218592

An Efficient Asynchronous Batch Bayesian Optimization Approach for Analog Circuit Synthesis

Abstract: In this paper, we propose EasyBO, an Efficient ASYnchronous Batch Bayesian Optimization approach for analog circuit synthesis. In this proposed approach, instead of waiting for the slowest simulations in the batch to finish, we accelerate the optimization procedure by asynchronously issuing the next query points whenever there is an idle worker. We introduce a new acquisition function which can better explore the design space for asynchronous batch Bayesian optimization. A new strategy is proposed to better ba…
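The central mechanism, issuing a new query the moment any worker goes idle instead of synchronizing on the slowest simulation in the batch, lends itself to a short sketch. Below is a minimal illustration under stated assumptions: a scikit-learn GP surrogate, a standard expected-improvement acquisition with a crude penalty around still-pending queries, and a hypothetical expensive_simulation() stand-in for a SPICE run. The paper's actual acquisition function and exploration strategy differ; only the asynchronous issue-on-idle loop is what the abstract describes.

# Minimal sketch of asynchronous batch BO (issue-on-idle).
# The surrogate, acquisition, and objective are illustrative assumptions,
# not EasyBO's actual components.
import time
import random
import numpy as np
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
BOUNDS, N_WORKERS, BUDGET = (-2.0, 2.0), 4, 20

def expensive_simulation(x):
    # Stand-in for a slow SPICE run: random duration, noisy 1-D objective.
    time.sleep(random.uniform(0.1, 0.5))
    return float(np.sin(3.0 * x[0]) + random.uniform(-0.05, 0.05))

def expected_improvement(mu, sigma, f_best):
    # Standard EI for minimization.
    sigma = np.maximum(sigma, 1e-9)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def next_query(X, y, pending):
    # Fit the surrogate on finished evaluations and pick the EI maximizer,
    # zeroing EI near pending queries so concurrent workers spread out.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.asarray(X), np.asarray(y))
    cand = rng.uniform(BOUNDS[0], BOUNDS[1], size=(256, 1))
    mu, sigma = gp.predict(cand, return_std=True)
    ei = expected_improvement(mu, sigma, min(y))
    for p in pending:
        ei[np.linalg.norm(cand - p, axis=1) < 0.05] = 0.0
    return cand[int(np.argmax(ei))]

X_done, y_done = [], []
with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
    futures = {}
    for _ in range(N_WORKERS):                       # initial random batch
        x0 = rng.uniform(*BOUNDS, size=1)
        futures[pool.submit(expensive_simulation, x0)] = x0
    issued = N_WORKERS
    while futures:
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        for fut in done:                             # harvest finished workers
            X_done.append(futures.pop(fut))
            y_done.append(fut.result())
            if issued < BUDGET:                      # refill the idle worker at once
                x_new = next_query(X_done, y_done, list(futures.values()))
                futures[pool.submit(expensive_simulation, x_new)] = x_new
                issued += 1

print("best observed:", min(y_done))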

Cited by 29 publications (4 citation statements); references 34 publications (33 reference statements). Citation types: supporting 0, mentioning 4, contrasting 0. Citing statements published in 2021 (2) and 2024 (2).

Citation statements, ordered by relevance:
“…Modern EDA tools for HLS, logic synthesis, or physical design synthesis have many parameters,
GPR + WEI | Amp-1,2 (C-5, C-24), etc. | First few apply BO for analog sizing [73]
GPR + LCB | Amp-1,2 (C-5, C-24), etc. | NSGA-II solves multi-objective LCB [74]
GPR + Acq ensemble | Amp-1,2 (C-10, C-12) | Batch BO enabled by the ensemble [75]
GPR + WEI | Op-Amp (C-10), CP (C-36) | Neural network as GPR kernel [77]
BNN + LCB | CP (C-16), Amp (C-13) | BNN as surrogate model [76]
GPR + WEI | Amp (C-5), CP (C-36) | Multi-fidelity BO [78]
GPC + its Eq. (45) | Op-Amp (C-4), LNA (C-4) | Handles binary testing outputs [82]
GPR + WEI | CP (C-18, D-5), etc. | Modified kernel for D variables [79]
GPR + UCB | Amplifier-1,2 (C-10, C-12) | Asynchronous BO [80]
SP-GPR + WEI | Amp (C-24), VCO (C-20) | Sparse GPR model [81]
GPR + EI | Op-Amp (variable unknown) | Classical BO [88]
MT-GPR + WEI | Op-Amp (C-10), LNA (C-10) | MT-GPR learns multi-performances [83]
GPR + TS | Amp-1,2,3 (C-23, C-43, C-21)…”
Section: B. Other Problems and Discussion
mentioning, confidence: 99%
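For reference, the acquisition functions abbreviated in this table have standard closed forms under a GP posterior with mean \mu(x) and standard deviation \sigma(x); with incumbent best f^* and a minimization convention (the surveyed papers may use variants and different hyperparameters):

\mathrm{EI}(x) = \bigl(f^* - \mu(x)\bigr)\,\Phi(z) + \sigma(x)\,\phi(z), \qquad z = \frac{f^* - \mu(x)}{\sigma(x)},
\mathrm{WEI}(x) = w\,\bigl(f^* - \mu(x)\bigr)\,\Phi(z) + (1 - w)\,\sigma(x)\,\phi(z), \qquad w \in [0, 1],
\mathrm{LCB}(x) = \mu(x) - \kappa\,\sigma(x) \ \text{(minimized)}, \qquad \mathrm{UCB}(x) = \mu(x) + \kappa\,\sigma(x) \ \text{(maximized)},

where \Phi and \phi are the standard normal CDF and PDF. TS (Thompson sampling) instead draws a posterior sample \tilde{f} \sim \mathcal{GP} and queries its minimizer.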
“…Once a new meta-training instance and a corresponding label are available, the meta-training data is locked briefly to add the new instance. We found that the more common approach [58], predicting the label for a newly sampled instance with the current meta-model and adding both to the meta-training data, does not work well for our scenario. Our label is only predicted and is thus only an approximation of the ground truth.…”
Section: Parallelization and Optimizations
mentioning, confidence: 90%
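The two update policies contrasted in this statement are easy to sketch. Below is a minimal illustration with hypothetical names (meta_X, meta_y, meta_model, retrain): the quoted policy locks the shared meta-training set only briefly to append the ground-truth label once it arrives, while the more common alternative [58] appends a label predicted by the current meta-model.

import threading

meta_lock = threading.Lock()
meta_X, meta_y = [], []              # shared meta-training data (hypothetical names)

def add_with_true_label(x, y_true, retrain):
    # Quoted policy: wait for the ground-truth label, then hold the lock
    # only for the short critical section that appends the new instance.
    with meta_lock:
        meta_X.append(x)
        meta_y.append(y_true)
    retrain()

def add_with_predicted_label(x, meta_model, retrain):
    # Common alternative [58]: "fantasize" the label with the current
    # meta-model.  The quoted work found this inaccurate in its scenario,
    # since its labels are themselves only approximations of the ground truth.
    y_hat = meta_model.predict([x])[0]
    with meta_lock:
        meta_X.append(x)
        meta_y.append(y_hat)
    retrain()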
“…[2], dynamic timing analysis [3], and analog circuit synthesis [4], all of which require repeated executions of an expensive SPICE (simulation program with integrated circuit emphasis) simulation (due to the large scale of an IC design) [5].…”
mentioning, confidence: 99%
“…Owing to their rapid recent development, machine learning and other statistical learning methods have been used to address this challenge [7]. For instance, Bayesian optimization [4], multi-fidelity modelling [8], and computing budget allocation [9] have been proposed to accelerate repeated simulations. Despite being efficient, direct machine-learning implementations rely on a large amount of pre-computed data.…”
mentioning, confidence: 99%