2018
DOI: 10.1109/tcsi.2017.2768826

An Efficient Bayesian Optimization Approach for Automated Optimization of Analog Circuits


Cited by 189 publications (99 citation statements)
References 31 publications
“…In this section, we demonstrate the efficiency of our proposed optimization approach with two real-world analog circuits: a power amplifier (§5.1) and a charge pump (§5.2). We quantitatively compare our approach with three state-of-the-art algorithms: WEIBO [17], GASPAD [16], and DE [15]. The WEIBO method is a traditional single-fidelity GP-based BO approach with the wEI function serving as the acquisition function.…”
Section: Results
confidence: 99%
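The wEI acquisition mentioned in this excerpt is a weighted variant of expected improvement. The sketch below (Python) implements the classical weighted EI for a minimization problem; the function name weighted_ei, the default weight w = 0.5, and the exact weighting scheme are illustrative assumptions and may differ from the wEI formulation actually used in [17].

    import numpy as np
    from scipy.stats import norm

    def weighted_ei(mu, sigma, f_best, w=0.5):
        # mu, sigma: GP posterior mean and standard deviation at the candidate points
        # f_best: best (lowest) objective value observed so far
        # w: weight trading off exploitation (first term) against exploration (second term)
        sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive deviation
        z = (f_best - mu) / sigma          # standardized improvement
        return w * (f_best - mu) * norm.cdf(z) + (1.0 - w) * sigma * norm.pdf(z)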
“…To chart a way forward out of this dilemma, the GP-based Bayesian optimization (BO) approach has recently been proposed to combine the model-based and the simulation-based approaches [17]. Generally, a Bayesian optimization approach consists of two key elements: the surrogate model and the acquisition function [26].…”
Section: Introduction
confidence: 99%
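The two elements named in this excerpt, the surrogate model and the acquisition function, slot into a simple optimization loop. The Python sketch below is only a generic illustration of that structure; the argument names fit_surrogate, acquisition, and evaluate are placeholders supplied by the caller, not interfaces defined in [17] or [26].

    import numpy as np

    def bayesian_optimize(evaluate, fit_surrogate, acquisition,
                          x_init, y_init, candidates, n_iter=20):
        # evaluate: the expensive objective, e.g. a circuit simulation
        # fit_surrogate: (X, y) -> model, e.g. a Gaussian process regressor
        # acquisition: (model, candidates) -> score per candidate (higher is better)
        X, y = list(x_init), list(y_init)
        for _ in range(n_iter):
            model = fit_surrogate(np.array(X), np.array(y))   # element 1: surrogate model
            scores = acquisition(model, candidates)           # element 2: acquisition function
            x_next = candidates[int(np.argmax(scores))]       # pick the most promising design
            X.append(x_next)
            y.append(evaluate(x_next))                        # run the expensive evaluation
        best = int(np.argmin(y))                              # assuming minimization
        return X[best], y[best]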
“…The mean function m(x) can be any function, while the kernel function k(xᵢ, xⱼ) has to ensure that the covariance matrix is a symmetric positive definite (SPD) matrix. In [2], a constant mean function m(x) = µ₀ and the Gaussian kernel function…”
Section: Gaussian Process Regression
confidence: 99%
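For concreteness, here is a minimal Python sketch of the Gaussian (squared-exponential) kernel referred to above; the lengthscale and variance parameters and their defaults are illustrative assumptions, not necessarily the parametrization used in [2].

    import numpy as np

    def gaussian_kernel(X1, X2, lengthscale=1.0, variance=1.0):
        # Pairwise Gaussian (squared-exponential) covariances k(x_i, x_j)
        X1, X2 = np.atleast_2d(X1), np.atleast_2d(X2)
        sq_dist = (np.sum(X1**2, axis=1)[:, None]
                   + np.sum(X2**2, axis=1)[None, :]
                   - 2.0 * X1 @ X2.T)            # squared Euclidean distances
        return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

Evaluating the kernel on the training set, e.g. gaussian_kernel(X, X) + 1e-8 * np.eye(len(X)) with a small diagonal jitter, yields the symmetric positive definite covariance matrix required above.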
“…Given a new input vector x, the GPR model predicts the distribution y ∼ N(µ(x), σ²(x)). The µ(x) and σ²(x) can be expressed as [2]

µ(x) = µ₀ + k(x)ᵀ K⁻¹ (y − µ₀1),
σ²(x) = k(x, x) − k(x)ᵀ K⁻¹ k(x),

where k(x) denotes the vector of covariances between x and the training inputs and K is the covariance matrix of the training inputs. Here µ(x) can be viewed as the prediction, and σ²(x) as the confidence of the prediction. Denote θ as the vector of hyperparameters, including the hyperparameters of the kernel function and the additive noise.…”
Section: Gaussian Process Regression
confidence: 99%
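A small Python sketch of the posterior prediction just described, written for the constant-mean GPR case; the helper name gp_predict, the jitter value, and the kernel callable (in the style of the gaussian_kernel sketch above) are assumptions made for illustration.

    import numpy as np

    def gp_predict(x_new, X_train, y_train, kernel, mu0=0.0, noise=1e-8):
        # Posterior mean mu(x) and variance sigma^2(x) at a single new input x_new
        K = kernel(X_train, X_train) + noise * np.eye(len(X_train))    # SPD covariance matrix
        k_star = kernel(X_train, np.atleast_2d(x_new)).ravel()         # covariances k(x, X)
        alpha = np.linalg.solve(K, y_train - mu0)                      # K^{-1} (y - mu0)
        mu = mu0 + k_star @ alpha                                      # predictive mean
        v = np.linalg.solve(K, k_star)
        var = kernel(np.atleast_2d(x_new), np.atleast_2d(x_new))[0, 0] - k_star @ v
        return mu, max(var, 0.0)                                       # predictive variance

In practice the hyperparameters θ (kernel parameters and the noise level) are chosen by maximizing the GP marginal likelihood before making predictions.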