Nonconvex Gate Delay Modeling and Delay Optimization
2008
DOI: 10.1109/tcad.2008.927758

Cited by 16 publications (7 citation statements); references 17 publications.
“…We assume 20% process variations for all gate sizes around their nominal values. A convex optimization solver [12] was used to solve the final GP problem generalized in (25). The optimization objective is to minimize the total area ∑_i α_i x_i, where α_i denotes the number of transistors in gate i, and the gate size x_i is the ratio of the area of gate i to that of a minimum-sized inverter.…”
Section: Results
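The area-minimization objective quoted above can be sketched in a few lines. The sketch below is illustrative only, not the paper's formulation: the delay model, coefficients (a, b), transistor counts alpha, and the crude grid search (standing in for a real GP solver) are all assumptions, but it shows the shape of the problem — minimize ∑_i α_i x_i subject to a posynomial path-delay bound.

```python
# Hypothetical two-gate sizing sketch. All names and constants (a, b, alpha,
# Tmax) are illustrative; a grid search stands in for a geometric-programming
# solver such as the one cited as [12].

def path_delay(x1, x2, a=1.0, b=0.5):
    # Elmore-style posynomial delay: drive resistance ~ 1/x, load ~ fanout size.
    d1 = a / x1 + b * x2 / x1   # gate 1 drives gate 2
    d2 = a / x2 + b * 1.0 / x2  # gate 2 drives a unit load
    return d1 + d2

def total_area(x1, x2, alpha=(4, 6)):
    # alpha_i = transistor count of gate i; x_i = size relative to a min inverter.
    return alpha[0] * x1 + alpha[1] * x2

def size_gates(Tmax, grid=None):
    # Minimize total area subject to path_delay <= Tmax over a coarse grid.
    grid = grid or [0.5 + 0.1 * k for k in range(60)]
    best = None
    for x1 in grid:
        for x2 in grid:
            if path_delay(x1, x2) <= Tmax:
                cand = (total_area(x1, x2), x1, x2)
                if best is None or cand < best:
                    best = cand
    return best  # (area, x1, x2), or None if infeasible
```

As expected of area/delay trade-offs, tightening the delay target can only increase the minimum area, since the feasible set shrinks.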
“…As pointed out in [25], signomial models can be more accurate for estimating gate delays. In contrast to posynomials, there is no restriction on the sign of the multiplicative coefficients in a signomial.…”
Section: Discussion
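The posynomial/signomial distinction quoted above is purely about coefficient signs, and a minimal sketch makes it concrete. The representation below (lists of coefficient/exponent pairs) and the example coefficients are illustrative assumptions, not taken from either paper:

```python
# Illustrative sketch: a posynomial sum(c_k * x**e_k) requires every c_k > 0,
# while a signomial allows negative c_k. The negative term lets a signomial
# delay fit bend in ways no posynomial can, at the cost of losing the
# convexifying log-log transform that makes GP tractable.

def evaluate(terms, x):
    # terms: list of (coefficient, exponent) pairs; value = sum c * x**e
    return sum(c * x ** e for c, e in terms)

def is_posynomial(terms):
    return all(c > 0 for c, _ in terms)

posynomial = [(1.0, -1.0), (0.5, 1.0)]               # all coefficients > 0
signomial = [(1.0, -1.0), (0.5, 1.0), (-0.3, 0.5)]   # negative term allowed
```

This is why the quoted statement treats signomial delay models as a trade-off: more fitting freedom, but the resulting optimization problem is no longer a convex GP.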
“…We claim that a local selection is fast and efficient, since a gate's delay has a major impact only on its first level of fanins and fanouts, and the circuit arrival times (which have a global dependency) were eliminated from the formulation through the KKT conditions. Our algorithm obtained better convergence by combining, for the first time in the discrete domain, the scheduling of a power weighting factor [Tennakoon and Sechen 2002] with the modified subgradient method [Tennakoon and Sechen 2008].…”
Section: Contributions of This Work
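The subgradient machinery referenced above can be sketched at its simplest. This is a plain projected-subgradient multiplier update under assumed names (lams, delays, Tmax, step), not the modified variant of Tennakoon and Sechen, which adds refinements not reproduced here:

```python
# Hedged sketch of a projected-subgradient update for Lagrange multipliers in
# Lagrangian-relaxation sizing: raise the multiplier (the "price") on timing
# arcs that violate the target (d > Tmax), lower it where there is slack, and
# clip at zero to stay dual-feasible. Step size and delay values are illustrative.

def update_multipliers(lams, delays, Tmax, step):
    return [max(0.0, lam + step * (d - Tmax)) for lam, d in zip(lams, delays)]

# One step: a violating arc (2.0 > 1.0), an arc with slack, and an arc whose
# multiplier is driven to zero.
lams = update_multipliers([1.0, 1.0, 0.1], [2.0, 0.5, 0.0], Tmax=1.0, step=0.5)
```

Iterating this update while resizing gates against the current multipliers is the standard outer loop of Lagrangian-relaxation sizing; convergence behavior is what the quoted work improves via its weighting-factor schedule.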
“…The main advantage of optimizing within the continuous space is that it allows the use of consolidated optimization methods that provide high-quality and efficient solutions [Boyd and Vandenberghe 2004]. Among the techniques assuming continuous parameters, the following deserve to be highlighted:
- sensitivity-based heuristics [Fishburn and Dunlop 1985; Srivastava et al. 2004];
- Lagrangian relaxation combined with dynamic programming [Rahman et al. 2011] or with greedy local sizing [Chen et al. 1999; Hsinwei et al. 2005; Tennakoon and Sechen 2008];
- linear programming applied to piecewise-linear delay models [Berkelaar and Jess 1990] or to slack allocation [Nguyen et al. 2003];
- convex optimization techniques [Kasamsetty et al. 2000; Roy et al. 2007].…”
Section: Related Work