Accurate modeling of the delay, power, and area of interconnects early in the design phase is crucial for effective system-level optimization. Models presently used in system-level optimizations, such as network-on-chip (NoC) synthesis, are inaccurate in the presence of deep-submicron effects. In this paper, we propose new, highly accurate models for delay and power in buffered interconnects; these models are usable by system-level designers for existing and future technologies. We present a general and transferable methodology to construct our models from a wide variety of reliable sources (Liberty, LEF/ITF, ITRS, PTM, etc.). The modeling infrastructure, and a number of characterized technologies, are available as open source. Our models comprehend key interconnect circuit and layout design styles, as well as a power-efficient buffering technique that overcomes unrealistic assumptions of previous delay-driven buffering techniques. We show that our models are significantly more accurate than previous models for global and intermediate buffered interconnects in 90nm and 65nm foundry processes, essentially matching signoff analyses. We also integrate our models in the COSI-OCC synthesis tool and show that the more accurate modeling significantly affects the optimal/achievable architectures synthesized by the tool. The increased accuracy provided by our models enables system-level designers to obtain better assessments of the achievable performance/power/area tradeoffs for (communication-centric aspects of) system design, with negligible setup and overhead burdens.

Introduction

Due to the increasing complexity of Systems-on-Chip (SoCs) and the poor scaling of interconnects with technology, on-chip communication is becoming a performance bottleneck and a significant consumer of power and area budgets [11,28]. Decisions made in the early stages of the design process have the maximum potential to optimize the system for objectives such as power [21].
Therefore, in order to drive meaningful optimizations and to reduce guardbanding, it is crucial to account for interconnects during system-level design by modeling their performance, power, and area. During system design, organizational and technological choices are made. At this stage, we are concerned with implementing the hardware architecture sketched in the conceptualization and modeling steps. Design is supported by hardware synthesis tools and software compilers. Energy efficiency can be obtained by leveraging the degrees of freedom of the underlying hardware technology. Even within the framework of a standard implementation technology, there are ample possibilities for reducing power consumption. System-level decisions primarily affect the global interconnects by setting their lengths, bit widths, and speed requirements. Local interconnects are typically less affected, as they are either already routed within IP blocks or are routed by automatic back-end routing tools. This paper focuses on interconnect delay, power, and area models that are usable by the system-level designer at an early p...
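The delay-driven buffering that the abstract contrasts against is classically captured by closed-form Elmore-based sizing in the style of Bakoglu. The following sketch is illustrative only, not the paper's model: the 0.7/0.4 step-response factors and all technology parameter values are assumptions. For a line of length L split into k equal segments, each driven by a buffer of size s, the Elmore delay is separable in k and s, which yields closed-form delay-optimal values:

```python
from math import sqrt

# Assumed technology parameters (illustrative only, not from the paper):
R0 = 5000.0   # unit-buffer output resistance (ohm)
C0 = 2e-15    # unit-buffer input capacitance (F)
r  = 1e5      # wire resistance per unit length (ohm/m)
c  = 2e-10    # wire capacitance per unit length (F/m)
L  = 2e-3     # line length (m)

def buffered_delay(k, s):
    """Elmore delay of a length-L line split into k equal segments,
    each driven by a size-s buffer (0.7/0.4 step-response factors)."""
    seg = L / k
    return k * (0.7 * (R0 / s) * (s * C0 + c * seg)
                + r * seg * (0.4 * c * seg + 0.7 * s * C0))

# Closed-form delay-optimal buffer count and size (continuous relaxation):
# T(k, s) is convex and separable, so each optimum follows from dT/dk = 0
# and dT/ds = 0 independently.
k_opt = L * sqrt(0.4 * r * c / (0.7 * R0 * C0))
s_opt = sqrt(R0 * c / (r * C0))
d_opt = buffered_delay(k_opt, s_opt)
```

Sizing purely for delay in this way tends to over-buffer, spending power for marginal delay gains; this is the kind of unrealistic behavior that the paper's power-efficient buffering technique is designed to avoid.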
Vectorless power grid verification algorithms, by solving linear programming (LP) problems under current constraints, enable worst-case voltage drop predictions at an early design stage. However, the worst-case current patterns obtained by many existing vectorless algorithms are time-invariant (i.e., constant throughout the simulation time), which may result in an overly pessimistic voltage drop prediction. In this paper, a more realistic power grid verification algorithm based on hierarchical current and power constraints is proposed. The proposed algorithm naturally handles general RCL power grid models. Currents at different time steps are treated as independent variables, and additional power constraints are introduced; this results in more realistic time-varying worst-case current patterns and less pessimistic worst-case voltage drop predictions. Moreover, a sorting-deletion algorithm is proposed to speed up solving the LP problems by exploiting the hierarchical constraint structure. Experimental results confirm that the worst-case current patterns and voltage drops obtained by the proposed algorithm are more realistic, and that the sorting-deletion algorithm reduces the runtime needed to solve the LP problems by more than 85%.
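In the general formulation above, worst-case drop prediction requires an LP solver over currents at all time steps. As a toy illustration of why adding a power (budget) constraint curbs pessimism (this is a hedged sketch, not the paper's algorithm; the sensitivity weights and bounds are invented for the example): with per-source current bounds plus a single total-current budget, maximizing the voltage drop, a weighted sum of currents, reduces to a fractional knapsack that a greedy pass solves exactly:

```python
def worst_case_drop(sens, i_max, budget):
    """Maximize sum(sens[k] * i[k]) subject to 0 <= i[k] <= i_max[k]
    and sum(i[k]) <= budget. Greedy on sensitivity is exact here
    (fractional knapsack); the paper's general hierarchical-constraint
    case needs a full LP solver instead."""
    pattern = [0.0] * len(sens)
    remaining = budget
    drop = 0.0
    # Fill the most sensitive current sources first.
    for k in sorted(range(len(sens)), key=lambda k: -sens[k]):
        take = min(i_max[k], remaining)
        pattern[k] = take
        drop += sens[k] * take
        remaining -= take
        if remaining <= 0.0:
            break
    return drop, pattern

# Illustrative data: three current sources, drop sensitivities 0.5/0.3/0.2,
# each bounded by 1.0, total budget 2.0.
drop, pattern = worst_case_drop([0.5, 0.3, 0.2], [1.0, 1.0, 1.0], 2.0)
# drop ≈ 0.8, pattern == [1.0, 1.0, 0.0]
```

Without the budget constraint, all three sources would sit at their maxima simultaneously (a time-invariant, pessimistic pattern); the budget forces a trade-off, which is the intuition behind the less pessimistic time-varying patterns reported in the paper.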