“…Additionally, ∆t_Activity of reactive management is in the range of milliseconds (see the example in section Reactive Approach) for reasonable sizes of NoC tiles, so that the time saved per notification of activity (instead of waiting for a temperature change) may also be in the range of milliseconds (if ∆t_compute is kept accordingly short). As can be seen in (Wegner, Cornelius, Gag, Tockhorn, & Uhrmacher, 2010), this assumption is justified, since temperature modeling of a 2x2 NoC over 1 ms (using the same modeling accuracy as here) takes roughly 8.5 s. This corresponds to 8.5 ms of computation per µs of simulated time for four identical NoC tiles, and 2.125 ms for a single NoC tile (assuming linearity). For comparison, τ_th for this example amounts to 6.3 ms. Due to a reduced ∆t_Res and a lowered traffic load, proactive management additionally implies two possible advantages over reactive approaches, provided that temperature can be influenced positively.…”
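The overhead figures quoted in the excerpt follow from a simple unit conversion; a minimal sketch of the arithmetic (variable names are illustrative, values are those stated above):

```python
# Reported cost: modeling a 2x2 NoC (4 tiles) over 1 ms of simulated
# time takes roughly 8.5 s of wall-clock time.
sim_wallclock_s = 8.5
simulated_time_us = 1000.0   # 1 ms = 1000 µs
num_tiles = 4                # 2x2 NoC

# Wall-clock cost per µs of simulated time, all four tiles,
# expressed in milliseconds: 8.5 s / 1000 µs = 8.5 ms/µs.
cost_per_us_ms = sim_wallclock_s * 1000.0 / simulated_time_us

# Per single tile, assuming cost scales linearly with tile count.
cost_per_tile_ms = cost_per_us_ms / num_tiles

print(cost_per_us_ms)    # 8.5
print(cost_per_tile_ms)  # 2.125
```

This reproduces the 8.5 ms/µs (four tiles) and 2.125 ms/µs (single tile) figures cited in the excerpt.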
With the progress of deep-submicron technology, power consumption and temperature-related issues have become dominant factors in chip design. Very-large-scale integrated systems such as Systems-on-Chip (SoCs) are therefore exposed to increasing thermal stress. On the one hand, this necessitates effective mechanisms for thermal management; on the other hand, applying thermal management disturbs system integrity and degrades system performance. In this paper the authors propose to precompute and proactively manage the on-chip temperature of systems based on Networks-on-Chip (NoCs), replacing traditional reactive approaches that utilize the NoC infrastructure to perform thermal management. This results not only in shorter response times for applying management measures and a reduction of temperature and thermal imbalances, but also in less impairment of system integrity and performance. A systematic analysis of simulations conducted for NoC sizes ranging from 2x2 to 4x4 shows that, under certain conditions, the proactive approach is able to mitigate the negative impact of thermal management on system performance while still improving the on-chip temperature profile.
“…The authors in [6] propose heterogeneous modeling of synchronous reactive programs together with differential equations for modeling physical phenomena. VulcaNoCs [7] allows modeling the functional behavior of Networks-on-Chip with cycle-accurate SystemC-TLM, and relies on the Electrical Linear Network model of computation provided by SystemC-AMS [8] to implement the RC circuit modeling the thermal behavior. VulcaNoCs targets proactive thermal management.…”
Modern systems-on-chips need sophisticated power-management policies to control their power consumption and temperature. These power-management policies are usually implemented partly in software, with hardware support. They need to be validated early, so power- and temperature-aware simulation techniques at the system level need to be developed. Existing approaches for system-level power and thermal analysis usually either completely abstract the functionality (allowing only simple scenarios to be simulated) or run the functional simulation independently from the non-functional one. The approach presented in this paper allows a coupled simulation of a SystemC/TLM model, possibly including the actual embedded software, with a power and temperature solver such as ATMI or the commercial tool ACEplorer. Power and temperature analysis is done based on the stimuli sent by the SystemC/TLM platform, which in turn can take decisions based on the non-functional simulation.
“…The combination of the large computational power required for numerical simulators and the detailed knowledge of the hardware as well as the software makes this approach infeasible for design-space exploration. A SystemC-based thermal simulator has recently been reported, but it suffers from the same basic limitation as other simulators: the level of detailed information required to set up the model is not easily available, see [20].…”
Abstract: Temperature plays an increasingly important role in the overall performance of a computing system and in its reliability. The increased availability of multi- and many-core systems provides an opportunity to manage the overall temperature profile of the system by cleverly designing the application-to-core mapping and the associated scheduling policies. There are clear penalties associated with an uncontrolled temperature profile: a core reaching a critical temperature usually activates built-in shutdown or voltage and/or frequency scaling mechanisms to cool it down, thereby leading to unplanned performance loss. Similarly, frequent deep thermal cycles lead to severe deterioration of the overall reliability of the system. Design-space exploration tools are often used to optimize binding and scheduling choices based on a given set of constraints and objectives. These exploration tools rely on fast and accurate temperature estimation techniques. We argue that the currently available techniques are not an ideal fit for design-space exploration tools, and suggest a system-level technique based on application fingerprinting. It does not need any information about the processor floorplan, the physical and thermal structure, or the power consumption. Instead, its temperature estimation is based on a set of application-specific calibration runs and associated temperature measurements using available built-in sensors. Using extensive experimental studies, we show that our technique can estimate temperature on all cores of a system to within 5 °C, and is three orders of magnitude faster than state-of-the-art numerical simulators like HotSpot.