SystemC is a widely adopted language for developing SoC designs. Unfortunately, most SystemC simulators are based on a strictly sequential scheduler that heavily limits their performance, impacting verification schedules and time-to-market of new designs. Parallelizing SystemC simulation entails a complete re-design of the simulator kernel for the specific target parallel architectures. This paper proposes an automatic methodology to generate a parallel SystemC simulator kernel, exploiting the massive parallelism of GP-GPU architectures. Our solution leverages static scheduling to reduce synchronization overheads. The generated simulator code targets both the CUDA and OpenCL libraries, to boost scalability and provide support for multiple GP-GPU architectures. Finally, the paper compares the performance of our solution on CUDA vs. OpenCL platforms, with the goal of investigating the advantages and drawbacks that the two thread management libraries offer to concurrent SystemC simulation.
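To make the static-scheduling idea concrete, the following is a minimal C++ sketch (all names are illustrative, not the generated kernel's actual code): processes are pre-partitioned into dependency levels, so that within a level every process can be evaluated concurrently, each level maps to a single CUDA/OpenCL kernel launch with one thread per process, and synchronization is needed only between levels.

```cpp
#include <vector>
#include <functional>

// Hypothetical representation of a SystemC process as a plain callable.
struct Process {
    std::function<void()> eval;  // evaluation phase of one process
};

// levels[i] holds processes with no mutual data dependencies,
// computed once at elaboration time (the "static schedule").
using Schedule = std::vector<std::vector<Process>>;

void run_delta_cycle(const Schedule& levels) {
    for (const auto& level : levels) {
        // On the GPU target this loop body becomes one kernel launch
        // (one thread per process); the end of each iteration plays
        // the role of the inter-level barrier.
        for (const auto& p : level) {
            p.eval();
        }
    }
}
```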
SystemC has recently been extended with the Analogue and Mixed Signal (AMS) library, with the ultimate goal of providing simulation support for analogue electronics and continuous-time behaviours. SystemC-AMS allows modelling of systems that are either conservative and extremely low level or continuous time and behavioural, which is limited compared to other AMS HDLs. This work addresses the challenge by extending SystemC-AMS support to a new level of abstraction, called Analogue Behavioural Modelling (ABM), covering models that are both behavioural and conservative. This leads to a methodology that uses SystemC-AMS constructs in a novel way. Full automation of the methodology makes it possible to prove its effectiveness, both in terms of accuracy and simulation performance, and to apply the overall approach to a complex industrial Micro Electro-Mechanical System (MEMS) case study. The effectiveness of the proposed approach is further highlighted in the context of virtual platforms for smart systems, as adopting a C++-based language for MEMS simulation reduces simulation time by about 2x, thus enhancing the design and integration flow.
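For readers unfamiliar with the baseline, the sketch below shows a plain SystemC-AMS behavioural continuous-time model written as a Timed Data Flow (TDF) module; the ABM methodology builds on constructs like these to obtain models that are also conservative. Module name, cut-off frequency, and timestep are illustrative, not taken from the case study.

```cpp
#include <systemc-ams>
#include <cmath>

// A first-order low-pass behavioural model as a TDF module.
SCA_TDF_MODULE(lowpass_abm) {
    sca_tdf::sca_in<double>  in;
    sca_tdf::sca_out<double> out;

    sca_tdf::sca_ltf_nd ltf;                // Laplace transfer function solver
    sca_util::sca_vector<double> num, den;  // numerator/denominator coefficients

    SCA_CTOR(lowpass_abm) : in("in"), out("out") {
        num(0) = 1.0;                        // H(s) = 1 / (1 + s/(2*pi*fc))
        den(0) = 1.0;
        den(1) = 1.0 / (2.0 * M_PI * 1.0e3); // fc = 1 kHz (illustrative)
    }

    void set_attributes() { set_timestep(1.0, sc_core::SC_US); }

    void processing() { out.write(ltf(num, den, in.read())); }
};
```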
SystemC is the de-facto standard language for system-level modeling, architectural exploration, performance analysis, software development, and functional verification of embedded systems. Nevertheless, it has been shown that SystemC simulation performance typically lags behind that of commercial VHDL/Verilog simulators when used for register transfer level (RTL) simulation. This is mainly due to the "slow" implementation of bit-accurate data types provided by the standard library. Such a problem limits simulation performance even when SystemC designs are implemented at higher levels of abstraction (i.e., transaction-level modeling, TLM) and still make use of bit-accurate data types (e.g., for a more accurate verification, or in TLM descriptions automatically generated from RTL). This article presents HDTLib, a new bit-accurate data type library that increases the simulation speed up to 3.45× at RTL and up to 10× at TLM. In addition, when the level of abstraction rises from RTL and better simulation performance is required, accuracy of HW-dependent behaviors is no longer necessary. Thus, the article presents a type abstraction methodology to abstract away low-level behaviors, and shows how such a methodology can be combined with HDTLib to guarantee a sound tradeoff between accuracy and simulation speed. Finally, more recent works have proposed efficient and promising techniques to boost SystemC simulation through general purpose graphics processing unit (GP-GPU) architectures. In such parallel frameworks, the standard SystemC library for bit-accurate data types is an even more critical bottleneck, which makes an efficient library such as HDTLib all the more relevant.
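While HDTLib's internals are not reproduced here, the following C++ sketch illustrates one classic technique that fast bit-accurate libraries rely on (HDTLib's actual implementation may differ): packing a four-valued logic word into two machine-word masks, so that a single bitwise instruction processes 64 bits at once instead of one `sc_logic`-style byte per bit. All names are ours.

```cpp
#include <cstdint>

// Four-valued logic word ('0','1','X','Z') as two packed 64-bit masks.
struct logic64 {
    uint64_t val;  // the bit value ('1' vs '0') when the bit is defined
    uint64_t unk;  // set where the bit is 'X' or 'Z' (unknown)
};

// Four-valued AND over 64 bits at once:
// a known '0' on either side forces the result bit to '0';
// the result bit is unknown only if neither operand forces a '0'.
inline logic64 operator&(logic64 a, logic64 b) {
    logic64 r;
    r.val = a.val & b.val & ~(a.unk | b.unk);
    r.unk = (a.unk | b.unk) &   // some operand bit is unknown...
            (a.val | a.unk) &   // ...and a does not force a '0'
            (b.val | b.unk);    // ...and b does not force a '0'
    return r;
}
```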
The aging of rechargeable batteries, with its associated replacement costs, is one of the main issues limiting the diffusion of electric vehicles (EVs) as the future transportation infrastructure. An effective way to mitigate battery aging is to act on its charge cycles, more controllable than discharge ones, implementing so-called battery-aware charging protocols. Since one of the main factors affecting battery aging is its average state of charge (SOC), these protocols try to minimize the standby time, i.e., the time interval between the end of the actual charge and the moment when the EV is unplugged from the charging station. Doing so while still ensuring that the EV is fully charged when needed (in order to achieve a satisfying user experience) requires a “just-in-time” charging protocol, which completes exactly at the plug-out time. This type of protocol can only be achieved if an estimate of the expected plug-in duration is available. While many previous works have stressed the importance of having this estimate, they have either used straightforward forecasting methods, or assumed that the plug-in duration was directly indicated by the user, which could lead to sub-optimal results. In this paper, we evaluate the effectiveness of a more advanced forecasting approach based on machine learning (ML). With experiments on a public dataset containing data from domestic EV charge points, we show that a simple tree-based ML model, trained on each charge station based on its users’ behaviour, can reduce the forecasting error by up to 4× compared to the simple predictors used in previous works. This, in turn, leads to an improvement of up to 50% in a combined aging-quality of service metric.
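As a concrete example of why the forecast matters, here is a minimal C++ sketch of a just-in-time policy under a simplified linear charge model; all names, the constant-power strategy, and the linear model are our illustrative assumptions, not the paper's exact protocol.

```cpp
#include <algorithm>

// Pick a charging power so that the battery reaches soc_target right at
// the ML-predicted plug-out time, spreading the charge over the whole
// predicted plug-in duration to keep the average SOC (and standby time) low.
double jit_charge_power_kw(double soc_now,              // current SOC in [0,1]
                           double soc_target,           // desired SOC at plug-out
                           double capacity_kwh,         // battery capacity
                           double predicted_duration_h, // ML forecast
                           double max_power_kw) {       // charger limit
    // Energy still to be delivered (linear charge model).
    double energy_kwh = std::max(0.0, soc_target - soc_now) * capacity_kwh;
    if (predicted_duration_h <= 0.0) return max_power_kw;
    // An alternative just-in-time policy would delay the start and then
    // charge at max_power_kw; both hinge on the same duration forecast.
    return std::min(max_power_kw, energy_kwh / predicted_duration_h);
}
```

An under-estimated duration makes the policy fall back toward full-power charging (hurting aging but preserving quality of service), while an over-estimate leaves the battery short at plug-out; this asymmetry is why cutting the forecasting error translates into the combined aging-quality of service gains reported above.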