“…Hence, early detection of Trojans, particularly at the gate-level netlist, is of paramount importance. Machine learning algorithms can be implemented for efficient dynamic and static hardware Trojan detection [21]. Figure 8 shows the flow of learning at the gate-level netlist.…”
Section: Machine Learning In Gate-level Netlist
In the past few decades, machine learning, a subset of artificial intelligence (AI), has emerged as a disruptive technology that is now used extensively across various domains. Among its numerous applications, one of the most significant advances due to machine learning is in the field of Very Large Scale Integration (VLSI), and further growth and improvement in this field are anticipated in the near future. Fabricating thousands of transistors in VLSI is time-consuming and complex, which demanded automation of the design process; hence, computer-aided design (CAD) tools and technologies began to evolve. Incorporating machine learning into VLSI means applying machine learning algorithms at the different abstraction levels of VLSI CAD. In this paper, we summarize several machine learning algorithms that have been developed and are widely used. We also briefly discuss how machine learning methods have permeated the layers of the VLSI design process, from register transfer level (RTL) assertion generation to static timing analysis (STA), with smart and efficient models and methodologies that further enhance the quality of chip design through power, performance, and area improvements and through reductions in complexity and turnaround time.
“…It has been shown that machine learning predictions of circuit speedpaths [10] and signoff timing [11] are feasible. Recently, Ye et al. [12] developed a support vector machine-based regression method to predict circuit delay at runtime without PSN consideration.…”
Section: Existing Solution For PSN-aware Timing Analysis
“…Here, t_i is the ith data point (called the target). Gradient descent is a common method to minimise the error function [14]. The gradient descent method updates the parameters iteratively, as described in (11), where τ is known as the step of minimisation and η is the learning rate…”
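The update rule, equation (11), is not reproduced in the snippet above. For orientation only, the generic gradient-descent step has the standard textbook form (not necessarily identical to the cited equation):

    w^(τ+1) = w^(τ) − η ∇_w E(w^(τ)),   with e.g.  E(w) = ½ Σ_i ( y(x_i, w) − t_i )²,

where w are the model parameters, η is the learning rate, and t_i are the targets; the step index τ is incremented until the error E stops decreasing.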
Excessive power supply noise (PSN), such as IR drop, can cause yield loss when testing very-large-scale integration chips. However, simulating circuit timing with PSN is not an easy task. In this study, the authors predict circuit timing for all test patterns using three machine learning techniques: neural network (NN), support vector regression (SVR), and least-squares boosting (LSBoost). To reduce the huge dimension of the raw data, they propose four feature extractions: input/output transition (IOT), flip-flop transition in window (FFTW), switching activity in window (SAW), and terminal FF transition of long paths (PATH). SAW and FFTW are physical-aware features, while PATH is a timing-aware feature. Their experimental results on the leon3mp benchmark circuit (638 K gates, 2 K test patterns) show that, compared with the simple IOT method, SAW reduced the dimension by up to 472 times without a significant impact on prediction accuracy (correlation coefficient = 0.79). Their results show that NN has the best prediction accuracy and SVR has the least under-prediction, while LSBoost uses the least memory. The proposed method is more than six orders of magnitude faster than traditional circuit simulation tools.
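As an illustration of how such pattern-level delay prediction can be set up, here is a minimal sketch in Python using scikit-learn's SVR. Everything in it is a placeholder: the feature matrix stands in for reduced SAW/FFTW-style vectors (the actual extraction from switching-activity data is not shown), and the delays are synthetic, so this is an assumed workflow, not the authors' implementation.

    # Hypothetical sketch: predicting per-pattern circuit delay from extracted
    # switching-activity features with support vector regression (SVR).
    # X is assumed to already hold one reduced feature vector per test pattern.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_patterns, n_features = 2000, 128          # placeholder sizes, not leon3mp data
    X = rng.random((n_patterns, n_features))    # e.g. windowed switching-activity counts
    y = X @ rng.random(n_features) + 0.05 * rng.standard_normal(n_patterns)  # synthetic delays

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # Standardise the features, then fit an epsilon-SVR with an RBF kernel.
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    corr = np.corrcoef(pred, y_te)[0, 1]        # correlation coefficient, the metric quoted above
    print(f"correlation coefficient on held-out patterns: {corr:.2f}")

Swapping the SVR for sklearn.neural_network.MLPRegressor or a boosted-tree regressor would mirror the NN and LSBoost variants the paper compares.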
“…Recently, learning-based methods have been widely used in many fields [28][29][30], such as optics, image processing, and the Electronics Design Automation (EDA) field, especially timing analysis, where they have shown great potential [31][32][33]. Das et al. [31] build a model that still focuses on cell delay, using a learning-based method that comprehensively captures process, voltage, and temperature, along with input slew and output load, but it is not directly suited to path delay variation prediction.…”
Section: Introduction
“…Das et al. [31] build a model that still focuses on cell delay, using a learning-based method that comprehensively captures process, voltage, and temperature, along with input slew and output load, but it is not directly suited to path delay variation prediction. Kahng et al. [32] use a machine learning method to solve signal integrity (SI) timing problems, based on timing reports from the non-SI mode. It is robust across designs and signoff constraints.…”
Path delay variation becomes a serious concern in advanced technology nodes, especially under multi-corner conditions. Plenty of timing analysis methods have been proposed to address path delay variation, but they mainly focus on each single corner and rely on a characterized timing library, which neglects the correlation among multiple corners and results in a high characterization effort for all required corners. Here, a novel prediction framework for path delay variation is proposed, employing a learning-based method using back-propagation (BP) regression. It can be used to predict path delay variation at a single corner; moreover, under multi-corner conditions, the framework can be expanded to predict corners that are not included in the training set. Experimental results show that the proposed model outperforms the traditional Advanced On-Chip Variation (AOCV) method with a 1.4X improvement in the prediction of path delay variation at single corners. Additionally, when predicting new corners, the maximum error is 4.59%, which is lower than in current state-of-the-art works.
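A minimal sketch of the back-propagation regression idea, under stated assumptions: a small multilayer perceptron (trained by back propagation) maps per-path descriptors to a delay-variation value. The feature choice, data, and network size here are entirely made up for illustration and are not taken from the paper.

    # Hypothetical sketch: a small MLP ("BP regression") predicting path delay
    # variation from placeholder per-path/per-corner descriptors.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    # Placeholder descriptors, e.g. stage count, nominal path delay, voltage, temperature.
    X = rng.random((5000, 4))
    # Synthetic target standing in for path delay variation; not real silicon or library data.
    y = 0.02 + 0.1 * X[:, 0] * X[:, 2] + 0.01 * rng.standard_normal(5000)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 32), activation="relu",
                     solver="adam", max_iter=2000, random_state=1),
    )
    model.fit(X[:4000], y[:4000])

    err = np.abs(model.predict(X[4000:]) - y[4000:])
    print(f"max absolute error on held-out paths: {err.max():.4f}")

Extending such a model to unseen corners, as the paper proposes, would amount to including corner parameters (e.g. voltage and temperature) among the input features so the regressor can interpolate across corners.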