Abstract: In this paper, we present a new, simple, accurate, and fast power estimation technique that can be used to explore the power consumption of digital system designs at an early design stage. We exploit machine learning techniques to aid designers in exploring the design space of possible architectural solutions and, more specifically, their dynamic power consumption, which is application-, technology-, frequency-, and data-stimuli-dependent. To model the power and the behavior of digital components, we adopt …
“…An influential example of Node.js's impact on performance is LinkedIn's transition of their mobile backend servers from Ruby on Rails to Node.js in 2012. This shift resulted in a significant improvement in server performance during testing [13].…”
In this work, we demonstrate the capability of a JavaScript-based web crawler to overcome anti-crawling measures such as CAPTCHAs and IP blocking. We also delve into the ethical and legal dimensions of web crawling and provide recommendations for future research in this domain. A web crawler, being an automated software program, can navigate through websites and extract information. While it serves purposes like website analysis and indexing, it can also be misused for extracting personal data, scraping content, and overloading servers. Website administrators often employ anti-crawling techniques, such as CAPTCHAs, robots.txt, and IP blocking, to thwart malicious web crawlers from accessing their content. These techniques aim to curtail the ability of a web crawler to scrape, access, or overload website resources without impeding legitimate users from accessing the content they need. The objective of this study is to demonstrate and enhance the resilience of legitimate web crawlers against anti-crawling techniques like CAPTCHAs and IP blocking, challenging the notion that these measures are universally effective against all types of web crawlers.
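As background for the robots.txt mechanism mentioned above, the following is a minimal sketch (Python standard library) of the check a polite crawler performs before fetching a page. The rules text and crawler name are invented examples, not from the study:

```python
from urllib import robotparser

# Hypothetical robots.txt content; a real crawler would download this
# from https://<host>/robots.txt before crawling the site.
RULES = """\
User-agent: *
Disallow: /private/
Crawl-delay: 5
"""

def is_allowed(rules_text: str, user_agent: str, url: str) -> bool:
    """Parse robots.txt rules and check whether `url` may be fetched."""
    parser = robotparser.RobotFileParser()
    parser.parse(rules_text.splitlines())
    return parser.can_fetch(user_agent, url)

print(is_allowed(RULES, "MyCrawler", "https://example.com/index.html"))  # True
print(is_allowed(RULES, "MyCrawler", "https://example.com/private/a"))   # False
```

Anti-crawling measures such as IP blocking typically trigger on crawlers that ignore exactly these rules (and the crawl delay), which is why the study distinguishes legitimate crawlers from malicious ones.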
“…On-board power measurement refers to measuring the power consumption of the device's motherboard. Power consumption has become a critical metric in various electronic devices for modern and critical applications [37]. It is possible to divide the power consumption into two main components:…”
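The two components conventionally meant here are dynamic (switching) power and static (leakage) power; the quote is truncated before naming them. A small sketch of this textbook decomposition, with purely illustrative values:

```python
# Total power = dynamic (switching) + static (leakage).
# Dynamic power follows the classic P_dyn = alpha * C * Vdd^2 * f model.

def dynamic_power(alpha: float, c_load: float, vdd: float, freq: float) -> float:
    """Dynamic switching power: activity factor * capacitance * Vdd^2 * frequency."""
    return alpha * c_load * vdd ** 2 * freq

def total_power(p_dyn: float, p_static: float) -> float:
    """Sum of the two main power components."""
    return p_dyn + p_static

# Example: 20% switching activity, 1 nF effective capacitance, 1.0 V, 100 MHz
p_dyn = dynamic_power(0.2, 1e-9, 1.0, 100e6)   # 0.02 W
print(total_power(p_dyn, 0.005))                # 0.025 W
```

On-board measurement captures the sum directly; model-based estimation techniques like those surveyed here try to predict the dynamic term, which depends on the data stimuli.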
Hardware has joined the battle against malware by introducing secure boot architectures, malware-aware processors, and trusted platform modules. Hardware performance counters, power profiles, and side-channel information can be leveraged at run time via machine learning for continuous monitoring and protection. The explainability of these machine learning algorithms may play a crucial role in interpreting their results and avoiding false positives. In this paper, we present a broad survey of the state of the art of these components: we examine secure architectures and malware-aware processors, such as those built on the RISC-V Instruction Set Architecture (ISA). We categorize hardware-assisted solutions augmented by machine learning for malware classification. We survey recently proposed software-assisted and hardware-assisted explainability algorithms in this context. In the discussion, we argue that (1) secure architectures that guarantee secure device boot are a must, (2) side-channel approaches are challenging to integrate into embedded systems yet show promise in terms of efficiency, (3) malware-aware processors provide valuable features for malware detection software, and (4) without explainability, malware detection software is error-prone and can easily be bypassed.
“…Instead of considering the switching activities at the circuit level, the authors in [20] propose NeuPow, a neural network-based approach that works at the component level. It uses Artificial Neural Networks (ANNs) to model the power and signal propagation behavior of the considered components.…”
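To make the NeuPow idea concrete, here is a heavily simplified sketch: a per-component model that takes input switching activity and returns both a power estimate and the propagated output activity, so component models can be chained along a datapath. The network weights and the two-stage pipeline are invented for illustration, not NeuPow's trained models:

```python
import math

def tiny_ann(x, w1, b1, w2, b2):
    """One-hidden-layer network with tanh activation, scalar input and output."""
    hidden = [math.tanh(w * x + b) for w, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

def component_model(activity_in):
    """Return (power_mW, activity_out) for a hypothetical component.

    In a NeuPow-style flow, each component has its own trained ANN;
    the weights below are illustrative placeholders.
    """
    power   = tiny_ann(activity_in, [1.2, -0.7], [0.1, 0.3], [0.8, 0.5], 0.6)
    act_out = tiny_ann(activity_in, [0.9, 0.4], [0.0, -0.2], [0.6, 0.7], 0.2)
    return power, act_out

# Propagate activity through two cascaded components, accumulating power,
# instead of simulating switching at the circuit level.
act = 0.5          # primary-input switching activity
total = 0.0
for _ in range(2):
    p, act = component_model(act)
    total += p
print(f"estimated power: {total:.3f} mW")
```

The key design point mirrored here is that activity propagation replaces circuit-level switching analysis, which is what makes the component-level approach fast.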
In the quest for precise power estimation during the early phases of design, the absence of a Standard Parasitic Exchange File (SPEF) with interconnect R/C values poses a significant hurdle. To address this challenge, we introduce a Machine Learning (ML) approach designed to predict net power metrics at the gate level without relying on SPEF. Net features are extracted from Electronic Design Automation (EDA) tools, facilitating the training of models for the prediction of net switching power. Notably, the Random Forest model emerges as the most effective, achieving high accuracy by reducing power error from around 20% to a mere 0.1%. Furthermore, our approach enhances efficiency by bypassing the traditional SPEF generation process. This yields a 5x reduction in runtime compared to the conventional flow, from 163.6 minutes to just 33.7 minutes, achieved by skipping the time-intensive synthesis and physical design steps required for SPEF. In summary, our ML-based methodology not only achieves swift and accurate power estimation in the early stages of design but also frees the process from SPEF dependency, marking a significant shift in power estimation methodology.
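The flow described above can be sketched as follows; the feature set (fanout, pin capacitance, switching activity) and the synthetic data generator are assumptions for illustration, not the paper's actual EDA-extracted dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-net features, as might be exported from EDA reports
# before SPEF (i.e., without interconnect R/C values).
fanout  = rng.integers(1, 20, n)        # number of driven pins
cap     = rng.uniform(0.5, 5.0, n)      # total pin capacitance (fF)
toggles = rng.uniform(0.0, 1.0, n)      # switching activity
vdd = 0.9

# Synthetic "ground truth" loosely shaped like P ~ activity * C_total * Vdd^2,
# plus noise; in the real flow this label comes from sign-off power analysis.
power = toggles * (cap * fanout) * vdd ** 2 + rng.normal(0, 0.1, n)

X = np.column_stack([fanout, cap, toggles])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, power)
print("R^2 on training data:", round(model.score(X, power), 3))
```

In practice the model would be validated on held-out nets and designs; the point of the sketch is only that net-level features available early can stand in for SPEF-derived parasitics.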