The Internet has penetrated all aspects of human society and promoted social progress, but cyber-crimes in many forms are now commonplace and endanger society and national security, making cybersecurity a major concern for citizens and governments. Internet services and software applications play a vital role in cybersecurity research and practice, and most cyber-attacks exploit vulnerabilities in system or application software, so investigating software security problems is of utmost urgency. Demand for Wi-Fi applications is proliferating, but their security problems are growing as well and require effective solutions from researchers. To overcome the shortcomings of the wired equivalent privacy (WEP) algorithm, the existing literature proposed the Wi-Fi protected access (WPA)/WPA2 security schemes. In practical applications, however, WPA/WPA2 still has weaknesses that attackers exploit. To break WPA/WPA2 security, an attacker must obtain the pre-shared key (PSK) in pre-shared key mode, or the master session key (MSK) in enterprise authentication mode; both a PSK and an MSK can be recovered by brute-force cracking attacks. Because many real-world wireless local area networks (LANs) use pre-shared key mode, brute-force cracking of WPA/WPA2-PSK is of practical importance in that context. This article proposes a new mechanism to crack Wi-Fi passwords using a graphics processing unit (GPU) and enhances its efficiency through parallel computing across multiple GPU chips. Experimental results show that the proposed algorithm is effective, and the findings suggest procedures to enhance the security of Wi-Fi networks.
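The key derivation that a WPA/WPA2-PSK brute-force attack must invert is the standard one from IEEE 802.11i: the pairwise master key (PMK) is PBKDF2-HMAC-SHA1 of the passphrase, salted with the SSID, with 4096 iterations and a 256-bit output. The sketch below is illustrative, not the paper's implementation: it derives a PMK and runs a toy serial candidate loop (the `brute_force` helper, the SSID, and the candidate list are invented for the example; a GPU version would evaluate many candidates in parallel, one PBKDF2 computation per thread).

```python
import hashlib

def derive_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA/WPA2 pairwise master key (PMK) from a passphrase.

    Per IEEE 802.11i: PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations,
    32-byte output). The 4096 iterations are what make each brute-force
    guess expensive, and what makes GPU parallelism attractive.
    """
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32
    )

def brute_force(candidates, ssid: str, target_pmk: bytes):
    """Toy serial brute force: return the passphrase whose derived PMK
    matches target_pmk, or None if no candidate matches."""
    for pw in candidates:
        if derive_pmk(pw, ssid) == target_pmk:
            return pw
    return None
```

Because each guess costs 4096 HMAC-SHA1 invocations and the guesses are independent, the workload maps naturally onto thousands of GPU threads, which is the parallelism the abstract's mechanism exploits.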
Deep neural networks (DNNs) are widely employed in intelligent applications such as image and video recognition. However, because of the enormous amount of computation DNNs require, performing DNN inference tasks locally is problematic for resource-constrained Internet of Things (IoT) devices. Existing cloud-based approaches are sensitive to problems such as erratic communication delays and unreliable remote server performance. Using IoT device collaboration to create distributed, scalable DNN task inference is therefore a very promising strategy. Existing research, on the other hand, considers only static split methods in scenarios with homogeneous IoT devices. As a result, there is a pressing need to investigate how to divide DNN tasks adaptively among IoT devices with varying capabilities and resource constraints, and to execute the task inference cooperatively. Two major obstacles confront this research problem: 1) in a heterogeneous, dynamic multi-device environment, it is difficult to estimate the multi-layer inference delay of DNN tasks; and 2) it is difficult to adapt the collaborative inference strategy intelligently in real time. Accordingly, a multi-layer delay prediction model with fine-grained interpretability is proposed first. Then, for DNN inference tasks, evolutionary reinforcement learning (ERL) is employed to adaptively discover an approximately optimal split strategy. Experiments show that the proposed framework provides considerable DNN inference acceleration in a heterogeneous dynamic environment: with 2, 3, and 4 devices, the delay acceleration of the proposed algorithm is 1.81, 1.98, and 5.28 times that of the EE algorithm, respectively.
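To make the split problem concrete, here is a minimal sketch under simplifying assumptions not taken from the paper: per-layer compute costs and per-device speeds are given scalars, devices execute their contiguous layer groups one after another (no pipelining or communication cost), and an exhaustive search over cut points stands in for the paper's ERL search. The function names `split_delay` and `best_split` are invented for the example.

```python
from itertools import combinations

def split_delay(layer_costs, device_speeds, cuts):
    """Total delay when the layer sequence is cut at `cuts` and each
    contiguous group is assigned, in order, to one device.

    Group delay = (sum of its layer costs) / device speed; groups run
    sequentially here, so the delays simply add up.
    """
    bounds = [0, *cuts, len(layer_costs)]
    return sum(
        sum(layer_costs[lo:hi]) / device_speeds[dev]
        for dev, (lo, hi) in enumerate(zip(bounds, bounds[1:]))
    )

def best_split(layer_costs, device_speeds):
    """Exhaustively try every way to cut the layers into one contiguous
    group per device (each device gets at least one layer) and return
    (minimum delay, cut points). A stand-in for the ERL search, which
    scales to settings where enumeration is infeasible."""
    n, k = len(layer_costs), len(device_speeds)
    best = None
    for cuts in combinations(range(1, n), k - 1):
        d = split_delay(layer_costs, device_speeds, cuts)
        if best is None or d < best[0]:
            best = (d, cuts)
    return best
```

For example, with layer costs `[4, 1, 1, 4]` and device speeds `[2, 1]` (the first device is twice as fast), the search places the cut after the third layer, giving the fast device the bulk of the work. The real problem adds exactly the two obstacles the abstract names: the per-layer delays must be predicted rather than given, and the environment changes, so the split must be re-optimized online.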