Privacy and security concerns have reached a point where they can no longer be ignored. Data breaches and fraud are on the rise, particularly in the banking, healthcare, and government sectors. Many organizations now offer their security specialists bug-reporting programs that help them find flaws in their applications. A data breach on its own does not necessarily constitute a threat or an attack; however, it can enable cyber-attacks in which criminals gain access to machines and networks and steal financial data and confidential information. In this context, this paper proposes an innovative approach to help users avoid online fraud by implementing a Dynamic Phishing Safeguard System (DPSS) using a neural boost phishing protection algorithm that targets phishing and fraud and mitigates the problem of data breaches. DPSS uses 30 different features to predict whether or not a website is a phishing website. In addition, the neural boost phishing protection algorithm combines an Anti-Phishing Neural Algorithm (APNA) and an Anti-Phishing Boosting Algorithm (APBA), whose output is mapped to components such as an IP finder, geolocation, and a location mapper in order to pinpoint the location of vulnerable sites for the user, which makes the system more secure. The system also offers a website blocker and a tracker auditor to give the user control over the system. In the evaluation, APNA achieved an accuracy of 97.10%, while APBA yielded 97.82%. According to these results, DPSS tends to perform better than other models in terms of uniform resource locator (URL) detection and security.
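The boosting component described above can be illustrated with a minimal sketch: a gradient-boosting classifier trained on a 30-dimensional feature vector per website, mirroring the 30 features DPSS uses. The synthetic data, the labeling rule, and the choice of scikit-learn's `GradientBoostingClassifier` are assumptions for illustration only, not the paper's actual APBA implementation.

```python
# Sketch: boosting-based phishing classification over 30 per-URL features.
# Data and labels are synthetic; real features would encode properties such
# as URL length or use of HTTPS (hypothetical examples, not from the paper).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 1000, 30          # 30 features, as in the DPSS design
X = rng.normal(size=(n_samples, n_features))
# Toy decision rule: 1 = phishing, 0 = legitimate
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

A deployed system would extract the feature vector from the URL and page content before calling `clf.predict`, then pass positive predictions on to the downstream components (IP finder, geolocation, location mapper).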
The quality-control process in manufacturing must ensure that the product is free of defects and performs according to the customer's expectations. Maintaining product quality at the highest level is essential for keeping an edge over the competition, so manufacturers invest substantial resources in quality control and quality assurance. On the assembly line, parts arrive at constant intervals for assembly; they must first meet the quality criteria before being sent to the line, where parts and subparts are assembled into the final product. Once the product has been assembled, it is inspected and tested again before it is delivered to the customer. Because manufacturers rely mostly on visual quality inspection, bottlenecks can arise before and after assembly, and a manufacturer may suffer losses if the assembly line is slowed down by such a bottleneck. To improve quality, state-of-the-art sensors are being used to replace visual inspections, and machine learning is used to help determine which parts will fail. This paper presents a review of machine-learning-based quality assessment in various production processes, along with a summary of the four industrial revolutions in manufacturing, highlighting the need to detect anomalies in assembly lines, the need to detect assembly-line features, the use of machine learning algorithms in manufacturing, the research challenges, the computing paradigms, and the use of state-of-the-art sensors in Industry 4.0.
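Anomaly detection on assembly-line sensor data, as motivated above, can be sketched with an Isolation Forest: parts whose sensor readings isolate easily from the bulk are flagged for inspection instead of being visually checked. The sensor dimensions, data distribution, and contamination rate are illustrative assumptions, not a specific production setup or a method from the review.

```python
# Sketch: flagging anomalous assembly-line parts from multivariate sensor
# readings with an Isolation Forest (one possible ML technique; the review
# surveys many). All data here are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # in-spec parts
faulty = rng.normal(loc=6.0, scale=1.0, size=(10, 4))    # out-of-spec parts
readings = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.02, random_state=1).fit(readings)
labels = detector.predict(readings)       # -1 = anomaly, 1 = normal
n_flagged = int((labels == -1).sum())
print(f"parts flagged for inspection: {n_flagged}")
```

In a real line, `readings` would stream from the sensors replacing visual inspection, and flagged parts would be diverted before reaching assembly, avoiding the bottleneck described above.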
Technology plays a significant role in our daily lives as real-time applications and services such as video surveillance systems and the Internet of Things (IoT) develop rapidly. With the introduction of fog computing, a large amount of processing for IoT applications is performed by fog devices. However, a fog device's reliability may be affected by insufficient resources at fog nodes, which may fail to process the IoT applications. There are also clear maintenance challenges associated with frequent read-write operations and hazardous edge environments. To increase reliability, scalable proactive fault-prediction methods are needed that anticipate failures caused by inadequate resources in fog devices. This paper proposes a Recurrent Neural Network (RNN)-based method that proactively predicts faults arising from insufficient resources in fog devices, based on a conceptual Long Short-Term Memory (LSTM) network and a novel Computation Memory and Power (CRP) rule-based network policy. The proposed CRP is built upon the LSTM network to identify the precise cause of failure due to inadequate resources. Within the proposed conceptual framework, fault detectors and fault monitors prevent fog-node outages while services are provided to IoT applications. The results show that the LSTM combined with the CRP network policy achieves a prediction accuracy of 95.16% on the training data and 98.69% on the testing data, significantly outperforming existing machine learning and deep learning techniques. Furthermore, the presented method predicts proactive faults with a normalized root mean square error of 0.017, providing an accurate prediction of fog-node failure.
Experiments with the proposed framework show a significant improvement in predicting inadequate resources of fog nodes, with minimal delay, low processing time, improved accuracy, and faster failure prediction in comparison with traditional LSTM, Support Vector Machine (SVM), and Logistic Regression models.
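The LSTM-based forecasting idea behind the approach above can be sketched as follows: an LSTM reads a window of recent resource-utilization readings from a fog node and forecasts the next reading, and a forecast approaching saturation would trigger the fault monitor. The synthetic utilization trace, window length, network size, and threshold logic are all assumptions for illustration; this is not the paper's CRP policy or its evaluation setup.

```python
# Toy sketch of sequence-based proactive fault prediction with an LSTM:
# forecast a fog node's next CPU-utilization reading from a look-back window.
# All data and hyperparameters are synthetic assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
# Synthetic utilization trace: slow ramp toward saturation plus noise
t = torch.linspace(0, 1, 200)
series = 0.6 * t + 0.05 * torch.randn(200)

win = 10  # look-back window length (assumed)
X = torch.stack([series[i:i + win] for i in range(len(series) - win)]).unsqueeze(-1)
y = series[win:].unsqueeze(-1)

class Forecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next reading

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                      # full-batch training on the toy trace
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = model(X)
    nrmse = (nn.functional.mse_loss(pred, y).sqrt() / (y.max() - y.min())).item()
print(f"NRMSE: {nrmse:.3f}")
```

In the framework described above, such a forecaster would feed the fault detector, which compares predicted resource levels against node capacity and migrates IoT workloads before an outage occurs.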