The term Cleaner Production (CP) for production companies is regarded as influential in achieving sustainable production. CP mainly deals with the three R's: reduce, reuse, and recycle. For software enterprises, software reuse plays a pivotal role. Software reuse is the process of producing new products or software from existing software by updating it. Data mining is used to extract useful information from existing software. The algorithms used for software reuse face issues related to maintenance cost, accuracy, and performance. Moreover, currently used algorithms do not give accurate results on whether a software component can be reused. Machine learning gives strong results in predicting whether a given software component is reusable. This paper introduces an integrated Random Forest and Gradient Boosting machine learning algorithm (RFGBM) that tests the reusability of given software code considering object-oriented parameters such as cohesion, coupling, cyclomatic complexity, bugs, number of children, and depth of inheritance tree. Further, the proposed algorithm is compared with the J48, AdaBoostM1, LogitBoost, PART, OneR, LMT, JRip, and DecisionStump algorithms. Performance metrics such as accuracy, error rate, Relative Absolute Error, and Mean Absolute Error are improved using RFGBM. The algorithm also applies data preprocessing with an unsupervised filter to remove missing values for efficiency improvement. The proposed algorithm outperforms existing algorithms in terms of these performance parameters.
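The combination described above can be illustrated as a soft-voting ensemble of a Random Forest and a Gradient Boosting classifier over the named object-oriented metrics. This is a minimal sketch under stated assumptions, not the paper's implementation: the feature values, the labeling rule, and all data below are synthetic, used only to show the ensemble wiring.

```python
# Hypothetical sketch of an RF + GBM soft-voting ensemble for reusability
# prediction. Features mirror the paper's object-oriented metrics; the data
# and labeling rule are invented for illustration.
import random

from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)

random.seed(0)
FEATURES = ["cohesion", "coupling", "cyclomatic_complexity",
            "bugs", "num_children", "depth_inheritance_tree"]

def synth_component():
    """One synthetic component; low coupling + complexity -> reusable (toy rule)."""
    x = [random.random() for _ in FEATURES]
    y = int(x[1] + x[2] < 1.0)
    return x, y

data = [synth_component() for _ in range(400)]
X, y = [d[0] for d in data], [d[1] for d in data]

# Soft voting averages the two models' class probabilities.
rfgbm = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("gbm", GradientBoostingClassifier(random_state=0))],
    voting="soft")
rfgbm.fit(X[:300], y[:300])
acc = rfgbm.score(X[300:], y[300:])
print(f"holdout accuracy: {acc:.2f}")
```

On real data one would substitute metrics extracted from the codebase (e.g., via a static-analysis tool) for the synthetic features.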
Healthcare organizations and Health Monitoring Systems generate large volumes of complex data, which offer the opportunity for innovative investigations in medical decision making. In this paper, we propose a beetle swarm optimization and adaptive neuro-fuzzy inference system (BSO-ANFIS) model for heart disease and multi-disease diagnosis. The main components of our analytics pipeline are the modified crow search algorithm, used for feature extraction, and an ANFIS classification model whose parameters are optimized by means of a BSO algorithm. The accuracy achieved in heart disease detection is 99.1% with 99.37% precision. In multi-disease classification, the accuracy achieved is 96.08% with 98.63% precision. The results from both tasks prove the comparative advantage of the proposed BSO-ANFIS algorithm over the competitor models.
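The BSO step above can be sketched with a single-beetle (beetle-antennae-style) search, a simple member of the beetle swarm optimization family. This is a hedged illustration under stated assumptions: the quadratic `loss` stands in for the real ANFIS validation error, and the two tuned values stand in for hypothetical ANFIS premise parameters; nothing here is taken from the paper's implementation.

```python
# Beetle-antennae-style search (a minimal single-beetle BSO variant).
# The beetle probes the objective at two "antennae" points and moves toward
# the one with lower loss, shrinking its sensing range and step over time.
import math
import random

random.seed(1)

def loss(p):
    # Stand-in objective: minimum at p = (0.5, -0.2). A real run would
    # evaluate ANFIS validation error here.
    return (p[0] - 0.5) ** 2 + (p[1] + 0.2) ** 2

def beetle_search(dim=2, steps=200, d0=0.5, step0=0.5):
    x = [random.uniform(-1, 1) for _ in range(dim)]
    xbest, fbest = list(x), loss(x)
    d, step = d0, step0
    for _ in range(steps):
        # Random unit direction for the antennae.
        b = [random.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(v * v for v in b))
        b = [v / norm for v in b]
        left = [xi + d * bi for xi, bi in zip(x, b)]
        right = [xi - d * bi for xi, bi in zip(x, b)]
        # Step toward whichever antenna senses a lower loss.
        sign = 1.0 if loss(left) < loss(right) else -1.0
        x = [xi + sign * step * bi for xi, bi in zip(x, b)]
        if loss(x) < fbest:
            xbest, fbest = list(x), loss(x)
        d, step = d * 0.95, step * 0.95
    return xbest, fbest

best, fbest = beetle_search()
print(best, fbest)
```

A full BSO would run a population of such beetles with swarm-style information sharing; the single-beetle loop shows the core sensing-and-stepping rule.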
Complex, large-scale scientific workflow applications can be executed effectively on the cloud. The performance of cloud computing depends heavily on task scheduling. Optimal workflow scheduling is still a challenge that needs to be addressed due to conflicting objectives and the increasing demand for quality of service. Task scheduling is an NP-hard problem due to its complexity. Newly introduced methods for resolving the task scheduling problem struggle to exploit all aspects of cloud computing. In this article, we study the joint optimization of cost and makespan of scheduling workflows in infrastructure-as-a-service clouds and propose a new workflow scheduling scheme using deep learning. In this scheme, a deep-Q learning-based heterogeneous earliest-finish-time (DQ-HEFT) algorithm is developed, which closely integrates the deep learning mechanism with the task scheduling heuristic HEFT. The WorkflowSim simulator is used for experiments on real-world and synthetic workflows. The experimental results demonstrate the efficiency of our proposed approach compared with existing algorithms. This technique achieves significantly better makespan and speed metrics with a remarkably higher volume of data and runs faster than existing workflow scheduling algorithms in cloud computing environments.
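The HEFT heuristic that DQ-HEFT builds on prioritizes tasks by their upward rank: rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s)), where w is mean computation cost and c is mean communication cost. The following sketch computes that ordering on a tiny illustrative DAG; the graph and costs are invented, not a WorkflowSim workload.

```python
# Upward-rank computation for HEFT task prioritization on a toy DAG.
from functools import lru_cache

# DAG: task -> list of (successor, mean communication cost)
succs = {"A": [("B", 2.0), ("C", 3.0)],
         "B": [("D", 1.0)],
         "C": [("D", 2.0)],
         "D": []}
w = {"A": 5.0, "B": 4.0, "C": 6.0, "D": 3.0}  # mean computation costs

@lru_cache(maxsize=None)
def rank_u(t):
    # Exit tasks rank at their own computation cost; others add the most
    # expensive downstream path (communication + successor rank).
    if not succs[t]:
        return w[t]
    return w[t] + max(c + rank_u(s) for s, c in succs[t])

# HEFT schedules tasks in decreasing upward-rank order.
order = sorted(w, key=rank_u, reverse=True)
print([(t, rank_u(t)) for t in order])  # A=19.0, C=11.0, B=8.0, D=3.0
```

DQ-HEFT, as described in the abstract, replaces parts of this heuristic ordering/placement decision with a learned deep-Q policy; the rank computation itself is the standard HEFT component.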
Edge computing technology has become a keystone of future intelligent transportation systems, especially in smart cities, because it processes data near the user's location, at the edge of the cloud. Generally, in smart cities where distributed things have access to computational resources, data transfer becomes inevitable, and high latency can result in critical situations. Although numerous technologies have emerged for improving data communication among geo-distributed devices served from the cloud, they still suffer from low learning performance. To address these challenges, an innovative artificial intelligence (AI)-based edge node (E-Node) algorithm is implemented to optimize edge-to-edge learning for well-organized data migration. To attain high reliability, an AI K-means neural network (KNN) and a convolutional neural network (CNN) are used initially for preprocessing and filtering the edge nodes. Further, the proposed E-Node algorithm outperforms the existing optimization techniques through edge-to-edge computing. The reliability performance is increased by reducing the node optimization time from 132 to 98 ms for 25 kb of data, and the data transmission time is reduced from 93 to 45 ms for 80 kb of data, thus reducing latency in an edge-envisioned environment.
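The K-means-based node filtering mentioned above can be illustrated with plain k-means clustering. This is a hedged approximation: the paper's "K-means neural network" is stood in for by ordinary k-means, and the node measurements below are invented, grouping edge nodes by (latency, load) so the low-latency cluster can be kept for data migration.

```python
# Toy k-means filter for edge nodes: cluster nodes by (latency_ms, load),
# then keep the cluster whose center has the lower latency.
import random

random.seed(2)

def kmeans(points, k=2, iters=20):
    centers = random.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each node to its nearest center (squared Euclidean distance).
        assign = [min(range(k),
                      key=lambda j: sum((p - c) ** 2
                                        for p, c in zip(pt, centers[j])))
                  for pt in points]
        # Recompute each center as the mean of its members.
        for j in range(k):
            members = [pt for pt, a in zip(points, assign) if a == j]
            if members:
                centers[j] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return centers, assign

# (latency_ms, normalized load) for six hypothetical edge nodes
nodes = [(45, 0.2), (50, 0.3), (48, 0.25), (130, 0.8), (125, 0.7), (140, 0.9)]
centers, assign = kmeans(nodes, k=2)
fast = min(range(2), key=lambda j: centers[j][0])
kept = [n for n, a in zip(nodes, assign) if a == fast]
print("low-latency nodes:", kept)
```

In the E-Node setting the kept cluster would then feed the CNN stage; here the point is only the clustering-based filtering step.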
INTRODUCTION
Artificial intelligence (AI) frameworks help to solve problems involving the learning abilities of computers, language processing, and speech recognition in a highly reliable and accurate manner.1 The smart-city concept endows cities with intelligence and spans smart vehicles, the smart grid, and smart healthcare, building on deep learning. The Internet of Vehicles (IoV)2,3 lays the foundation for several new transportation systems.4 IoV helps to improve the efficiency of transportation, reduce the accident rate, and reduce the energy consumption of vehicles.5,6 The data generated by these smart vehicles are stored in the cloud. However, it is difficult to handle the many request messages from clients and process them during a demanding learning task. The proposed system uses a new model to avoid centralized cloud computing, which reduces communication latency and improves quality by provisioning various resources7 near the terminal devices for latency-sensitive tasks. The use of edge-to-edge cooperative AI increases learning efficiency and quality of service.