“…Additionally, factors that produce false negative errors also lower the AUC, especially in predictive studies; for example, a shorter follow-up time reduces the probability that a model observes positive events, thereby causing class imbalance and false negative errors [41]. Moreover, overfitting remains a common problem for ML models and may prevent accurate predictions on unseen datasets, leading to lower AUC values [42]. In this study, the AUC of the studied ML models is modest, which may be attributable to missing values, outliers, the sample size, and the follow-up time.…”
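The imbalance effect described above can be made concrete with the rank-based definition of AUC: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with purely synthetic scores (not data from the study) shows how, when positive events are rare, a single false negative swings the estimate sharply:

```python
# AUC as the probability that a random positive outranks a random negative
# (Mann-Whitney formulation). All scores and labels below are synthetic.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# With several well-separated positive events the estimate is stable:
print(auc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 1, 0, 0, 0]))  # 1.0

# With a single observed event (e.g., short follow-up), one misranked
# case collapses the AUC entirely:
print(auc([0.4, 0.9, 0.8, 0.7], [1, 0, 0, 0]))  # 0.0
```

This is why the snippet above links short follow-up to lower AUC: fewer observed positives means each false negative carries far more weight in the estimate.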
(1) Background: Patients with acute myocardial infarction (AMI) still experience many major adverse cardiovascular events (MACEs), including myocardial infarction, heart failure, kidney failure, coronary events, cerebrovascular events, and death. This retrospective study aims to assess the prognostic value of machine learning (ML) for the prediction of MACEs. (2) Methods: Five hundred patients diagnosed with AMI who had undergone successful percutaneous coronary intervention were included in the study. Logistic regression (LR) analysis was used to assess the association between MACEs and 24 selected clinical variables. Six ML models were developed with five-fold cross-validation on the training dataset, and their ability to predict MACEs was compared to LR on the testing dataset. (3) Results: The MACE rate was 30.6% after a mean follow-up of 1.42 years. Killip classification (Killip IV vs. I, odds ratio 4.386, 95% confidence interval 1.943–9.904), drug compliance (irregular vs. regular, 3.06, 1.721–5.438), age (per year, 1.025, 1.006–1.044), creatinine (per 1 µmol/L, 1.007, 1.002–1.012), and cholesterol (per 1 mmol/L, 0.708, 0.556–0.903) were independent predictors of MACEs. In the training dataset, the best-performing model was the random forest (RDF) model, with an area under the curve of 0.749 (0.644–0.853) and an accuracy of 0.734 (0.647–0.820). In the testing dataset, the RDF model showed the most significant survival difference (log-rank p = 0.017) in distinguishing patients with and without MACEs. (4) Conclusions: The RDF model was superior to the other models for MACE prediction in this study. ML methods are promising for improving optimal predictor selection and clinical outcomes in patients with AMI.
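The five-fold cross-validation scheme mentioned in the Methods can be sketched in a few lines. This is a generic illustration of the splitting step only (the models themselves and the patient data are elided; the 10-sample size here is illustrative):

```python
# Minimal sketch of five-fold cross-validation index splitting.
# Each fold serves once as the validation set while the remaining
# four folds train the model; this is not code from the study.
import random

def five_fold_indices(n, seed=42):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # shuffle once, reproducibly
    folds = [idx[i::5] for i in range(5)]     # deal indices into 5 folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = five_fold_indices(10)
print(len(splits))  # 5 train/validation index pairs
```

In practice a stratified split (preserving the MACE/no-MACE ratio in every fold) would be preferable given the 30.6% event rate, which is exactly the imbalance concern raised in the discussion snippet above.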
“…For example, Sewak et al. analyzed different types of LSTM architectures for intrusion detection systems and demonstrated the benefits of hyper-parameter tuning in LSTM models [27]. Software defect prediction can be applied in many of the engineering fields described [28], and it can be used to compare machine learning and statistical methods for classifying fault and non-fault classes. The Internet of Things (IoT) has been used to automate applications for our needs.…”
Software defect prediction studies aim to predict defect-prone components before the testing stage of the software development process. The main benefit of these prediction models is that testing resources can be allocated to fault-prone modules more effectively. While a few software defect prediction models have been developed for mobile applications, a systematic overview of these studies is still missing. Therefore, we carried out a Systematic Literature Review (SLR) to evaluate how machine learning has been applied to predict faults in mobile applications. This study defined nine research questions, and 47 relevant studies were selected from scientific databases to answer them. Results show that most studies focused on Android applications (48%), supervised machine learning was applied in most studies (92%), and object-oriented metrics were the most preferred. The top five most preferred machine learning algorithms are Naïve Bayes, Support Vector Machines, Logistic Regression, Artificial Neural Networks, and Decision Trees. Only a few studies applied deep learning algorithms, including Long Short-Term Memory (LSTM), Deep Belief Networks (DBN), and Deep Neural Networks (DNN). This is the first study that systematically reviews software defect prediction research focused on mobile applications. It will pave the way for further research in mobile software fault prediction and help both researchers and practitioners in this field.
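To make the review's core setup concrete — a classifier over object-oriented metrics separating fault-prone from clean modules — here is a hedged sketch of a tiny Gaussian Naïve Bayes, the algorithm the review reports as most preferred. The metric names (WMC, CBO) and all training values are synthetic illustrations, not data from any reviewed study:

```python
# Toy Gaussian Naive Bayes over object-oriented metrics, e.g.
# WMC (weighted methods per class) and CBO (coupling between objects).
# Synthetic data for illustration only; not from any reviewed study.
import math

def fit(rows, labels):
    # rows: metric vectors per module; labels: 1 = defect-prone, 0 = clean
    model = {}
    for c in (0, 1):
        members = [r for r, y in zip(rows, labels) if y == c]
        prior = len(members) / len(labels)
        stats = []
        for col in zip(*members):  # per-feature mean and variance
            mu = sum(col) / len(col)
            var = max(1e-6, sum((v - mu) ** 2 for v in col) / len(col))
            stats.append((mu, var))
        model[c] = (prior, stats)
    return model

def predict(model, row):
    def log_lik(c):
        prior, stats = model[c]
        ll = math.log(prior)
        for v, (mu, var) in zip(row, stats):
            ll += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        return ll
    return max((0, 1), key=log_lik)

# Toy training set: [WMC, CBO] per module; high complexity -> defect-prone
X = [[5, 2], [6, 3], [30, 12], [28, 10]]
y = [0, 0, 1, 1]
nb = fit(X, y)
print(predict(nb, [29, 11]))  # 1: high-complexity module flagged
print(predict(nb, [5, 2]))    # 0: simple module classified clean
```

The attraction of Naïve Bayes in this setting, and a plausible reason for its popularity in the reviewed studies, is that it needs only per-class means and variances, so it trains reliably on the small defect datasets typical of mobile projects.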
“…However, the constantly increasing complexity of software requirements has led, on the one hand, to the emergence of technologies such as Kubernetes (Lukša, 2017) for deploying and scaling applications and, on the other hand, has pushed researchers and practitioners to use artificial intelligence technologies more actively to overcome these challenges. A typical example of this trend is found in the field of Search-Based Software Engineering (e.g., Harman & Chicano, 2015; Ruchika et al., 2017; Ramírez et al., 2019), as well as in work on probabilistic reasoning and machine learning in the software life cycle (Balikuddembe et al., 2009; Pandey et al., 2021; Jayagopal et al., 2021; Xu et al., 2016; Dell'Anna et al., 2019). The most popular intelligent techniques for software development are reasoning under uncertainty (mainly Bayesian networks), search-based solutions, and machine learning (Perkusich et al., 2020).…”
Section: Common Situation and Trends in Agile Software Development
This article analyzes the current state of practice in using artificial intelligence methods for software development. Today there are many disparate approaches, models, and practices based on narrow intelligence for decision-making at different stages of the software product life cycle, but almost no solutions have been brought to wide practical use. The article provides a comprehensive overview of the main reasons why Agile implementations fall short of the expected effect and proposes a solution based on a self-organizing knowledge model. Through the heuristic use of transcendental logic in terms of "ontological predicates", such a model makes it possible to formalize the semantic representation of a software project's requirements architecture, providing semantic interoperability and an executable semantic framework for automated ontology generation from unstructured, informal software requirements text. The main benefits of this model are that it is flexible, that it accumulates knowledge without requiring changes to the initial infrastructure, and that the ontology inference engine is part of the mechanism of collective interaction among active knowledge elements rather than an externally programmed system of rules imitating the process of thinking.