Background: Despite excellent prediction performance, the lack of interpretability has undermined the value of deep learning algorithms in clinical practice. To overcome this limitation, the attention mechanism has been introduced to clinical research as an explanatory modeling method. However, the potential limitations of this attractive method have not been clarified for clinical researchers, and there has been a lack of introductory material explaining attention mechanisms to them.

Objective: The aim of this study was to introduce the basic concepts and design approaches of attention mechanisms, and to empirically assess the potential limitations of current attention mechanisms in terms of prediction and interpretability performance.

Methods: First, the basic concepts and several key considerations regarding attention mechanisms were identified. Second, four approaches to attention mechanisms were suggested according to a two-dimensional framework based on degrees of freedom and uncertainty awareness. Third, prediction performance, probability reliability, concentration of variable importance, consistency of attention results, and generalizability of attention results to conventional statistics were assessed in a diabetes classification modeling setting. Fourth, the potential limitations of attention mechanisms were considered.

Results: Prediction performance was very high for all models. Probability reliability was high in models with uncertainty awareness. Variable importance was concentrated in a few variables when uncertainty awareness was not considered, whereas the consistency of attention results was high when it was. The generalizability of attention results to conventional statistics was poor regardless of the modeling approach.

Conclusions: The attention mechanism is an attractive technique with the potential to be very promising in the future. However, it may not yet be advisable to rely on this method to assess variable importance in clinical settings. Therefore, along with theoretical studies enhancing attention mechanisms, more empirical studies investigating their potential limitations should be encouraged.
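The abstract describes reading "variable importance" off attention weights in a tabular (diabetes) classification model. As a minimal sketch of how such a per-variable attention layer works, the following forward pass scores each input variable, normalizes the scores with a softmax so they sum to 1, and reweights the inputs before a linear classifier. All names, shapes, and weights here are illustrative assumptions, not the study's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_forward(X, W_score, w_out):
    """One forward pass of per-variable attention for tabular data (sketch).

    X:       (n_samples, n_features) input matrix
    W_score: (n_features, n_features) weights producing one score per variable
    w_out:   (n_features,) weights of the final linear classifier
    Returns (predicted probabilities, attention weights).
    """
    scores = X @ W_score                  # one relevance score per input variable
    attn = softmax(scores, axis=1)        # attention weights sum to 1 per sample
    weighted = X * attn                   # reweight each variable by its attention
    logits = weighted @ w_out
    prob = 1.0 / (1.0 + np.exp(-logits))  # sigmoid for a binary outcome
    return prob, attn

# Toy data: 5 patients, 8 clinical variables (random weights, illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
W_score = rng.normal(size=(8, 8)) * 0.1
w_out = rng.normal(size=8)

prob, attn = attention_forward(X, W_score, w_out)
# Averaging attention over samples gives one crude notion of "variable importance";
# the study's caution is that such scores may not generalize to conventional statistics.
importance = attn.mean(axis=0)
```

Because the softmax forces the weights to compete, importance can concentrate in a few variables, which is consistent with the concentration effect the study reports for models without uncertainty awareness.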
BACKGROUND: Intraoperative hypotension (IOH) is associated with an increased risk of postoperative complications, and in recent years various models for IOH prediction based on high-dimensional signal data have been developed. Given the association between high-dimensional data and overfitting, it is important to establish a strategy to prevent overfitting; however, there has been little discussion of such strategies.

OBJECTIVE: This work aimed to develop an overfitting-resistant deep learning model that uses preoperative patient data along with intraoperative bio-signal information to predict IOH approximately 5 minutes before its occurrence.

METHODS: Mean arterial blood pressure (MBP, recorded at 2-second intervals) and electronic medical records of 990 patients from the open-source database VitalDB were integrated for this study. IOH was defined as an MBP <65 mmHg sustained for >1 minute. The proposed deep learning model uses the dropout method to prevent overfitting and a permutation method to reduce the model's dependence on American Society of Anesthesiologists (ASA) physical status: the ASA status was permuted during model training. The primary outcome was evaluated in terms of the area under the receiver operating characteristic curve (AUROC).

RESULTS: The model with the permutation method performed better (AUROC 0.842, 95% confidence interval [CI] 0.838-0.845) than the model without it (AUROC 0.830, 95% CI 0.825-0.835). Furthermore, the model with both the permutation and dropout methods exhibited the best performance (AUROC 0.862, 95% CI 0.859-0.861).

CONCLUSIONS: This work demonstrated the effectiveness of the permutation method in preventing overfitting. Introducing the permutation of ASA status and dropout methods into a deep learning model can prevent overfitting and improve the accuracy of IOH prediction.
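The two regularization techniques named in the abstract can be sketched as follows: permuting the ASA-status column within each training batch (so the model cannot memorize that single variable) and inverted dropout (randomly zeroing activations during training). This is a hedged illustration under assumed data shapes, not the paper's actual implementation:

```python
import numpy as np

def permute_asa_column(batch, asa_index, rng):
    """Permutation method (sketch): shuffle the ASA-status column within the
    batch so its pairing with each patient's other variables is broken,
    weakening the model's reliance on ASA status alone."""
    permuted = batch.copy()
    permuted[:, asa_index] = rng.permutation(permuted[:, asa_index])
    return permuted

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: zero each unit with probability `rate` during
    training and rescale the survivors so the expected activation is kept."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

# Toy batch: 16 patients, 6 preoperative variables; column 3 plays "ASA status".
rng = np.random.default_rng(42)
batch = rng.normal(size=(16, 6))
asa_index = 3

shuffled = permute_asa_column(batch, asa_index, rng)  # applied each training step
hidden = dropout(batch, rate=0.5, rng=rng)            # applied to hidden activations
```

Note that the permutation leaves the column's marginal distribution intact (the same values, reordered), so the model still sees realistic ASA values; only the per-patient association is removed.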