OBJECTIVES Machine learning (ML) has great potential, but there are few examples of its implementation improving patient outcomes. The thoracic surgeon must be aware of the pertinent ML literature and know how to evaluate this field so that it can be translated safely to patient care. This scoping review provides an introduction to ML applications specific to the thoracic surgeon, covering current applications, limitations and future directions.

METHODS A search of the PubMed database was conducted. Studies were included if they used an ML algorithm to analyse patient information relevant to a thoracic surgeon and reported sufficient detail on the data used, the ML methods and the results. Twenty-two papers met these criteria and were reviewed using a methodological quality rubric.

RESULTS ML demonstrated enhanced preoperative test accuracy, earlier pathological diagnosis, selection of therapies to maximize survival, and prediction of adverse events and survival after surgery. However, only 4 studies performed external validation, only 1 demonstrated improved patient outcomes, nearly all failed to perform model calibration, and only 1 addressed fairness and bias; most models were not generalizable to different populations. Reporting varied considerably across studies, limiting reproducibility.

CONCLUSIONS There is promise but also challenges for ML in thoracic surgery. The transparency of data and algorithm design, and the systemic bias in the data on which models depend, remain issues to be addressed. Although ML has yet to see widespread use in thoracic surgery, it is essential that thoracic surgeons be at the forefront of its eventual safe introduction to the clinic and operating room.
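To make the calibration critique concrete: a risk model's predicted probabilities can be checked against observed event rates with a reliability curve and a Brier score. The sketch below uses scikit-learn on synthetic data; the dataset, model and all values are illustrative assumptions, not drawn from any of the reviewed studies.

    # Minimal sketch: assessing calibration of a binary risk model.
    # Synthetic data stands in for a perioperative risk dataset.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.calibration import calibration_curve
    from sklearn.metrics import brier_score_loss

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    probs = model.predict_proba(X_test)[:, 1]

    # Reliability curve: observed event rate vs. mean predicted risk per bin.
    frac_pos, mean_pred = calibration_curve(y_test, probs, n_bins=10)
    print("Brier score:", brier_score_loss(y_test, probs))
    for p, f in zip(mean_pred, frac_pos):
        print(f"predicted {p:.2f} -> observed {f:.2f}")

A well-calibrated model's binned points lie near the diagonal (predicted risk roughly equals observed rate); discrimination metrics such as AUC alone do not capture this, which is why its near-universal omission in the reviewed studies matters.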
This paper presents the selective use of eye-gaze information in learning human actions in Atari games. Abundant evidence suggests that our eye movements convey a wealth of information about the direction of our attention and our mental states, and encode the information necessary to complete a task. Based on this evidence, we hypothesize that selective use of eye gaze, as a cue for the direction of attention, will enhance learning from demonstration. For this purpose, we propose a selective eye-gaze augmentation (SEA) network that learns when to use the eye-gaze information. The proposed architecture consists of three sub-networks: a gaze prediction network, a gating network and an action prediction network. The gaze prediction network predicts a gaze map from the prior 4 game frames, and this map is used to augment the input frame. The gating network determines whether the predicted gaze map should be used; its output is fed to the action prediction network, which predicts the action at the current frame. To validate this approach, we use the publicly available Atari Human Eye-Tracking And Demonstration (Atari-HEAD) dataset, which consists of 20 Atari games with 28 million human demonstrations and 328 million eye gazes (over game frames) collected from four subjects. We demonstrate the efficacy of selective eye-gaze augmentation in comparison with the state-of-the-art Attention Guided Imitation Learning (AGIL) and Behavior Cloning (BC) approaches. The results indicate that the selective augmentation approach (the SEA network) performs significantly better than both AGIL and BC. Moreover, to demonstrate the significance of selecting when to use gaze through the gating network, we compare our approach with random selection of the gaze. Even in this case, the SEA network performs significantly better, validating the advantage of selectively using gaze in demonstration learning.
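The abstract describes the three-sub-network structure but not its implementation. Below is a minimal PyTorch sketch of that structure; the frame size (84x84 grayscale, stacked 4-deep), all layer sizes and the channel-stacking fusion are illustrative assumptions, not the authors' exact design.

    # Minimal PyTorch sketch of the SEA structure described above.
    # Layer sizes and the fusion scheme are assumed, not taken from the paper.
    import torch
    import torch.nn as nn

    class GazePredictionNet(nn.Module):
        """Predicts a gaze map from a stack of 4 grayscale game frames."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, kernel_size=8, stride=4),
            )
        def forward(self, frames):                   # frames: (B, 4, 84, 84)
            return torch.sigmoid(self.net(frames))   # gaze map: (B, 1, 84, 84)

    class GatingNet(nn.Module):
        """Outputs a scalar gate in [0, 1] deciding whether to use the gaze map."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Flatten(), nn.Linear(4 * 84 * 84, 128),
                                     nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
        def forward(self, frames):
            return self.net(frames)                  # gate: (B, 1)

    class ActionNet(nn.Module):
        """Predicts the action from the (possibly gaze-augmented) current frame."""
        def __init__(self, n_actions=18):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Flatten(), nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
                nn.Linear(256, n_actions),
            )
        def forward(self, frame, gaze_map, gate):
            # Gate the gaze map, then stack it with the current frame as a channel.
            augmented = torch.cat([frame, gate.view(-1, 1, 1, 1) * gaze_map], dim=1)
            return self.net(augmented)               # action logits: (B, n_actions)

    # Usage (shapes only):
    # frames = torch.randn(8, 4, 84, 84)            # 4 prior frames
    # gaze   = GazePredictionNet()(frames)
    # gate   = GatingNet()(frames)
    # logits = ActionNet()(frames[:, -1:], gaze, gate)

In this sketch the gate is a soft multiplier so the whole model stays differentiable and trainable end-to-end with a standard cross-entropy loss on the demonstrated actions; a hard binary gate would be one alternative design choice.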