Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on challenging points that need to be clarified when developing AI applications as clinical decision support systems in real-world contexts. Methods: A narrative review was performed, including a critical assessment of articles published between 1989 and 2021 that guided the challenging sections. Results: We first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks that can process images directly. The data curation section covers technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (compensating for differences in imaging protocols that typically generate noise in non-AI imaging studies), and federated learning. We then dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; data augmentation procedures for working with limited and unbalanced datasets; and the interpretability of AI models (the so-called black-box issue). Pros and cons of choosing ML versus DL for AI applications in medical imaging are finally presented in a synoptic way. Conclusions: Biomedicine and healthcare are among the most important fields for AI applications, and medical imaging is probably the most suitable and promising domain. Clarifying specific challenging points facilitates the development of such systems and their translation to clinical practice.
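The ML/radiomics phases mentioned above (feature selection, then training/validation/testing) can be sketched in a few lines. This is a toy illustration under assumed conventions, not the review's actual procedure: the split fractions, the univariate class-mean filter, and all names are illustrative choices.

```python
import random

def split_cases(features, labels, fractions=(0.6, 0.2, 0.2), seed=0):
    """Shuffle patients and split them into train/validation/test partitions."""
    idx = list(range(len(features)))
    random.Random(seed).shuffle(idx)
    n_train = int(fractions[0] * len(idx))
    n_val = int(fractions[1] * len(idx))
    parts = (idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:])
    return [([features[i] for i in p], [labels[i] for i in p]) for p in parts]

def select_features(X, y, k=1):
    """Rank features by the absolute difference of their class means
    (a toy univariate filter) and keep the k most discriminative ones."""
    n_feat = len(X[0])
    scores = []
    for f in range(n_feat):
        class0 = [row[f] for row, lab in zip(X, y) if lab == 0]
        class1 = [row[f] for row, lab in zip(X, y) if lab == 1]
        scores.append(abs(sum(class1) / len(class1) - sum(class0) / len(class0)))
    return sorted(range(n_feat), key=lambda f: -scores[f])[:k]

# Feature 0 separates the two classes; feature 1 is constant noise.
X = [[0.0, 5.0], [0.1, 5.0], [1.0, 5.0], [1.1, 5.0]]
y = [0, 0, 1, 1]
train, val, test = split_cases(X, y)
kept = select_features(X, y, k=1)
```

In a real pipeline the filter would be applied to the training partition only, and the held-out test partition would be touched once, at the end, to report performance.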
Lung cancer accounts for more deaths worldwide than any other cancer. To provide the most effective care to patients with such aggressive tumours, radiomics is emerging as a novel and promising research field that aims to extract knowledge from diagnostic images in the form of quantitative measures, for prognostic and predictive purposes. This knowledge could be used to optimize current treatments and maximize their efficacy. To this end, we study the use of such quantitative biomarkers, computed from CT images of patients affected by non-small cell lung cancer, to predict overall survival. This work makes two main contributions: first, we consider different volumes of interest for the same patient to determine whether the volume surrounding the visible lesions provides useful information; second, we introduce 3D Local Binary Patterns, texture measures that have been scarcely explored in radiomics. As further validation, we show that the proposed signature outperforms not only the features automatically computed by a deep learning-based approach, but also a state-of-the-art signature based on other handcrafted features.
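The core of a 3D Local Binary Pattern is easy to show in miniature. The following sketch encodes a 3×3×3 voxel patch as a 26-bit code, one bit per neighbour; the neighbour ordering and the ≥ threshold rule here are illustrative assumptions, and the paper's exact encoding may differ.

```python
def lbp3d_code(patch):
    """Encode a 3x3x3 voxel patch as a 26-bit code: one bit per neighbour,
    set when the neighbour's intensity is >= the centre voxel's."""
    centre = patch[1][1][1]
    code, bit = 0, 0
    for z in range(3):
        for y in range(3):
            for x in range(3):
                if (z, y, x) == (1, 1, 1):
                    continue  # skip the centre voxel itself
                if patch[z][y][x] >= centre:
                    code |= 1 << bit
                bit += 1
    return code

# Uniform patch: every neighbour ties the centre, so all 26 bits are set.
flat = [[[5] * 3 for _ in range(3)] for _ in range(3)]
# Bright centre surrounded by darker voxels: no bit is set.
peak = [[[0] * 3 for _ in range(3)] for _ in range(3)]
peak[1][1][1] = 9
```

A texture descriptor for a whole volume of interest is then typically the histogram of such codes over all interior patches.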
The year 2020 was characterized by the COVID-19 pandemic, which had caused, by the end of March 2021, more than 2.5 million deaths worldwide. Since the beginning, besides the laboratory tests used as the gold standard, many applications have applied deep learning algorithms to chest X-ray images to recognize COVID-19-infected patients. In this context, we found that convolutional neural networks perform well on a single dataset but struggle to generalize to other data sources. To overcome this limitation, we propose a late fusion approach that combines the outputs of several state-of-the-art CNNs, introducing a novel method for constructing an optimal ensemble by determining which and how many base learners should be aggregated. This choice is driven by a two-objective function that maximizes, on a validation set, both the accuracy and the diversity of the ensemble. A wide set of experiments on several publicly available datasets, accounting for more than 92,000 images, shows that the proposed approach achieves average recognition rates of up to 93.54% when tested on external datasets.
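The ensemble construction described above can be sketched as an exhaustive search over subsets of base learners, scored on a validation set. The scalarization used here (accuracy plus mean pairwise disagreement) and the restriction to odd-sized subsets are simplifying assumptions of this sketch; the paper optimizes its two objectives jointly.

```python
from itertools import combinations

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

def disagreement(a, b):
    """Fraction of samples on which two learners disagree (a diversity proxy)."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def pick_ensemble(pool, y_val):
    """Return the odd-sized subset of learners with the best accuracy +
    diversity score (odd sizes keep the 2-class majority vote tie-free)."""
    names = sorted(pool)
    best, best_score = None, -1.0
    for r in range(1, len(names) + 1, 2):
        for subset in combinations(names, r):
            votes = [max((0, 1), key=[pool[n][i] for n in subset].count)
                     for i in range(len(y_val))]
            pairs = list(combinations(subset, 2))
            div = (sum(disagreement(pool[a], pool[b]) for a, b in pairs) / len(pairs)
                   if pairs else 0.0)
            score = accuracy(votes, y_val) + div
            if score > best_score:
                best, best_score = subset, score
    return best, best_score

# Three hypothetical base learners' predictions on a 4-image validation set.
y_val = [1, 1, 0, 0]
pool = {"A": [1, 1, 0, 0], "B": [1, 1, 0, 1], "C": [0, 1, 0, 0]}
chosen, score = pick_ensemble(pool, y_val)
```

Even though learner A alone is perfect on the validation set, the diversity term favours aggregating all three, which is exactly the behaviour a two-objective criterion is meant to encourage.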
Background: To differentiate malignant from benign enhancing foci on breast magnetic resonance imaging (MRI) through a radiomic signature. Methods: Forty-five enhancing foci in 45 patients were included in this retrospective study, with needle biopsy or imaging follow-up serving as the reference standard. There were 12 malignant and 33 benign lesions. Eight benign lesions confirmed by over 5 years of negative follow-up and 15 histopathologically confirmed malignant lesions were added to the dataset to provide reference cases for the machine learning analysis. All MRI examinations were performed with a 1.5-T scanner. One three-dimensional T1-weighted unenhanced sequence was acquired, followed by four dynamic sequences after intravenous injection of 0.1 mmol/kg of gadobenate dimeglumine. Enhancing foci were segmented by an expert breast radiologist, over 200 radiomic features were extracted, and an evolutionary machine learning method ("training with input selection and testing") was applied. For each classifier, sensitivity, specificity, and accuracy were calculated as point estimates and 95% confidence intervals (CIs). Results: A k-nearest neighbour classifier based on 35 selected features was identified as the best performing machine learning approach. Considering both the 45 enhancing foci and the 23 additional cases, this classifier showed a sensitivity of 27/27 (100%, 95% CI 87-100%), a specificity of 37/41 (90%, 95% CI 77-97%), and an accuracy of 64/68 (94%, 95% CI 86-98%). Conclusion: This preliminary study showed the feasibility of a radiomic approach for the characterisation of enhancing foci on breast MRI.
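The abstract reports each metric as a fraction with a 95% confidence interval. As an illustration of how such intervals are obtained, here is a Wilson score interval for a binomial proportion; this is one common choice, and the study may have used a different method (e.g. Clopper-Pearson).

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for `successes` out of `n` trials."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

sens_lo, sens_hi = wilson_ci(27, 27)   # sensitivity 27/27
spec_lo, spec_hi = wilson_ci(37, 41)   # specificity 37/41
acc_lo, acc_hi = wilson_ci(64, 68)     # accuracy 64/68
```

Note that for a proportion of 27/27 the interval's lower bound stays below 100% (roughly 0.87 here), which is why the abstract can honestly report 100% sensitivity with a CI of 87-100%.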
Vestibular schwannomas, also known as acoustic neuromas, are benign primary intracranial tumors of the myelin-forming cells of the eighth cranial nerve. Stereotactic radiosurgery is one of the available therapies that can effectively control tumor growth, and it can be performed using the CyberKnife robotic device. However, this therapy may have side effects, and its efficacy can only be assessed over a follow-up period of up to two years. In this respect, being able to forecast the treatment response from the data collected during the initial and routine follow-up MR examinations would be a valuable support when planning a personalised therapy. This manuscript therefore introduces a machine learning-based radiomics approach that first computes quantitative biomarkers from MR images and then predicts the treatment response, also taking into account the class skewness of the dataset.
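One standard way to take class skewness into account is to weight each class inversely to its frequency, so the rare class contributes as much to the training loss as the common one. This weighting scheme is an illustrative assumption of this sketch, not necessarily the paper's method, and the "responder"/"non-responder" labels are hypothetical.

```python
def class_weights(labels):
    """Inverse-frequency weight n / (n_classes * count_c) for each class c,
    so that the total weight assigned to every class is equal."""
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    n, k = len(labels), len(counts)
    return {lab: n / (k * c) for lab, c in counts.items()}

# 8 responders vs 2 non-responders: the minority class gets a 4x larger weight.
weights = class_weights(["resp"] * 8 + ["non-resp"] * 2)
```

These per-class weights can then be passed to most classifiers (e.g. as sample weights), making errors on the minority class proportionally more costly.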
The year 2020 was marked by the worldwide COVID-19 pandemic, which caused over 2.5 million deaths by the end of February 2021. Different methods have been established since the beginning to identify infected patients and restrict the spread of the virus. In addition to laboratory analysis, used as the gold standard, several applications have been developed that apply deep learning algorithms to chest X-ray (CXR) images to diagnose patients affected by COVID-19. The literature shows that convolutional neural networks (CNNs) perform well on a single image dataset, but fail to generalize to other sources of data. To overcome this limitation, we present a late fusion approach in which multiple CNNs collaborate to diagnose the CXR scan of a patient, improving generalizability. Experiments on three publicly available datasets show that the ensemble of CNNs outperforms stand-alone networks, achieving promising performance not only in cross-validation, but also under external validation, with an average accuracy of 95.18%.
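The late fusion idea can be reduced to a few lines: each CNN emits per-class probabilities for the same CXR, the vectors are averaged, and the class with the highest averaged probability is the ensemble's diagnosis. Simple averaging is one common fusion rule used here for illustration; the paper's exact combiner may differ.

```python
def late_fusion(model_probs):
    """Average per-class probability vectors from several models and return
    (predicted class index, averaged vector)."""
    n_models = len(model_probs)
    n_classes = len(model_probs[0])
    avg = [sum(p[c] for p in model_probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Two of three hypothetical networks lean towards class 1 (e.g. "COVID-19"),
# so the ensemble's averaged vector also favours class 1.
pred, avg = late_fusion([[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]])
```

Because fusion happens on the outputs rather than inside the networks, base learners trained on different data sources can be combined without retraining, which is what makes this a "late" fusion scheme.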