Machine learning (ML) integrated with medical imaging has introduced new perspectives in precision diagnostics of high-grade gliomas through radiomics and radiogenomics. This has raised hopes for characterizing noninvasive, in vivo biomarkers that predict patient survival, tumor recurrence, and genomics, thereby encouraging treatments tailored to individual needs. Characterization of tumor infiltration based on pre-operative multi-parametric magnetic resonance imaging (MP-MRI) scans may allow prediction of the loci of future tumor recurrence and thereby aid in planning the course of treatment, such as optimizing the extent of resection and the dose and target area of radiation. Imaging signatures of tumor genomics can help identify the patients who would benefit from certain targeted therapies. Specifying the molecular properties of gliomas, and predicting their changes over time and with treatment, would allow further optimization of therapy. In this article, we provide neuro-oncology, neuropathology, and computational perspectives on the promise of radiomics and radiogenomics for enabling personalized treatment of patients with gliomas, discuss the challenges and limitations of these methods in multi-institutional clinical trials, and suggest ways to mitigate these issues along with future directions.
Background and Motivation: Diagnosis of Parkinson’s disease (PD) is often based on medical attention and clinical signs; it is subjective and offers poor prognostic value. Artificial Intelligence (AI) has played a promising role in the diagnosis of PD. However, it introduces bias due to small sample sizes, poor validation, limited clinical evaluation, and lack of a big data configuration. The purpose of this study is to compute the risk of bias (RoB) automatically. Method: The PRISMA search strategy was adopted to select the best 39 AI studies out of 85 PD studies closely associated with early diagnosis of PD. The studies were used to compute 30 AI attributes (based on 6 AI clusters) using AP(ai)Bias 1.0 (AtheroPointTM, Roseville, CA, USA), and the mean aggregate score was computed. The studies were ranked, and two cutoffs (Moderate-Low (ML) and High-Moderate (MH)) were determined to segregate the studies into three bins: low-, moderate-, and high-bias. Result: The ML and MH cutoffs were 3.50 and 2.33, respectively, yielding 7, 13, and 6 low-, moderate-, and high-bias studies. The best and worst architectures were “deep learning with sketches as outcomes” and “machine learning with electroencephalography,” respectively. We recommend (i) the use of power analysis in a big data framework, (ii) scientific validation of AI models on unseen data, and (iii) clinical evaluation for reliability and stability tests. Conclusion: AI is a vital component for the early diagnosis of PD, and these recommendations must be followed to lower the RoB.
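The two-cutoff binning described above can be sketched in a few lines. This is an illustrative sketch only: the actual AP(ai)Bias 1.0 scoring is proprietary, and the study names and scores below are hypothetical; only the cutoff values (3.50 and 2.33) come from the abstract.

```python
# Illustrative sketch: binning AI studies into low/moderate/high risk-of-bias
# using two cutoffs on a mean aggregate score. Scores and study names are
# hypothetical; the cutoff values are the ones reported in the abstract.

ML_CUTOFF = 3.50  # Moderate-Low boundary
MH_CUTOFF = 2.33  # High-Moderate boundary

def bias_bin(score: float) -> str:
    """Assign a study to a bias bin from its mean aggregate score."""
    if score >= ML_CUTOFF:
        return "low"
    elif score >= MH_CUTOFF:
        return "moderate"
    return "high"

# Hypothetical mean aggregate scores for three studies
scores = {"study_A": 3.9, "study_B": 2.8, "study_C": 1.7}
bins = {name: bias_bin(s) for name, s in scores.items()}
print(bins)  # {'study_A': 'low', 'study_B': 'moderate', 'study_C': 'high'}
```

Higher aggregate scores indicate lower bias here, which is why the Moderate-Low cutoff (3.50) is numerically larger than the High-Moderate cutoff (2.33).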
Brain tumor classification and segmentation across differently weighted MRIs are among the most challenging tasks for many researchers due to the high variability of tumor tissue in texture, structure, and position. Our study is divided into two stages: supervised machine learning-based tumor classification and image processing-based tumor region extraction. Seven methods were used for texture feature generation. We experimented with various state-of-the-art supervised machine learning classification algorithms, such as support vector machines (SVMs), K-nearest neighbors (KNNs), binary decision trees (BDTs), random forests (RFs), and ensemble methods. Then, taking the texture features into account, we evaluated fuzzy C-means (FCM), K-means, and hybrid image segmentation algorithms. The experiments achieved classification accuracies of 94.25%, 87.88%, 89.57%, 96.99%, and 97% with SVM, KNN, BDT, RF, and ensemble methods, respectively, on FLAIR-, T1C-, and T2-weighted MRI, and the hybrid segmentation attained a 90.16% mean Dice score for tumor area segmentation against ground-truth images.
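The classification stage and the Dice evaluation described above can be sketched with scikit-learn. This is a minimal sketch under stated assumptions: synthetic random feature vectors stand in for the seven texture-feature methods applied to MRI slices, and the labels are synthetic; none of this reproduces the study's data or reported accuracies.

```python
# Sketch of the supervised classification stage: texture feature vectors fed
# to several classifiers (SVM, KNN, BDT, RF), plus a Dice score for masks.
# Features and labels are synthetic stand-ins, not MRI data.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))           # 16 texture features per slice
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic tumor/non-tumor label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "BDT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, clf.predict(X_te)):.2%}")

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())
```

The Dice score (twice the intersection over the sum of mask sizes) is the metric behind the 90.16% segmentation figure: 1.0 means perfect overlap with the ground-truth mask, 0.0 means none.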
Radiogenomics, a combination of “Radiomics” and “Genomics” using Artificial Intelligence (AI), has recently emerged as the state-of-the-art science in precision medicine, especially in oncology care. Radiogenomics integrates large-scale quantifiable data extracted from radiological medical images with personalized genomic phenotypes. It builds prediction models through various AI methods to stratify patient risk, monitor therapeutic approaches, and assess clinical outcomes. It has recently shown tremendous achievements in prognosis, treatment planning, survival prediction, heterogeneity analysis, recurrence, and progression-free survival in human cancer studies. Although AI has shown immense performance in various clinical aspects of oncology care, it has several challenges and limitations. This review provides an overview of radiogenomics, with viewpoints on the role of AI in terms of its promise for both computational and oncological aspects, and highlights achievements and opportunities in the era of precision medicine. The review also presents various recommendations to mitigate these obstacles.
Background and Motivation: Cardiovascular disease (CVD) causes the highest mortality globally. With escalating healthcare costs, early non-invasive CVD risk assessment is vital. Conventional methods have shown poor performance compared to more recent and fast-evolving Artificial Intelligence (AI) methods. The proposed study reviews the three most recent paradigms for CVD risk assessment, namely multiclass, multi-label, and ensemble-based methods, in (i) office-based and (ii) stress-test laboratory settings. Methods: A total of 265 CVD-based studies were selected using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) model. Due to their popularity and recent development, the study analyzed the above three paradigms using machine learning (ML) frameworks. We comprehensively review these three methods using attributes such as architecture, applications, pros and cons, scientific validation, clinical evaluation, and AI risk-of-bias (RoB) in the CVD framework. These ML techniques were then extended to mobile and cloud-based infrastructure. Findings: The most popular biomarkers were office-based and laboratory-based measurements, image-based phenotypes, and medication usage. Surrogate carotid scanning for coronary artery risk prediction has shown promising results. Ground truth (GT) selection for AI-based training, along with scientific and clinical validation, is very important for CVD stratification to avoid RoB. The most popular classification paradigm was multiclass, followed by ensemble and multi-label methods. The use of deep learning techniques in CVD risk stratification is at a very early stage of development. Mobile and cloud-based AI technologies are likely to be the future. Conclusions: AI-based methods for CVD risk assessment are most promising and successful. The choice of GT is vital in AI-based models to prevent RoB.
The amalgamation of image-based strategies with conventional risk factors provides the highest stability when using the three CVD paradigms in non-cloud and cloud-based frameworks.
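The distinction between the multiclass and multi-label paradigms named above can be made concrete with a small scikit-learn sketch. The risk factors, labels, and model choice (logistic regression) are illustrative assumptions, not the review's methods: multiclass assigns each patient exactly one mutually exclusive risk grade, whereas multi-label allows several non-exclusive outcomes at once.

```python
# Sketch contrasting the multiclass and multi-label CVD risk paradigms on
# synthetic risk-factor data (hypothetical features and labels, not from
# the review; logistic regression chosen only for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))  # e.g. scaled age, BP, cholesterol, BMI, glucose

# Multiclass: one mutually exclusive risk grade per patient
# (0 = low, 1 = medium, 2 = high)
y_mc = np.digitize(X[:, 0] + X[:, 1], bins=[-0.5, 0.5])
mc = LogisticRegression(max_iter=1000).fit(X, y_mc)

# Multi-label: several non-exclusive outcomes per patient
# (e.g. coronary artery disease yes/no, stroke yes/no)
y_ml = np.column_stack([(X[:, 0] > 0).astype(int), (X[:, 1] > 0).astype(int)])
ml = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, y_ml)

print(mc.predict(X[:2]))  # one risk grade per patient
print(ml.predict(X[:2]))  # one label vector (two outcomes) per patient
```

An ensemble paradigm, by contrast, would combine several such base models (e.g. by voting or stacking) into a single stratification decision.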