Background: Primary progressive aphasia (PPA) is a clinical syndrome characterized by neurodegeneration of the brain's language systems. Three main clinical forms (non-fluent, semantic, and logopenic PPA) have been recognized, but the applicability of this classification and its capacity to predict the underlying pathology are controversial. We aimed to study FDG-PET imaging data in a large consecutive case series of patients with PPA in order to cluster them into subtypes according to regional brain metabolism. Methods: 122 FDG-PET imaging studies, belonging to 91 PPA patients and 28 healthy controls, were included. We applied hierarchical agglomerative cluster analysis with Ward's linkage method, an unsupervised clustering algorithm, and conducted voxel-based brain mapping analysis to evaluate the pattern of hypometabolism of each identified cluster. Results: Cluster analysis confirmed the three current PPA variants, but the optimal number of clusters according to the Davies-Bouldin index was six PPA subtypes. This classification resulted from splitting the non-fluent variant into three subtypes and logopenic PPA into two. Voxel-based brain mapping analysis displayed a different pattern of hypometabolism for each PPA group. The new subtypes also showed different clinical courses and were predictive of amyloid imaging results. Conclusion: Our study found that there are more subtypes of PPA than the three already recognized. These new subtypes were more predictive of clinical course and showed distinct neuroimaging patterns. Our results support the usefulness of FDG-PET in evaluating PPA, and the applicability of computational methods to the analysis of brain metabolism for improving the classification of neurodegenerative disorders.
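The clustering step described in this abstract (Ward-linkage agglomerative clustering, with the Davies-Bouldin index used to choose the number of clusters) can be sketched with scikit-learn. The three well-separated Gaussian blobs below are an illustrative assumption standing in for regional metabolism profiles, not the study's data.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
# Synthetic stand-in for regional FDG-PET metabolism values
# (rows = scans, columns = brain regions); real data would come
# from the imaging pipeline, not random draws.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(40, 10)) for c in (0.0, 3.0, 6.0)])

# Ward-linkage hierarchical clustering for a range of cluster counts,
# scored with the Davies-Bouldin index (lower is better).
scores = {}
for k in range(2, 9):
    labels = AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)

best_k = min(scores, key=scores.get)
print(best_k)  # the k with the lowest Davies-Bouldin score
```

On real scans the same loop would be run over the patients' regional metabolism vectors; the abstract reports that this criterion favored six clusters there.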
Background The analysis of health and medical data is crucial for improving diagnostic precision, treatment, and prevention, and machine learning techniques play a key role in this field. However, health data acquired from digital machines have high dimensionality, and not all of the acquired features are relevant to a particular disease. Primary Progressive Aphasia (PPA) is a neurodegenerative syndrome comprising several specific diseases, and it is a good model for machine learning analyses. In this work, we applied five feature selection algorithms to patient records to identify the set of relevant features from 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) images of the main areas affected by PPA. We also ran classification and clustering algorithms before and after the feature selection process to contrast both results with those obtained in a previous work. We aimed to find the best classifier and the most relevant features using the WEKA tool, in order to propose a framework for automated diagnostic support. The dataset contains 150 FDG-PET imaging studies from 91 patients with a clinical diagnosis of PPA, who were examined twice, and 28 controls. Our method comprises six stages: (i) feature extraction, (ii) expert knowledge supervision, (iii) classification, (iv) comparison of classification results for feature selection, (v) clustering after feature selection, and (vi) comparison of clustering results with those obtained in a previous work. Results Experimental tests confirmed the clustering results of a previous work. Although classification results for some algorithms are not decisive for reducing features precisely, Principal Component Analysis (PCA) exhibited similar or even better performance when compared to that obtained with all features.
Conclusions Although reducing the dimensionality does not mean a general improvement, the set of features is almost halved and the results are better or quite similar. Finally, it is interesting that these results expose a finer-grained classification of patients according to the neuroanatomy of their disease.
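The core comparison in this abstract, classifier performance before and after PCA roughly halves the feature space, can be sketched as follows. The synthetic dataset, the choice of a KNN classifier, and the dimensions are illustrative assumptions; the study worked with FDG-PET features in WEKA.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for high-dimensional imaging features
# (150 studies, 40 features, as an illustrative shape).
X, y = make_classification(n_samples=150, n_features=40, n_informative=20,
                           n_redundant=10, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)
acc_full = cross_val_score(clf, X, y, cv=5).mean()

# Almost halving the feature space with PCA, then re-evaluating.
pca_clf = make_pipeline(PCA(n_components=20, random_state=0),
                        KNeighborsClassifier(n_neighbors=5))
acc_pca = cross_val_score(pca_clf, X, y, cv=5).mean()

print(round(acc_full, 3), round(acc_pca, 3))
```

Whether the reduced representation matches or beats the full one depends on how much of the discarded variance is noise, which is exactly the trade-off the conclusion describes.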
Background Neuropsychological assessment is considered a valid tool in the diagnosis of neurodegenerative disorders. However, there is an important overlap in cognitive profiles between Alzheimer's disease (AD) and behavioural variant frontotemporal dementia (bvFTD), and its usefulness for differential diagnosis is uncertain. We aimed to develop machine learning-based models for diagnosis using cognitive tests. Methods Three hundred and twenty-nine participants (170 AD, 72 bvFTD, 87 healthy controls [HC]) were enrolled. Evolutionary algorithms, inspired by the process of natural selection, were applied for both mono-objective and multi-objective classification and feature selection. Classical algorithms (Naive Bayes and Support Vector Machines, among others) and a meta-model strategy were also used. Results Accuracies for the diagnosis of AD, bvFTD, and the differential diagnosis between them were higher than 84%. The algorithms were able to significantly reduce the number of tests and scores needed. The Free and Cued Selective Reminding Test, verbal fluency, and the Addenbrooke's Cognitive Examination were amongst the most meaningful tests. Conclusions Our study found high levels of diagnostic accuracy using exclusively neuropsychological tests, which supports the usefulness of cognitive assessment in diagnosis. Machine learning may have a role in improving test interpretation and selection.
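The multi-objective selection described above trades classification accuracy against the number of tests retained. The non-dominated (Pareto) filtering at the core of that idea can be illustrated with the sketch below; note that, for brevity, random feature subsets stand in for an evolved population, and the breast-cancer dataset stands in for neuropsychological scores, so this is not the study's evolutionary algorithm, only the dominance criterion it optimizes.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in tabular dataset; the study used neuropsychological test
# scores, which are not publicly bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n = X.shape[1]

# Score candidate feature subsets on the two competing objectives:
# maximize accuracy, minimize the number of tests/scores retained.
cands = []
for _ in range(40):
    mask = rng.random(n) < 0.3
    if not mask.any():
        continue
    acc = cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()
    cands.append((acc, int(mask.sum())))

def dominates(o, c):
    # o dominates c if it is at least as good on both objectives
    # and strictly better on at least one.
    return o[0] >= c[0] and o[1] <= c[1] and (o[0] > c[0] or o[1] < c[1])

# Keep only the non-dominated accuracy/size trade-offs.
pareto = [c for c in cands if not any(dominates(o, c) for o in cands)]
```

In a real multi-objective evolutionary algorithm (e.g. NSGA-II), this dominance test drives selection across generations instead of filtering one random sample.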
Artificial intelligence aids early diagnosis and the development of new treatments, which is key to slowing the progression of diseases that to date have no cure. Patients are evaluated through diagnostic techniques such as clinical assessments and neuroimaging, which provide high-dimensional data. In this work, a computational tool is presented that deals with the data provided by these clinical diagnostic techniques. It is a Python-based framework with a modular, fully extensible design. It integrates (i) data processing and management of missing values and outliers; (ii) an evolutionary feature engineering approach, developed as a Python package called PyWinEA, using mono-objective and multi-objective genetic algorithms (NSGA-II); (iii) a module for designing predictive models based on a wide range of machine learning algorithms; and (iv) a multiclass decision stage based on evolutionary grammars and Bayesian networks. Developed from an eXplainable Artificial Intelligence and open science perspective, this framework provides promising advances and opens the door to understanding neurodegenerative diseases from a data-centric point of view. In this work, we successfully evaluated the potential of the framework for early and automated diagnosis using neuroimages and neurocognitive assessments from patients with Alzheimer's disease (AD) and frontotemporal dementia (FTD).
Genetic algorithms have a proven capability to explore a large space of solutions and to deal with very large numbers of input features. We hypothesized that applying these algorithms to 18F-Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) may help in the diagnosis of Alzheimer's disease (AD) and Frontotemporal Dementia (FTD) by selecting the most meaningful features and automating diagnosis. We aimed to develop algorithms for the three main issues in diagnosis: discrimination between patients with AD or FTD and healthy controls (HC), differential diagnosis between behavioral FTD (bvFTD) and AD, and differential diagnosis between primary progressive aphasia (PPA) variants. Genetic algorithms, customized with K-Nearest Neighbor and Naive Bayes classifiers as the fitness function, were developed and compared with Principal Component Analysis (PCA). K-fold cross-validation was performed within the same sample, and the algorithms distinguishing AD from HC were externally validated with ADNI-3 samples. Our study supports the use of FDG-PET imaging, which achieved a very high accuracy rate in the diagnosis of AD, FTD, and related disorders. Genetic algorithms identified a minimal set of the most meaningful features, which may be relevant for automated assessment of brain FDG-PET images. Overall, our study contributes to the development of an automated and optimized diagnosis of neurodegenerative disorders using brain metabolism.
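A genetic algorithm that uses a classifier's cross-validated accuracy as its fitness function, as this abstract describes with K-Nearest Neighbor, can be sketched in a few dozen lines. All parameter values (population size, mutation rate, tournament selection) and the stand-in dataset are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in dataset; the study evolved feature masks over FDG-PET features.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_feat = X.shape[1]

def fitness(mask):
    """Cross-validated KNN accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    sub = X[:, mask.astype(bool)]
    return cross_val_score(KNeighborsClassifier(n_neighbors=5), sub, y, cv=3).mean()

# Tiny generational GA: binary masks, tournament selection,
# uniform crossover, and bit-flip mutation.
pop = rng.integers(0, 2, size=(20, n_feat))
for gen in range(10):
    fits = np.array([fitness(ind) for ind in pop])
    new_pop = []
    for _ in range(len(pop)):
        a, b = rng.integers(0, len(pop), 2)
        p1 = pop[a] if fits[a] >= fits[b] else pop[b]
        c, d = rng.integers(0, len(pop), 2)
        p2 = pop[c] if fits[c] >= fits[d] else pop[d]
        cross = rng.integers(0, 2, n_feat).astype(bool)   # uniform crossover
        child = np.where(cross, p1, p2)
        flip = rng.random(n_feat) < 0.05                  # bit-flip mutation
        new_pop.append(np.where(flip, 1 - child, child))
    pop = np.array(new_pop)

fits = np.array([fitness(ind) for ind in pop])
best = pop[fits.argmax()]
print(int(best.sum()), round(fits.max(), 3))  # features kept, best CV accuracy
```

To also press toward the minimal feature set the abstract mentions, a small penalty proportional to `mask.sum()` could be subtracted from the fitness.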
Evolutionary Algorithms (EAs) are routinely applied to solve a large set of optimization problems. Traditionally, their performance is analyzed in terms of fitness quality and computing time, and the effect of evolutionary operators on both metrics is routinely used to compare different versions of EAs. Nevertheless, scientists nowadays face the challenge of considering energy efficiency in addition to computational time, which requires studying the energy consumption of algorithms. This paper discusses the value of introducing power consumption as a new metric to analyze the performance of standard genetic programming (GP). Two well-studied benchmark problems are addressed on three different computing platforms, and two different approaches to measuring power consumption are tested. Analyzing the population size, the results demonstrate its influence on the energy consumed: a non-linear relationship was found between population size and the energy required to complete an experiment. The analysis was extended to the cache memory, and the results show exponential growth in the number of cache misses as the population size increases, which affects the energy consumed. This study shows that not only computing time and solution quality must be analyzed, but also the energy required to find a solution. In summary, when GP is applied, specific considerations on how to select parameter values must be taken into account if the goal is to obtain solutions while searching for energy efficiency. Although the study was performed using GP, we foresee that it could be similarly extended to other EAs.
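One common software-level way to measure the energy a run consumes, on Linux with a CPU exposing RAPL counters, is to read the powercap interface before and after the workload. The path below is a typical location but an assumption about the platform (it may be absent or unreadable without privileges), so the sketch falls back to time-only measurement; the paper itself compared two measurement approaches that are not reproduced here.

```python
import time
from pathlib import Path

# RAPL energy counter for CPU package 0 via the Linux powercap
# interface; availability and permissions vary by machine.
RAPL = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def measure(fn):
    """Run fn() and return (seconds, joules).

    joules is None when no RAPL counter is readable; the microjoule
    counter can also wrap around on long runs, which is not handled here.
    """
    try:
        e0 = int(RAPL.read_text())
    except OSError:
        e0 = None
    t0 = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - t0
    if e0 is None:
        return elapsed, None
    return elapsed, (int(RAPL.read_text()) - e0) / 1e6

# Example: measure a small CPU-bound workload standing in for a GP run.
secs, joules = measure(lambda: sum(i * i for i in range(200_000)))
```

Wrapping each GP experiment in `measure` for a range of population sizes is enough to plot energy against population size and look for the non-linear relationship the paper reports.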