Hippocampal volumetry is a critical biomarker of aging and dementia, and it is widely used as a predictor of cognitive performance; however, automated hippocampal segmentation methods are limited because the algorithms are (a) not publicly available, (b) subject to error in the presence of significant brain atrophy, cerebrovascular disease, and lesions, and/or (c) computationally expensive or dependent on parameter tuning. In this study, we trained a 3D convolutional neural network using 259 bilateral manually delineated segmentations collected from three studies, acquired at multiple sites on different scanners with variable protocols. Our training dataset consisted of elderly cases that are difficult to segment due to extensive atrophy, vascular disease, and lesions. Our algorithm, HippMapp3r, was validated against five other publicly available state-of-the-art techniques (HippoDeep, FreeSurfer, SBHV, volBrain, and FIRST). HippMapp3r outperformed the other techniques on all three metrics, generating an average Dice of 0.89 and a correlation coefficient of 0.95, and it was two orders of magnitude faster than some of the tested techniques. Further validation was performed on 200 subjects from two other disease populations (frontotemporal dementia and vascular cognitive impairment), highlighting our method's low outlier rate. We finally tested the methods on real and simulated "clinical adversarial" cases to study their robustness to corrupt, low-quality scans. The pipeline and models are available at https://hippmapp3r.readthedocs.io to facilitate the study of the hippocampus in large multi-site studies.
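For reference, the headline overlap metric above (Dice) can be sketched in a few lines of NumPy. This is an illustrative implementation of the standard Dice similarity coefficient between two binary masks, not the paper's evaluation code; the array names are hypothetical:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # convention: two empty masks are a perfect match
    return 2.0 * inter / denom if denom > 0 else 1.0

# toy example: a 16-voxel "segmentation" fully inside a 36-voxel "reference"
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True
print(round(dice(a, b), 3))  # 2*16 / (36+16) ≈ 0.615
```

A Dice of 0.89, as reported above, indicates substantially higher voxel-wise agreement with the manual delineations than this toy overlap.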
In this study, the DCS was a valid and reliable measure for evaluating catastrophic thinking in patients with dizziness, and catastrophizing was independently associated with dizziness-related disability. Future studies should investigate the influence of alleviating symptoms of catastrophizing on functional outcomes in patients with dizziness or imbalance; the results will help guide novel approaches to the clinical care of patients with chronic dizziness.
White matter hyperintensities (WMHs) are frequently observed on structural neuroimaging of elderly populations and are associated with cognitive decline. (Sandra E. Black and Maged Goubran are co-senior authors.)
The processing of brain diffusion tensor imaging (DTI) data for large cohort studies requires fully automatic pipelines to perform quality control (QC) and artifact/outlier removal procedures on the raw DTI data prior to calculation of diffusion parameters. In this study, three automatic DTI processing pipelines, each complying with the general ENIGMA framework, were designed by uniquely combining multiple image processing software tools. Different QC procedures based on the RESTORE algorithm, the DTIPrep protocol, and a combination of both methods were compared using simulated ground truth and artifact-containing DTI datasets modeling eddy current-induced distortions, various levels of motion artifacts, and thermal noise. Variability was also examined in 20 DTI datasets acquired in subjects with vascular cognitive impairment (VCI) from the multi-site Ontario Neurodegenerative Disease Research Initiative (ONDRI). The mean fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) were calculated in global brain grey matter (GM) and white matter (WM) regions. For the simulated DTI datasets, pipeline performance was evaluated as the normalized difference between the mean DTI metrics measured in GM and WM regions and the corresponding ground truth values. The performance of the proposed pipelines was very similar, particularly for FA measurements. However, the pipeline based on the RESTORE algorithm was the most accurate when analyzing the artifact-containing DTI datasets. The pipeline that combined the DTIPrep protocol and the RESTORE algorithm produced the lowest standard deviation in FA measurements in normal-appearing WM across subjects. We concluded that this pipeline was the most robust and is preferred for the automated analysis of multi-site brain DTI data.
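The four scalar metrics compared above (FA, MD, AD, RD) all derive from the three eigenvalues of the fitted diffusion tensor via standard textbook formulas. A minimal sketch (illustrative only; the eigenvalues shown are typical healthy white-matter values, not study data):

```python
import numpy as np

def dti_metrics(evals):
    """Scalar DTI metrics from the tensor eigenvalues (l1 >= l2 >= l3)."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0          # mean diffusivity: average eigenvalue
    ad = l1                            # axial diffusivity: principal eigenvalue
    rd = (l2 + l3) / 2.0               # radial diffusivity: mean of the minor two
    # fractional anisotropy: normalized eigenvalue dispersion, in [0, 1]
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5 * num / den) if den > 0 else 0.0
    return fa, md, ad, rd

# typical white-matter eigenvalues in mm^2/s give FA near 0.8
fa, md, ad, rd = dti_metrics((1.7e-3, 0.3e-3, 0.3e-3))
```

The evaluation measure described above would then be the normalized difference, e.g. `(fa_measured - fa_truth) / fa_truth`, averaged within the GM and WM regions.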
The Ontario Neurodegenerative Research Initiative (ONDRI) is a 3-year, multi-site prospective cohort study that has acquired comprehensive multiple assessment platform data, including 3T structural MRI, from neurodegenerative patients with Alzheimer's disease, mild cognitive impairment, Parkinson's disease, amyotrophic lateral sclerosis, frontotemporal dementia, and cerebrovascular disease. This heterogeneous cross-section of patients with complex neurodegenerative and neurovascular pathologies poses significant challenges for standard neuroimaging tools. To effectively quantify regional measures of normal and pathological brain tissue volumes, the ONDRI neuroimaging platform implemented a semi-automated MRI processing pipeline that was able to address many of the challenges resulting from this heterogeneity. The purpose of this paper is to serve as a reference and conceptual overview of the comprehensive neuroimaging pipeline used to generate regional brain tissue volumes and neurovascular marker data that will be made publicly available online.
Large-scale research studies combining magnetic resonance imaging data generated at multiple sites on multiple vendor platforms are becoming more commonplace. The Ontario Neurodegenerative Disease Research Initiative (ONDRI; http://ondri.ca/), a project funded by the Ontario Brain Institute (OBI), is a recently established province-wide natural history study that has recruited more than 500 participants from neurodegenerative disease groups including amyotrophic lateral sclerosis, frontotemporal dementia, Parkinson's disease, Alzheimer's disease, mild cognitive impairment, and cerebrovascular disease (previously referred to as the vascular cognitive impairment cohort). Because of its multi-site nature, all captured data must be standardized and meet minimum quality standards to reduce variability. The goal of the ONDRI imaging platform is to maximize data quality by implementing vendor-specific harmonized MR imaging protocols (consistent with the Canadian Dementia Imaging Protocol; http://www.cdip-pcid.ca/), monitoring protocol adherence, qualitatively assessing image quality, measuring signal-to-noise and contrast-to-noise ratios, monitoring system stability, and applying corrections based on the analysis of images from two different phantoms regularly acquired at each site. To maximize image quality, this work describes the use of various automatic pipelines and manual assessment steps, integrated within an established informatics and databasing platform, the Stroke Patient Recovery Research Database (SPReD), built on the Extensible Neuroimaging Archive Toolkit (XNAT) and contained within the Brain-CODE (Centre for Ontario Data Exploration) framework. The purpose of the current paper is to describe the steps undertaken by ONDRI to achieve this high standard of data integrity. Data have been successfully collected for the past 4 years, with the pipelines and assessments identifying deviations and allowing for timely interventions and assessment of image quality.
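The signal-to-noise and contrast-to-noise measurements mentioned above are commonly computed from region-of-interest statistics. A minimal sketch using one standard textbook definition (mean signal over the standard deviation of a background ROI) on synthetic intensities; this is an assumption for illustration, not ONDRI's exact QC procedure, and sites may use Rayleigh-corrected variants:

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """SNR: mean signal intensity over the noise ROI's standard deviation."""
    return signal_roi.mean() / noise_roi.std(ddof=1)

def cnr(roi_a, roi_b, noise_roi):
    """CNR: absolute mean-signal difference of two tissue ROIs over noise SD."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std(ddof=1)

# synthetic tissue and background intensity samples (hypothetical values)
rng = np.random.default_rng(0)
wm = rng.normal(800, 20, 1000)   # white-matter-like ROI
gm = rng.normal(600, 20, 1000)   # grey-matter-like ROI
bg = rng.normal(0, 10, 1000)     # background/noise ROI
print(snr(wm, bg), cnr(wm, gm, bg))
```

Tracking these values per site and per phantom session is what makes scanner drift and protocol deviations detectable over a multi-year acquisition window.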