Brain functional connectivity (FC) extracted from resting-state fMRI (RS-fMRI) has become a popular approach for diagnosing neurodegenerative diseases, including Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Current studies mainly construct FC networks between grey matter (GM) regions of the brain based on temporal co-variations of blood oxygenation level-dependent (BOLD) signals, which reflect synchronized neural activity. However, whether FC detected within the white matter (WM) can provide useful diagnostic information has rarely been investigated. Motivated by the recently proposed functional correlation tensors (FCT), which are computed from RS-fMRI and characterize the structured pattern of local FC in the WM, we propose a novel MCI classification method that exploits both the FC between GM regions and the FC within WM regions. Specifically, in the WM, tensor-based metrics (e.g., fractional anisotropy [FA], analogous to the metric computed from diffusion tensor imaging [DTI]) are first calculated from the FCT and then summarized along each major WM fiber tract connecting each pair of GM regions. This captures functional information in the WM, in a network structure similar to the GM FC network, using only the same RS-fMRI data. Moreover, a sliding-window approach partitions the voxel-wise BOLD signal into multiple short overlapping segments, from which both the FC and the FCT between each pair of brain regions are computed in the GM and WM, respectively. In this way, our method generates dynamic FC and dynamic FCT to better capture functional information in both GM and WM, and integrates them using our feature extraction, selection, and ensemble learning algorithms.
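The two computational ingredients above, sliding-window correlation for dynamic FC and an FA-style anisotropy metric from a local tensor, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the window length, stride, and synthetic signals are assumptions for demonstration.

```python
import numpy as np

def sliding_window_fc(bold, win_len=30, stride=5):
    """Dynamic FC from region-averaged BOLD signals.

    bold: (T, R) array with T time points and R brain regions.
    Returns a list of (R, R) Pearson correlation matrices, one per window.
    """
    T, R = bold.shape
    fc_series = []
    for start in range(0, T - win_len + 1, stride):
        segment = bold[start:start + win_len]        # (win_len, R) signal segment
        fc_series.append(np.corrcoef(segment, rowvar=False))  # (R, R) FC matrix
    return fc_series

def fractional_anisotropy(tensor):
    """FA of a 3x3 symmetric tensor (same formula as in DTI)."""
    lam = np.linalg.eigvalsh(tensor)                 # three eigenvalues
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

# Toy example: 200 time points, 10 regions of synthetic BOLD signal.
rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 10))
dfc = sliding_window_fc(bold, win_len=30, stride=5)
```

An isotropic tensor yields FA = 0 and a maximally anisotropic one yields FA = 1, so the metric summarizes how directionally structured the local FC is along a tract.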
The experimental results verify that the dynamic FCT provides valuable functional information in the WM; by combining it with the dynamic FC in the GM, the diagnostic accuracy for MCI subjects is significantly improved, even when using RS-fMRI data alone.
Recently, increasing attention has been drawn to medical image synthesis across modalities. In particular, synthesizing computed tomography (CT) images from T1-weighted magnetic resonance (MR) images is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we tackle the MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. Midway through this flow of feature maps, we compute a tentative CT synthesis and then embed this tentative result back into the feature maps. The embedding operation yields better feature maps, which are transformed further forward in the DECNN. After repeating this embedding procedure several times, the network synthesizes the final CT image at its end. We validated the proposed method on both brain and prostate imaging datasets, comparing it with state-of-the-art methods. Experimental results show that our DECNN (with repeated embedding operations) achieves superior performance, in terms of both the perceptual quality of the synthesized CT image and the run-time cost of synthesizing a CT image.
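The repeated embedding operation can be illustrated with a small NumPy sketch: a tentative CT estimate is produced midway, concatenated back onto the feature maps as an extra channel, and the augmented maps are transformed forward again. Random weights stand in for the learned convolutions, and the 1x1 convolutions, channel counts, and three repetitions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1x1(x, w):
    """1x1 convolution, i.e., channel mixing at each pixel.
    x: (C_in, H, W) feature maps; w: (C_out, C_in) weights."""
    return np.tensordot(w, x, axes=([1], [0]))

# Toy feature maps extracted from an MR patch (channels, height, width).
C, H, W = 8, 16, 16
feat = rng.standard_normal((C, H, W))

for step in range(3):                        # repeated embedding procedure
    w_syn = rng.standard_normal((1, feat.shape[0]))
    tentative_ct = conv1x1(feat, w_syn)      # (1, H, W) midway CT estimate
    embedded = np.concatenate([feat, tentative_ct], axis=0)  # embed it back
    w_mix = rng.standard_normal((C, embedded.shape[0]))
    feat = conv1x1(embedded, w_mix)          # transform forward again

final_ct = conv1x1(feat, rng.standard_normal((1, C)))  # (1, H, W) synthesis
```

The key structural point is the concatenation: the tentative synthesis re-enters the network as an extra input channel, letting later layers refine it.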
Purpose: Automatic brain image labeling is in high demand in the field of medical image analysis. Multiatlas-based approaches are widely used for their simplicity and robustness. The random forest technique is also recognized as an efficient labeling method, although it has several limitations. In this paper, the authors address those limitations by proposing a novel framework based on the hierarchical learning of atlas forests. Methods: The proposed framework trains a hierarchy of forests to better correlate voxels in the MR images with their corresponding labels, using two novel strategies for improving brain image labeling. First, instead of the conventional single level of random forests, the authors design a hierarchical structure that incorporates multiple levels of forests. Each atlas forest in the bottom level is trained on an individual atlas, and the bottom-level forests are then clustered according to their labeling capabilities. For each cluster, the authors retrain a new representative forest in the higher level, using all atlases associated with the lower-level atlas forests in that cluster, as well as the tentative label maps yielded by the lower level. This clustering-and-retraining procedure is repeated iteratively to yield a hierarchical structure of forests. Second, in the testing stage, the authors present a novel atlas forest selection method that determines an optimal set of atlas forests from the constructed hierarchy (by disabling the nonoptimal forests) for accurately labeling the test image. Results: The authors validate the proposed framework on public datasets, including the Alzheimer's Disease Neuroimaging Initiative, the Internet Brain Segmentation Repository, and LONI LPBA40, and compare the results with conventional approaches.
The experiments show that the two novel strategies significantly improve the labeling performance. Constructing more levels in the hierarchy improves labeling accuracy further, but at the cost of additional computation time. Conclusions: The authors have proposed a novel multiatlas-based framework for automatic and accurate labeling of brain anatomies in MR images. © 2016 American Association of Physicists in Medicine.
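The clustering step of the hierarchy, grouping bottom-level forests by how similarly they label, can be sketched with a toy agreement measure. Here each forest is represented only by its tentative binary label map on a validation image, and forests are grouped greedily by Dice overlap; the Dice threshold and greedy scheme are illustrative assumptions, not the paper's exact clustering criterion.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary label maps."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def cluster_forests(label_maps, threshold=0.8):
    """Greedily group forests whose tentative label maps agree (Dice >= threshold).
    Each group would then be retrained as one representative higher-level forest."""
    groups = []
    for idx, m in enumerate(label_maps):
        for g in groups:
            if dice(m, label_maps[g[0]]) >= threshold:
                g.append(idx)
                break
        else:
            groups.append([idx])
    return groups

# Toy: four forests, two pairs producing mutually exclusive label maps.
rng = np.random.default_rng(2)
base_a = rng.random((20, 20)) > 0.5
base_b = ~base_a
maps = [base_a, base_a.copy(), base_b, base_b.copy()]
groups = cluster_forests(maps, threshold=0.8)
```

Forests 0 and 1 produce identical maps (Dice 1.0) and land in one group, while 2 and 3, whose maps are disjoint from the first pair, form a second group; each group would then yield one retrained representative forest at the next level.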
Automatic labeling of the hippocampus in brain MR images is in high demand, as it plays an important role in imaging-based brain studies. However, accurate labeling of the hippocampus remains challenging, partly due to the ambiguous intensity boundary between the hippocampus and surrounding anatomies. In this paper, we propose a concatenated set of spatially-localized random forests for multi-atlas-based hippocampus labeling of adult/infant brain MR images. Our contribution is two-fold. First, each forest classifier is trained to label only a specific sub-region of the hippocampus, thus enhancing labeling accuracy. Second, we propose a novel forest selection strategy such that each voxel in the test image automatically selects a set of optimal forests and then dynamically fuses their respective outputs to determine the final label. Furthermore, we enhance the spatially-localized random forests with the auto-context strategy, so that our learning framework gradually refines the tentative labeling result for better performance. Experiments on large datasets of both adult and infant brain MR images show that our method scales well, segmenting the hippocampus accurately and efficiently.
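The per-voxel selection and fusion idea can be sketched in one dimension: each spatially-localized forest covers a sub-region, every voxel selects the forests covering it, and their outputs are fused by averaging. The 1-D layout, constant per-forest probabilities, and averaging fusion are simplifying assumptions for illustration.

```python
import numpy as np

# Toy 1-D "image" of 10 voxels; each spatially-localized forest covers one
# sub-region and outputs a foreground probability for the voxels it covers
# (here: a constant stands in for the forest's per-voxel prediction).
regions = [(0, 6), (4, 10)]              # (start, end) voxel range per forest
probs = [0.9, 0.2]                       # stand-in per-forest predictions

fused = np.zeros(10)
counts = np.zeros(10)
for (lo, hi), p in zip(regions, probs):
    fused[lo:hi] += p                    # each voxel selects covering forests
    counts[lo:hi] += 1
fused /= np.maximum(counts, 1)           # dynamically fuse by averaging

labels = (fused > 0.5).astype(int)       # final voxel-wise label
```

Voxels covered by both forests (indices 4-5) receive the averaged probability 0.55, showing how overlapping sub-region forests are reconciled per voxel.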
CoronaVac is an inactivated whole-virion SARS-CoV-2 vaccine that, as of 26 November 2021, had been approved for emergency use in 43 countries. However, the long-term immune persistence induced by CoronaVac is still unknown. Here, we report the persistence of antibody and cellular responses within 12 months after two doses of CoronaVac. Such data are crucial to inform ongoing and future vaccination strategies against COVID-19.
Multi-atlas-based methods are commonly used for MR brain image labeling, alleviating the burdensome and time-consuming task of manual labeling in neuroimaging analysis studies. Traditionally, multi-atlas-based methods first register multiple atlases to the target image and then propagate the labels from the labeled atlases to the unlabeled target image. However, the registration step involves non-rigid alignment, which is often time-consuming and may lack accuracy. Alternatively, patch-based methods have shown promise in relaxing the demand for accurate registration, but they often require hand-crafted features. Recently, deep learning techniques have proven effective for image labeling by automatically learning comprehensive appearance features from training images. In this paper, we propose a multi-atlas guided fully convolutional network (MA-FCN) for automatic image labeling, which aims to further improve labeling performance with the aid of prior knowledge from the training atlases. Specifically, we train our MA-FCN model in a patch-based manner, where the input data consists of not only a training image patch but also a set of its neighboring (i.e., most
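The abstract is cut off before the full input specification, but a common way to realize such atlas guidance in a patch-based FCN is to stack the target patch with registered atlas intensity and label patches along the channel axis. The patch size, atlas count, and channel layout below are assumptions for illustration, not the confirmed MA-FCN design.

```python
import numpy as np

rng = np.random.default_rng(3)
P = 24                                               # patch size (illustrative)

target_patch = rng.standard_normal((1, P, P))        # intensity patch, target image
atlas_patches = rng.standard_normal((3, P, P))       # patches from 3 aligned atlases
atlas_labels = rng.integers(0, 2, (3, P, P)).astype(float)  # their label patches

# Concatenate along the channel axis to form the multi-channel network input,
# so the FCN sees target appearance plus atlas appearance and label priors.
net_input = np.concatenate([target_patch, atlas_patches, atlas_labels], axis=0)
```

The network then receives 1 + 3 + 3 = 7 input channels per patch, letting it learn how much to trust each atlas's prior at every voxel.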