Reconstruction of topologically correct and accurate cortical surfaces from infant MR images is of great importance in neuroimaging studies mapping early brain development. However, due to rapid growth and ongoing myelination, infant MR images exhibit extremely low tissue contrast and dynamic appearance patterns, leading to many more topological errors (holes and handles) in cortical surfaces derived from tissue segmentation, compared to adult MR images, which typically have good tissue contrast. Existing methods for topological correction rely either on minimal-correction criteria or on ad hoc rules based on image intensity priors, and thus often produce erroneous corrections and large anatomical errors in the reconstructed infant cortical surfaces. To address these issues, we propose to correct topological errors by learning from anatomical references, i.e., manually corrected images. Specifically, our method first locates candidate voxels in topologically defective regions using a topology-preserving level set method. Then, leveraging the rich information of corresponding patches from the reference images, we build region-specific dictionaries from the anatomical references and infer the correct labels of candidate voxels via sparse representation. Notably, we further integrate these two steps into an iterative framework to enable gradual correction of large topological errors, which occur frequently in infant images and cannot be completely corrected by one-shot sparse representation. Extensive experiments on infant cortical surfaces demonstrate that our method not only effectively corrects topological defects but also achieves better anatomical consistency than state-of-the-art methods.
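To make the dictionary-based label inference concrete, the following is a minimal sketch of the sparse-representation step, not the authors' implementation: a candidate voxel's patch is sparse-coded over a dictionary of reference patches (here via a simple orthogonal matching pursuit), and its label is inferred by a weighted vote over the sparse coefficients. All function names, the OMP solver, and the voting rule are illustrative assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Greedy Orthogonal Matching Pursuit: select up to k atoms of D to code x.
    Illustrative solver; any sparse coder could be substituted here."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the selected atoms, then update the residual.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

def infer_label(patch, ref_patches, ref_labels, sparsity=5):
    """Infer a candidate voxel's tissue label from labeled reference patches.
    ref_patches: (patch_dim, n_refs) dictionary built from anatomical references.
    ref_labels:  (n_refs,) tissue label of each reference patch's center voxel."""
    # Normalize dictionary columns so correlations are comparable.
    D = ref_patches / (np.linalg.norm(ref_patches, axis=0) + 1e-12)
    a = omp(D, patch, sparsity)
    w = np.abs(a)
    # Weighted vote: accumulate sparse weights per tissue label (assumed rule).
    score = {lab: w[ref_labels == lab].sum() for lab in np.unique(ref_labels)}
    return max(score, key=score.get)
```

In the iterative framework described above, such an inference step would be re-run after each topology-preserving update, so that large defects shrink gradually across iterations.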
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions, and it has been widely used in real-world applications such as text mining, visual classification, and recommender systems. However, most sophisticated machine learning approaches incur huge time costs when operating on large-scale data. This issue motivates Large-scale Machine Learning (LML), which aims to learn patterns from big data efficiently while maintaining comparable performance. In this paper, we offer a systematic survey of existing LML methods to provide a blueprint for future developments in this area. We first divide LML methods according to how they improve scalability: 1) model simplification, which reduces computational complexity; 2) optimization approximation, which improves computational efficiency; and 3) computation parallelism, which increases computational capability. We then categorize the methods in each perspective according to their targeted scenarios and introduce representative methods in line with their intrinsic strategies. Lastly, we analyze their limitations and discuss potential directions, as well as open issues that are promising to address in the future.
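As an illustration of the "optimization approximation" perspective mentioned above (not an example taken from the survey itself), mini-batch stochastic gradient descent is a canonical case: the full gradient over all n samples is replaced by a cheap mini-batch estimate, making the per-step cost independent of n. The function name and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.1, batch=32, epochs=50, seed=0):
    """Least-squares regression via mini-batch SGD (illustrative sketch).
    Each step approximates the full gradient of 0.5*||Xw - y||^2 using only
    `batch` samples, so the cost per update does not grow with the dataset."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)  # reshuffle each epoch
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            # Mini-batch estimate of the full-data gradient.
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w
```

Model simplification and computation parallelism trade off differently: the former changes the model itself (e.g., low-rank or sampled approximations), while the latter keeps model and optimizer intact and distributes the computation across hardware.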