To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This “deep learning” approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
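Scoring agreement between a predicted segmentation and a consensus annotation is commonly done with pairwise-counting metrics. The sketch below computes a plain Rand index over two label maps; this is an illustrative metric choice, not necessarily the challenge's exact scoring formula, and the function name is ours.

```python
import numpy as np
from collections import Counter

def rand_index(seg_a, seg_b):
    """Rand index between two segmentations (label maps): the fraction
    of pixel pairs on which they agree about 'same segment' versus
    'different segments'. 1.0 means perfect agreement."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size

    def same_pairs(labels):
        # number of pixel pairs sharing a label
        return sum(c * (c - 1) // 2 for c in Counter(labels).values())

    # pairs kept together in both maps
    joint = same_pairs(list(zip(a.tolist(), b.tolist())))
    total = n * (n - 1) // 2
    # agreements = pairs joined in both + pairs separated in both
    agree = joint + (total - same_pairs(a.tolist())
                     - same_pairs(b.tolist()) + joint)
    return agree / total
```

Note that a plain per-pair count like this is sensitive to how boundary pixels are labeled, which is one face of the border-width robustness problem the abstract raises.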
Figure 1: Volume renderings of a 64³ synthetic volume with four different curvature measures. Left to right: first principal curvature κ₁, second principal curvature κ₂, mean curvature (κ₁ + κ₂)/2, and Gaussian curvature κ₁κ₂. Magenta indicates negative curvature, green indicates positive. Iso-curvature contours are in black, except for zero curvature in blue.

Abstract: Direct volume rendering of scalar fields uses a transfer function to map locally measured data properties to opacities and colors. The domain of the transfer function is typically the one-dimensional space of scalar data values. This paper advances the use of curvature information in multi-dimensional transfer functions, with a methodology for computing high-quality curvature measurements. The proposed methodology combines an implicit formulation of curvature with convolution-based reconstruction of the field. We give concrete guidelines for implementing the methodology, and illustrate the importance of choosing accurate filters for computing derivatives with convolution. Curvature-based transfer functions are shown to extend the expressivity and utility of volume rendering through contributions in three different application areas: nonphotorealistic volume rendering, surface smoothing via anisotropic diffusion, and visualization of isosurface uncertainty.
Circuitry mapping of metazoan neural systems is difficult because canonical neural regions (regions containing one or more copies of all components) are large, regional borders are uncertain, neuronal diversity is high, and potential network topologies are so numerous that only anatomical ground truth can resolve them. Complete mapping of a specific network requires synaptic resolution, canonical region coverage, and robust neuronal classification. Though transmission electron microscopy (TEM) remains the optimal tool for network mapping, the process of building large serial section TEM (ssTEM) image volumes is rendered difficult by the need to precisely mosaic distorted image tiles and register distorted mosaics. Moreover, most molecular neuronal class markers are poorly compatible with optimal TEM imaging. Our objective was to build a complete framework for ultrastructural circuitry mapping. This framework combines strong TEM-compliant small molecule profiling with automated image tile mosaicking, automated slice-to-slice image registration, and gigabyte-scale image browsing for volume annotation. Specifically we show how ultrathin molecular profiling datasets and their resultant classification maps can be embedded into ssTEM datasets and how scripted acquisition tools (SerialEM), mosaicking and registration (ir-tools), and large slice viewers (MosaicBuilder, Viking) can be used to manage terabyte-scale volumes. These methods enable large-scale connectivity analyses of new and legacy data. In well-posed tasks (e.g., complete network mapping in retina), terabyte-scale image volumes that previously would require decades of assembly can now be completed in months. Perhaps more importantly, the fusion of molecular profiling, image acquisition by SerialEM, ir-tools volume assembly, and data viewers/annotators also allows ssTEM to be used as a prospective tool for discovery in nonneural systems and a practical screening methodology for neurogenetics.
Finally, this framework provides a mechanism for parallelization of ssTEM imaging, volume assembly, and data analysis across an international user base, enhancing the productivity of a large cohort of electron microscopists.
Due to our familiarity with how fluids move and interact, as well as their complexity, plausible animation of fluids remains a challenging problem. We present a particle interaction method for simulating fluids. The underlying equations of fluid motion are discretized using moving particles and their interactions. The method allows simulation and modeling of mixing fluids with different physical properties, fluid interactions with stationary objects, and fluids that exhibit significant interface breakup and fragmentation. The gridless computational method is suited for medium scale problems since computational elements exist only where needed. The method fits well into the current user interaction paradigm and allows easy user control over the desired fluid motion.
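The abstract does not give the discretization, but gridless particle methods of this kind typically estimate field quantities as smoothed sums over neighboring particles, in the style of smoothed particle hydrodynamics (SPH). A hedged sketch of the density step, using the common "poly6" smoothing kernel as an assumed choice (the paper's actual kernels are not stated in the abstract):

```python
import numpy as np

def poly6(r, h):
    """Poly6 smoothing kernel, a common SPH choice with support radius h.
    This specific kernel is an assumption for illustration."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = 315.0 / (64.0 * np.pi * h**9) * (h**2 - r[mask]**2)**3
    return w

def densities(pos, mass, h):
    """SPH density at each particle: rho_i = sum_j m_j W(|x_i - x_j|, h).
    pos: (n, 3) particle positions; mass: (n,) particle masses."""
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (mass[None, :] * poly6(r, h)).sum(axis=1)
```

Pressure and viscosity forces are then built from similar kernel-weighted sums, which is what makes the method "computational elements only where needed": there is no background grid, only particles and their neighborhoods.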
Dynamic contrast-enhanced (DCE) MRI is a powerful technique to probe an area of interest in the body. Here a temporally constrained reconstruction (TCR) technique that requires less k-space data over time to obtain good-quality reconstructed images is proposed. This approach can be used to improve the spatial or temporal resolution, or increase the coverage of the object of interest. The method jointly reconstructs the space-time data iteratively with a temporal constraint in order to resolve aliasing. The method was implemented and its feasibility tested on DCE myocardial perfusion data with little or no motion. The results obtained from sparse k-space data using the TCR method were compared with results obtained with a sliding-window (SW) method and from full data using the standard inverse Fourier transform (IFT) reconstruction. Acceleration factors of 5 (R = 5) were achieved without a significant loss in image quality. Mean improvements of 28 ± 4% in the signal-to-noise ratio (SNR) and 14 ± 4% in the contrast-to-noise ratio (CNR) were observed in the images reconstructed using the TCR method on sparse data (R = 5) compared to the standard IFT reconstructions from full data for the perfusion datasets. Dynamic contrast-enhanced (DCE) MRI is used to track changes over time in an object of interest by acquiring a series of images. A contrast agent is injected and the data are acquired in k-space for each time frame. Rapid acquisitions are required to track the quickly changing contrast in the object. One application of DCE-MRI is myocardial perfusion, which is an important tool for assessing coronary artery disease. In DCE-MRI for myocardial perfusion, contrast agents such as gadolinium (Gd)-DTPA are injected and images are acquired using ECG-gated sequences to track the uptake of the contrast agent by the myocardium at high temporal resolution. To reduce the data acquisition time of dynamic MRI, a number of techniques have been developed.
These methods acquire a fraction of k-space in each time frame and reconstruct images based on a priori information about the dynamic data. Methods such as keyhole imaging (1,2) and reduced-encoding MR imaging with generalized-series reconstruction (RIGR) (3–5) assume that in a dynamic sequence only the low-frequency data change and the high-frequency data remain static. Thus full data can be acquired for a single frame in the sequence and only low-frequency data need to be acquired for the remaining frames. This assumption of static high frequencies is not always accurate. View-sharing-type methods (6–9) assume that the dynamics in an image sequence change only by a small amount from frame to frame. Thus only a fraction of data can be acquired for each frame and the missing data can be obtained from the adjacent frames. Such data-sharing is equivalent to linear interpolation in time and can reduce temporal resolution. More recently, Madore et al. (10) proposed the unaliasing by Fourier-encoding the overlaps using the temporal dimension (UNFOLD) method for cardiac cine imaging...
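The temporally constrained reconstruction idea described above can be sketched in a toy 1D-plus-time setting: iteratively minimize a data-fidelity term on the acquired k-space samples plus a penalty on temporal roughness. This is a sketch of the general idea under our own simplifications (gradient descent, circular time boundary), not the paper's exact algorithm or parameters.

```python
import numpy as np

def tcr_reconstruct(kdata, mask, lam=0.1, n_iter=200, step=0.5):
    """Toy temporally constrained reconstruction for 1D+time data.
    Gradient descent on ||F_u m - d||^2 + lam * (temporal roughness).
    kdata: (nt, nx) k-space with unsampled entries zeroed;
    mask:  (nt, nx) boolean sampling pattern."""
    m = np.fft.ifft(kdata * mask, axis=1)  # zero-filled initial guess
    for _ in range(n_iter):
        # data-fidelity gradient: back-project the k-space residual
        resid = (np.fft.fft(m, axis=1) - kdata) * mask
        g_data = np.fft.ifft(resid, axis=1)
        # temporal-smoothness gradient: negative second difference in
        # time (circular boundary, a simplification for this sketch)
        g_t = -(np.roll(m, -1, axis=0) - 2 * m + np.roll(m, 1, axis=0))
        m = m - step * (g_data + lam * g_t)
    return m
```

With full sampling and lam = 0 this reduces to the standard IFT reconstruction; the interesting regime is sparse masks that vary from frame to frame, where the temporal term resolves the aliasing that a sliding window would instead blur in time.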
In this paper we consider the problem of semi-supervised learning with deep Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated by the observation that unlabeled data is cheap and can be used to improve the accuracy of classifiers. In this paper we propose an unsupervised regularization term that explicitly forces the classifier's predictions for multiple classes to be mutually exclusive and effectively guides the decision boundary to lie in the low-density space between the manifolds corresponding to different classes of data. Our proposed approach is general and can be used with any backpropagation-based learning method. We show through different experiments that our method can improve the object recognition performance of ConvNets using unlabeled data.
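One simple surrogate for such a mutual-exclusivity term, shown purely for illustration (this is not necessarily the paper's exact regularizer), penalizes class-probability vectors that are far from one-hot: for a row p that sums to 1, the quantity (1 - Σᵢ pᵢ²)/2 equals Σᵢ<ⱼ pᵢpⱼ, which is zero exactly when one class has all the mass.

```python
import numpy as np

def mutual_exclusivity_penalty(p):
    """Illustrative mutual-exclusivity surrogate on unlabeled examples.
    p: (n, k) array of per-example class probabilities (rows sum to 1).
    Returns 0 iff every row is one-hot; maximal for uniform rows."""
    # (1 - sum_i p_i^2) / 2 == sum over class pairs of p_i * p_j
    return np.mean((1.0 - np.sum(p**2, axis=1)) / 2.0)
```

Added to the supervised loss with a weight, a term of this shape is differentiable everywhere, so it drops straight into any backpropagation-based trainer, which matches the generality claim in the abstract.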
We present an in-depth analysis of a variation of the nonlocal means (NLM) image denoising algorithm that uses principal component analysis (PCA) to achieve a higher accuracy while reducing computational load. Image neighborhood vectors are first projected onto a lower dimensional subspace using PCA. The dimensionality of this subspace is chosen automatically using parallel analysis. Consequently, neighborhood similarity weights for denoising are computed using distances in this subspace rather than the full space. The resulting algorithm is referred to as principal neighborhood dictionary (PND) nonlocal means. We investigate PND's accuracy as a function of the dimensionality of the projection subspace and demonstrate that denoising accuracy peaks at a relatively low number of dimensions. The accuracy of NLM and PND are also examined with respect to the choice of image neighborhood and search window sizes. Finally, we present a quantitative and qualitative comparison of PND versus NLM and another image neighborhood PCA-based state-of-the-art image denoising algorithm.
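The core of the PND idea as described above, projecting neighborhood vectors with PCA and computing NLM similarity weights in the subspace, can be sketched briefly. This is a hedged sketch under our own simplifications (a single reference patch, PCA via SVD, no parallel analysis for choosing the dimensionality, and the function name is ours):

```python
import numpy as np

def pnd_weights(patches, d, h):
    """NLM similarity weights computed in a PCA subspace (sketch).
    patches: (n, p) flattened image neighborhoods; d: subspace
    dimensionality; h: weight decay parameter. Returns normalized
    weights of every patch relative to patch 0."""
    X = patches - patches.mean(axis=0)
    # top-d principal directions via SVD of the centered patch matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Y = X @ Vt[:d].T                     # (n, d) subspace coordinates
    # squared distances to the reference patch, in the subspace
    dist2 = np.sum((Y - Y[0])**2, axis=1)
    w = np.exp(-dist2 / (h * h))
    return w / w.sum()
```

Because distances are taken in d dimensions rather than the full p-dimensional patch space, the per-weight cost drops from O(p) to O(d), which is the computational saving the abstract refers to; the accuracy peak at low d is the more surprising empirical finding.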
This paper describes a method for building efficient representations of large sets of brain images. Our hypothesis is that the space spanned by a set of brain images can be captured, to a close approximation, by a low-dimensional, nonlinear manifold. This paper presents a method to learn such a low-dimensional manifold from a given data set. The manifold model is generative: brain images can be constructed from a relatively small set of parameters, and new brain images can be projected onto the manifold. This allows us to quantify the geometric accuracy of the manifold approximation in terms of projection distance. The manifold coordinates induce a Euclidean coordinate system on the population data that can be used to perform statistical analysis of the population. We evaluate the proposed method on the OASIS and ADNI brain databases of head MR images in two ways. First, the geometric fit of the method is qualitatively and quantitatively evaluated. Second, the ability of the brain manifold model to explain clinical measures is analyzed by linear regression in the manifold coordinate space. The regression models show that the manifold model is a statistically significant descriptor of clinical parameters.
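The projection-distance notion of geometric fit is easy to demonstrate with a linear stand-in for the paper's nonlinear manifold: fit a d-dimensional subspace by PCA and measure each sample's distance to its projection. This is an illustrative simplification only; the paper's model is nonlinear, and the function name is ours.

```python
import numpy as np

def projection_distance(X, d):
    """Distance of each sample to its projection onto a d-dimensional
    PCA subspace (a linear stand-in for a learned manifold).
    X: (n, p) data matrix, one flattened image per row."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    B = Vt[:d]                 # (d, p) orthonormal basis of the subspace
    proj = Xc @ B.T @ B        # project and map back to ambient space
    return np.linalg.norm(Xc - proj, axis=1)
```

Small residuals indicate that the low-dimensional model explains the data well; the coordinates Xc @ B.T play the role of the manifold coordinates used for the regression analysis against clinical measures.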