To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This “deep learning” approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
In this paper we present a deep neural network topology that incorporates a simple-to-implement transformation-invariant pooling operator (TI-POOLING). This operator can efficiently handle prior knowledge of nuisance variations in the data, such as rotation or scale changes. Most current methods use dataset augmentation to address this issue, but this requires a larger number of model parameters and more training data, and results in significantly longer training and a greater chance of under- or overfitting. The main reason for these drawbacks is that the learned model must capture adequate features for all possible transformations of the input. We instead formulate features in convolutional neural networks to be transformation-invariant. We achieve this by using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator to their outputs before the fully-connected layers. We show that this topology internally finds the optimal "canonical" instance of the input image for training and therefore limits the redundancy in learned features. This more efficient use of training data yields better performance on popular benchmark datasets, with fewer parameters, than standard convolutional neural networks with dataset augmentation and than other baselines.
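The TI-POOLING idea described above can be sketched in a few lines: push every transformed copy of the input through the same (siamese) branch and take the element-wise maximum of the resulting features. The tiny conv-ReLU-average branch below is a stand-in of my own, not the paper's network; only the pooling scheme follows the abstract.

```python
import numpy as np

def conv_relu_gap(x, w):
    # Stand-in for the shared ("siamese") convolutional branch:
    # valid cross-correlation, ReLU, global average pooling.
    # A real TI-POOLING network would use a full CNN here.
    kh, kw = w.shape
    h, wd = x.shape
    out = np.empty((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * w).sum()
    return np.maximum(out, 0.0).mean()

def ti_pool(x, w, transforms):
    # TI-POOLING: apply every transformation in the considered set,
    # run each transformed input through the *same* branch, and take
    # the element-wise maximum over the branch outputs.
    return max(conv_relu_gap(t(x), w) for t in transforms)

# Rotations by multiples of 90 degrees form a group, so rotating the
# input only permutes the branch outputs and the pooled feature is
# exactly invariant to any rotation in the set.
rot90_set = [lambda x, k=k: np.rot90(x, k) for k in range(4)]
```

Because the transformation set is closed under composition, the max effectively selects the "canonical" instance of the input, which is the intuition the abstract appeals to.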
The classification of sleep stages is the first and an important step in the quantitative analysis of polysomnographic recordings. Sleep stage scoring relies heavily on visual pattern recognition by a human expert and is time-consuming and subjective; thus, there is a need for automatic classification. In this work we developed machine learning algorithms for sleep classification: random forest (RF) classification based on features, and artificial neural networks (ANNs) working with both features and raw data. We tested our methods on healthy subjects and on patients. Most algorithms yielded good results, comparable to human interrater agreement. Our study revealed that deep neural networks (DNNs) working with raw data performed better than feature-based methods. We also demonstrated that taking the local temporal structure of sleep into account a priori is important. Our results demonstrate the utility of neural network architectures for the classification of sleep.
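The point about local temporal structure can be illustrated with a minimal sketch: one simple way to expose temporal context to a per-epoch classifier is to stack each epoch's feature vector with those of its neighbors. The window size and the notion of an "epoch feature" here are hypothetical, not the paper's exact setup.

```python
import numpy as np

def with_context(features, k=1):
    # features: (n_epochs, n_features) array, one row per sleep epoch.
    # Returns (n_epochs, n_features * (2k + 1)): each epoch's features
    # concatenated with those of its k neighbors on each side, with
    # edge replication at the recording boundaries.
    n, _ = features.shape
    padded = np.pad(features, ((k, k), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + n] for i in range(2 * k + 1)])
```

Any per-epoch classifier (e.g. a random forest) trained on the stacked rows then sees local temporal structure without any change to the model itself.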
Abstract. Connectomics based on high-resolution ssTEM imagery requires reconstruction of neuron geometry from histological slides. We present an approach for automatic membrane segmentation in anisotropic stacks of electron microscopy brain tissue sections. Ambiguities in the neuronal segmentation of a section are resolved using context from the neighboring sections. We find a global dense correspondence between sections with the SIFT Flow algorithm, evaluate the features of corresponding pixels, and use them to perform the segmentation. Our method is 3.6% and 6.4% more accurate, under two different accuracy metrics, than the same algorithm without context from other sections.
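The cross-section context step can be sketched as follows: given a dense correspondence field (standing in for SIFT Flow output), pull each pixel's counterpart from the neighboring section and append it to the pixel's own feature vector. The integer flow field and the intensity-only features are illustrative assumptions; the paper's actual features are not specified in the abstract.

```python
import numpy as np

def warp_by_flow(section, flow):
    # Pull pixel values from a neighboring section along a dense
    # correspondence field. flow has shape (h, w, 2) holding integer
    # (dy, dx) offsets, a hypothetical stand-in for SIFT Flow output.
    h, w = section.shape
    ys, xs = np.mgrid[0:h, 0:w]
    yy = np.clip(ys + flow[..., 0], 0, h - 1)
    xx = np.clip(xs + flow[..., 1], 0, w - 1)
    return section[yy, xx]

def context_features(center, neighbor, flow):
    # Per-pixel feature vector: the pixel's own intensity plus the
    # intensity of its corresponding pixel in the adjacent section.
    aligned = warp_by_flow(neighbor, flow)
    return np.stack([center, aligned], axis=-1)
```

A boundary classifier trained on these stacked features then sees evidence from the adjacent section, which is how the ambiguity resolution described above could operate.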
A variety of instabilities has been found in the saccadic system of humans. Some correspond to clinical conditions, whereas others are inherent in the normal saccadic system. How can these instabilities arise within the mechanism of normal saccadic eye movements? A physiologically based model of the saccadic system predicts that horizontal saccadic oscillations will occur with excessive mutual inhibition between the left and right burst cells and with underaction of the pause cells. The amplitudes and frequencies of the oscillations ranged from 0 to 6 degrees and from 6 to 20 cycles per second, respectively. Applying stability analysis techniques to the model reveals that the development of the oscillations can be explained by the Hopf bifurcation mechanism. Future development of this approach will involve classifying pathological instabilities of the saccadic system according to the bifurcation involved in their generation.
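The mechanism invoked above, oscillation from excessive mutual inhibition, can be illustrated with a generic half-center oscillator of the Matsuoka type: two rectified units with reciprocal inhibition and slow self-adaptation. This is a textbook sketch, not the paper's physiologically based saccadic model; all parameters below are assumptions chosen to satisfy the standard oscillation condition 1 + tau/T < a < 1 + b.

```python
import numpy as np

def half_center(a=2.5, b=2.5, tau=1.0, T=12.0, s=1.0,
                dt=0.01, steps=30000):
    # Two units with reciprocal inhibition (weight a) and slow
    # self-adaptation (weight b, time constant T), driven by tonic
    # input s. With strong enough mutual inhibition the symmetric
    # steady state is unstable and the units fire in alternation.
    x = np.array([0.1, 0.0])   # membrane states (symmetry broken)
    v = np.zeros(2)            # adaptation states
    trace = np.empty((steps, 2))
    for n in range(steps):
        y = np.maximum(x, 0.0)                       # rectified rates
        x = x + (dt / tau) * (s - x - a * y[::-1] - b * v)
        v = v + (dt / T) * (y - v)
        trace[n] = np.maximum(x, 0.0)
    return trace
```

Weakening the inhibition (lowering a below 1 + tau/T) restores a stable steady state, which is the qualitative analogue of the model's prediction that oscillations appear only with excessive mutual inhibition between the burst cells.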