The integration of sensorimotor signals to internally estimate self-movement is critical for spatial perception and motor control. However, which neural circuits accurately track body motion and how these circuits control movement remain unknown. We found that a population of Drosophila neurons that were sensitive to visual flow patterns typically generated during locomotion, the horizontal system (HS) cells, encoded unambiguous quantitative information about the fly's walking behavior independently of vision. Angular and translational velocity signals were integrated with a behavioral-state signal and generated direction-selective and speed-sensitive graded changes in the membrane potential of these non-spiking cells. The nonvisual direction selectivity of HS cells cooperated with their visual selectivity only when the visual input matched that expected from the fly's movements, thereby revealing a circuit for internally monitoring voluntary walking. Furthermore, given that HS cells promoted leg-based turning, the activity of these cells could be used to control forward walking.
Videos of animal behavior are used to quantify researcher-defined behaviors-of-interest to study neural function, gene mutations, and pharmacological therapies. Behaviors-of-interest are often scored manually, which is time-consuming, limited to a few behaviors, and variable across researchers. We created DeepEthogram: software that uses supervised machine learning to convert raw video pixels into an ethogram, the behaviors-of-interest present in each video frame. DeepEthogram is designed to be general-purpose and applicable across species, behaviors, and video-recording hardware. It uses convolutional neural networks to compute motion, extract features from motion and images, and classify features into behaviors. Behaviors are classified with above 90% accuracy on single frames in videos of mice and flies, matching expert-level human performance. DeepEthogram accurately predicts rare behaviors, requires little training data, and generalizes across subjects. A graphical interface allows beginning-to-end analysis without end-user programming. DeepEthogram's rapid, automatic, and reproducible labeling of researcher-defined behaviors-of-interest may accelerate and enhance supervised behavior analysis.
Researchers commonly acquire videos of animal behavior and quantify the prevalence of behaviors of interest to study nervous system function, the effects of gene mutations, and the efficacy of pharmacological therapies. This analysis is typically performed manually and is therefore immensely time consuming, often limited to a small number of behaviors, and variable across researchers. Here, we created DeepEthogram: software that takes raw pixel values of videos as input and uses machine learning to output an ethogram, the set of user-defined behaviors of interest present in each frame of a video. We used convolutional neural network models that compute motion in a video, extract features from motion and single frames, and classify these features into behaviors. These models classified behaviors with greater than 90% accuracy on single frames in videos of flies and mice, matching expert-level human performance. The models accurately predicted even extremely rare behaviors, required little training data, and generalized to new videos and subjects. DeepEthogram runs rapidly on common scientific computer hardware and has a graphical user interface that does not require programming by the end-user. We anticipate DeepEthogram will enable the rapid, automated, and reproducible assignment of behavior labels to every frame of a video, thus accelerating studies that quantify behaviors of interest. Code is available at: https://github.com/jbohnslav/deepethogram
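The per-frame pipeline the abstract describes (motion estimation, feature extraction, classification into an ethogram) can be illustrated with a deliberately minimal numpy sketch. DeepEthogram itself uses convolutional neural networks at each stage; here, frame differencing stands in for motion estimation, two summary statistics stand in for learned features, and a linear threshold stands in for the classifier. All names and parameter values below are illustrative assumptions, not DeepEthogram's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def compute_motion(frames):
    """Stand-in for motion estimation: absolute frame-to-frame
    differences (DeepEthogram estimates optic flow with a CNN)."""
    return np.abs(np.diff(frames, axis=0))

def extract_features(frames, motion):
    """Toy per-frame features: mean image intensity and mean
    motion energy (a real model learns features with a CNN)."""
    spatial = frames[1:].mean(axis=(1, 2))
    temporal = motion.mean(axis=(1, 2))
    return np.stack([spatial, temporal], axis=1)

def classify(features, weights, bias):
    """Linear per-frame classifier -> binary ethogram
    (behavior present / absent on each frame)."""
    logits = features @ weights + bias
    return (logits > 0).astype(int)

# 10 frames of synthetic 32x32 "video"
frames = rng.random((10, 32, 32))
motion = compute_motion(frames)
features = extract_features(frames, motion)

weights = np.array([0.0, 1.0])      # hypothetical: respond to motion only
bias = -features[:, 1].mean()       # threshold at the mean motion energy
ethogram = classify(features, weights, bias)
print(ethogram.shape)  # one binary label per analyzed frame: (9,)
```

The key structural point survives the simplification: each stage maps the full video to a fixed-length representation per frame, so the final output is one behavior label per frame, i.e. an ethogram.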
Highlights
- Exploratory walking in Drosophila is configured to maximize gaze stability
- Gaze stability is jeopardized by inevitable rotations induced by postural control
- Visual feedback supports gaze stability by tuning down postural reflexes
- Self-generated visual motion limits rotations without promoting compensatory turns
White matter (WM) microstructural features, such as axonal density and average diameter, are crucial to the normal function of the central nervous system (CNS), as they are closely related to axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion-based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axonal density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then experimentally demonstrate in ex-vivo rat spinal cords that their different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. Compared against ground-truth histology, the quantitative results reflect the axonal fraction, though with a bias, as evident from Bland-Altman analysis; the extra-axonal fraction can also be estimated. These results suggest that our model is oversimplified, yet they also evidence the potential and usefulness of the approach for mapping underlying microstructures with a simple and time-efficient MRI sequence.
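The non-monotonic decay at the heart of the method can be reproduced with a two-compartment toy simulation: intra- and extra-axonal water pools with different T2* values and different susceptibility-driven frequency offsets interfere, so the magnitude of their summed signal ripples rather than decaying monotonically. All parameter values below are illustrative guesses for the sketch, not fitted tissue values.

```python
import numpy as np

# Two-compartment toy model of the multi-gradient-echo (MGE) signal.
f_axon = 0.6                      # intra-axonal volume fraction (assumed)
f_extra = 1.0 - f_axon            # extra-axonal fraction
t2_axon, t2_extra = 0.040, 0.060  # T2* per compartment [s] (assumed)
dw_axon, dw_extra = 0.0, 150.0    # frequency offsets [rad/s] (assumed)

te = np.linspace(0.001, 0.060, 200)  # echo times [s]

# Complex sum of the two pools, then magnitude (what MGE measures).
signal = np.abs(
    f_axon * np.exp(1j * dw_axon * te - te / t2_axon)
    + f_extra * np.exp(1j * dw_extra * te - te / t2_extra)
)

# Destructive/constructive interference between the compartments makes
# the magnitude locally increase with echo time; the shape of these
# ripples carries the compartment fractions.
print(bool((np.diff(signal) > 0).any()))  # True -> non-monotonic decay
```

A single-compartment signal (set f_extra = 0) decays monotonically under the same code, which is why the non-monotonicity itself is the microstructural fingerprint.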
We further show that a simple general linear model can predict the average axonal diameters from the four model parameters, and we map these average axonal diameters in the spinal cords. While further modelling and theoretical developments are clearly necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient-echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo.
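A general linear model of the kind described, mapping the four MGE model parameters per voxel to an average axonal diameter, amounts to ordinary least squares on a design matrix with an intercept. The sketch below fits such a model on synthetic data; the coefficients and noise level are made up for illustration and are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: n voxels, 4 MGE model parameters each.
n = 200
params = rng.random((n, 4))
true_beta = np.array([1.5, -0.8, 2.0, 0.3])  # made-up coefficients
intercept = 0.5
diameters = intercept + params @ true_beta + 0.01 * rng.standard_normal(n)

# General linear model: design matrix with an intercept column,
# solved by ordinary least squares.
X = np.column_stack([np.ones(n), params])
beta_hat, *_ = np.linalg.lstsq(X, diameters, rcond=None)

predicted = X @ beta_hat  # per-voxel diameter estimates for mapping
print(np.round(beta_hat, 2))
```

Once fitted, the same `X @ beta_hat` product applied voxel-wise produces the diameter maps; goodness of fit and bias would still need the kind of histological validation the abstract describes.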