Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, and by the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified six basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. This robustness could reflect a homeostatic neural control of "housekeeping" behaviors that may already have been present in the earliest nervous systems.
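The classification step described above, matching extracted motion and shape features against a dictionary of visual features, can be sketched as a nearest-prototype classifier. This is a minimal illustration only: the behavior names are from the Hydra literature, but the two-dimensional feature vectors and the nearest-prototype rule are simplifying assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical feature prototypes (the "dictionary") for three behaviors.
# Each vector is [shape aspect ratio, motion energy]; the values are
# illustrative assumptions, not measurements from the paper.
prototypes = {
    "elongation": np.array([0.9, 0.1]),
    "contraction": np.array([0.2, 0.8]),
    "swaying": np.array([0.5, 0.5]),
}

def classify(features):
    """Label a per-frame feature vector with its nearest behavior prototype."""
    names = list(prototypes)
    dists = [np.linalg.norm(features - prototypes[n]) for n in names]
    return names[int(np.argmin(dists))]
```

In practice the dictionary would be learned from annotated video (or, for the unannotated behaviors, discovered by unsupervised clustering), and the features would be richer than two dimensions; the structure of the decision, however, stays the same.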
Neuronal ensembles are groups of neurons with coordinated activity that could represent sensory, motor, or cognitive states. The study of how neuronal ensembles are built, recalled, and involved in guiding complex behaviors has been limited by the lack of experimental and analytical tools to reliably identify and manipulate neurons that can activate entire ensembles. Such pattern completion neurons have also been proposed as key elements of artificial and biological neural networks. Indeed, the relevance of pattern completion neurons is highlighted by growing evidence that targeting them can activate neuronal ensembles and trigger behavior. As a method to reliably detect pattern completion neurons, we use conditional random fields (CRFs), a type of probabilistic graphical model. We apply CRFs to identify pattern completion neurons in ensembles in experiments using in vivo two-photon calcium imaging from primary visual cortex of male mice and confirm the CRF predictions with two-photon optogenetics. To test the broader applicability of CRFs, we also analyze publicly available calcium imaging data (Allen Institute Brain Observatory dataset) and demonstrate that CRFs can reliably identify neurons that predict specific features of visual stimuli. Finally, to explore the scalability of CRFs, we apply them to in silico network simulations and show that CRF-identified pattern completion neurons have increased functional connectivity. These results demonstrate the potential of CRFs to characterize and selectively manipulate neural circuits.

SIGNIFICANCE STATEMENT We describe a graph theory method to identify and optically manipulate neurons with pattern completion capability in mouse cortical circuits. Using calcium imaging and two-photon optogenetics in vivo, we confirm that key neurons identified by this method can recall entire neuronal ensembles.
This method could be broadly applied to manipulate neuronal ensemble activity to trigger behavior or for therapeutic applications in brain prostheses.
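The intuition behind ranking candidate pattern completion neurons can be sketched with a much simpler proxy than a full CRF: score each neuron in an ensemble by how selectively it fires during the ensemble's frames. This is a crude stand-in for the CRF node-potential criterion; the scoring rule and the toy activity matrix below are illustrative assumptions, not the paper's method or data.

```python
import numpy as np

def pattern_completion_candidates(activity, ensemble_frames, k=1):
    """Rank neurons by (mean activity inside the ensemble's frames) minus
    (mean activity outside them) and return the top-k neuron indices.

    activity        : binary neurons-by-frames matrix
    ensemble_frames : boolean mask over frames
    """
    inside = activity[:, ensemble_frames].mean(axis=1)
    outside = activity[:, ~ensemble_frames].mean(axis=1)
    return np.argsort(inside - outside)[::-1][:k]

# Toy binarized activity (illustrative, not real calcium-imaging data).
activity = np.array([[1, 1, 1, 0, 0],   # neuron 0: active only with ensemble
                     [1, 1, 0, 0, 0],
                     [0, 1, 1, 0, 1],
                     [0, 0, 0, 1, 1]])
ensemble_frames = np.array([True, True, True, False, False])
```

A CRF additionally models pairwise dependencies between neurons, which is what lets it distinguish a neuron that merely co-fires from one whose activation is predictive of the whole ensemble; this sketch captures only the node-level selectivity part of that idea.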
In the task of activity recognition in videos, computing the video representation often involves pooling feature vectors over spatially local neighborhoods. The pooling is done over the entire video, over coarse spatio-temporal pyramids, or over pre-determined rigid cuboids. Similar to pooling image features over superpixels, it is natural to consider pooling spatio-temporal features over video segments, e.g., supervoxels. However, since the number of segments is variable, this produces a video representation of variable size. We propose Motion Words, a new fixed-size video representation in which we pool features over supervoxels. To segment the video into supervoxels, we explore two recent video segmentation algorithms. The proposed representation enables localization of common regions across videos in both space and time. Importantly, since the video segments are meaningful regions, we can interpret the proposed features and obtain a better understanding of why two videos are similar. Evaluation on classification and retrieval tasks on two datasets further shows that Motion Words achieves state-of-the-art performance.
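The fixed-size property comes from quantizing a variable number of per-supervoxel descriptors against a learned codebook and pooling the assignments into a histogram, in the spirit of a bag-of-words model. The sketch below assumes 2-D descriptors and a tiny hand-written codebook purely for illustration; the paper's actual descriptors and codebook construction differ.

```python
import numpy as np

def motion_words(supervoxel_feats, codebook):
    """Fixed-size video representation: assign each supervoxel's descriptor
    to its nearest codeword, then return a normalized histogram of codeword
    counts. Whatever the number of supervoxels a video yields, the output
    length always equals the codebook size."""
    # Pairwise squared distances: (n_supervoxels, n_codewords)
    d = ((supervoxel_feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Because each histogram bin corresponds to a codeword, and each codeword is matched by concrete supervoxels, one can trace which video regions made two representations similar, which is the interpretability property the abstract highlights.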
We consider the problem of quantizing data generated from disparate sources, e.g. subjects performing actions with different styles, movies with particular genre bias, various conditions in which images of objects are taken, etc. These are scenarios where unsupervised clustering produces inadequate codebooks because algorithms like K-means tend to cluster samples based on data biases (e.g. cluster subjects), rather than cluster similar samples across sources (e.g. cluster actions). We propose a new quantization technique, Source Constrained Clustering (SCC), which extends the K-means algorithm by enforcing clusters to group samples from multiple sources. We evaluate the method in the context of activity recognition from videos in an unconstrained environment. Experiments on several tasks and features show that using source information improves classification performance.
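The core idea, a K-means-style assignment step that penalizes putting a sample into a cluster already dominated by its own source, can be sketched as follows. This is a minimal sketch under assumptions: the greedy assignment order, the dominance penalty, and the `lam` weight are illustrative choices, not the paper's exact SCC formulation.

```python
import numpy as np

def scc_assign(X, sources, centers, lam=1.0):
    """One source-constrained assignment pass: each sample's squared distance
    to a center is inflated by how much the sample's source already dominates
    that cluster, nudging clusters to mix samples from multiple sources."""
    n, k = len(X), len(centers)
    labels = np.full(n, -1)
    counts = np.zeros(k)      # samples assigned per cluster so far
    src_counts = {}           # (cluster, source) -> count so far
    # Greedily assign the most confident samples (closest to some center) first.
    order = np.argsort(((X[:, None] - centers[None]) ** 2).sum(-1).min(1))
    for i in order:
        d = ((X[i] - centers) ** 2).sum(-1)
        pen = np.array([src_counts.get((c, sources[i]), 0) / (counts[c] + 1)
                        for c in range(k)])
        c = int(np.argmin(d + lam * pen))
        labels[i] = c
        counts[c] += 1
        src_counts[(c, sources[i])] = src_counts.get((c, sources[i]), 0) + 1
    return labels
```

In a full clustering loop this assignment step would alternate with the usual K-means center update; with `lam = 0` it reduces to plain nearest-center assignment, which is the biased behavior the abstract argues against.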
Breaking the neural code requires the characterization of physiological and behavioral correlates of neuronal ensemble activity. To understand how the emergent properties of neuronal ensembles allow an internal representation of the external world, it is necessary to generate empirically grounded models that fully capture ensemble dynamics. We used machine learning techniques, often applied in big-data pattern recognition, to identify and target cortical ensembles from mouse primary visual cortex in vivo, leveraging recent developments in optical techniques that allow the simultaneous recording and manipulation of neuronal ensembles with single-cell precision. Conditional random fields (CRFs) allowed us not only to identify cortical ensembles representing visual stimuli, but also to individually target neurons that are functionally key for pattern completion. These results represent a proof of principle that machine learning techniques can be used to design closed-loop behavioral experiments involving the precise manipulation of functional cortical ensembles.