Optogenetics allows the manipulation of neural activity in freely moving animals with millisecond precision, but its application in Drosophila has been limited. Here we show that a recently described Red activatable Channelrhodopsin (ReaChR) permits control of complex behavior in freely moving adult flies, at wavelengths that are not thought to interfere with normal visual function. This tool affords the opportunity to control neural activity over a broad dynamic range of stimulation intensities. Using time-resolved activation, we show that the neural control of male courtship song can be separated into probabilistic, persistent and deterministic, command-like components. The former, but not the latter, neurons are subject to functional modulation by social experience, supporting the idea that they constitute a locus of state-dependent influence. This separation is not evident using thermogenetic tools, underscoring the importance of temporally precise control of neuronal activation in the functional dissection of neural circuits in Drosophila.
Noninvasive behavioral tracking of animals is crucial for many scientific investigations. Recent transfer learning approaches for behavioral tracking have considerably advanced the state of the art. Typically, these methods treat each video frame and each object to be tracked independently. In this work, we improve on these methods (particularly in the regime of few training labels) by leveraging the rich spatiotemporal structures pervasive in behavioral video: specifically, the spatial statistics imposed by physical constraints (e.g., paw-to-elbow distance), and the temporal statistics imposed by smoothness from frame to frame. We propose a probabilistic graphical model built on top of deep neural networks, Deep Graph Pose (DGP), to leverage these useful spatial and temporal constraints, and develop an efficient structured variational approach to perform inference in this model. The resulting semi-supervised model exploits both labeled and unlabeled frames to achieve significantly more accurate and robust tracking while requiring users to label fewer training frames. In turn, these tracking improvements enhance performance on downstream applications, including robust unsupervised segmentation of behavioral "syllables," and estimation of interpretable "disentangled" low-dimensional representations of the full behavioral video.
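The spatial and temporal constraints described above can be illustrated with a minimal sketch. This is not the DGP model or its variational inference, only a toy version of the two penalty terms such a graphical model encodes; the pair list, expected limb lengths, and tolerance are illustrative assumptions.

```python
import numpy as np

def spatial_penalty(keypoints, pairs, limb_lengths, tol=5.0):
    """Penalize deviations of connected-keypoint distances (e.g. paw to
    elbow) from their expected physical lengths, within a tolerance.
    keypoints: array of shape (T, K, 2) -- T frames, K keypoints, (x, y)."""
    total = 0.0
    for (i, j), expected in zip(pairs, limb_lengths):
        dist = np.linalg.norm(keypoints[:, i] - keypoints[:, j], axis=-1)
        excess = np.maximum(np.abs(dist - expected) - tol, 0.0)
        total += np.sum(excess ** 2)
    return total

def temporal_penalty(keypoints):
    """Penalize large frame-to-frame jumps (temporal smoothness)."""
    diffs = np.diff(keypoints, axis=0)  # (T-1, K, 2)
    return np.sum(diffs ** 2)

# Example: a two-keypoint "limb" translating smoothly keeps both terms small.
T = 10
traj = np.stack([np.linspace([0.0, 0.0], [9.0, 0.0], T),    # keypoint 0
                 np.linspace([10.0, 0.0], [19.0, 0.0], T)],  # keypoint 1
                axis=1)  # shape (T, 2, 2)
sp = spatial_penalty(traj, pairs=[(0, 1)], limb_lengths=[10.0])
tp = temporal_penalty(traj)
```

In the full model these soft constraints regularize the network's predictions on unlabeled frames, which is what lets a sparse set of hand labels go further.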
What are the spatial and temporal scales of brain-wide neuronal activity, and how do activities at different scales interact? We used SCAPE microscopy to image a large fraction of the central brain of adult Drosophila melanogaster with high spatiotemporal resolution while flies engaged in a variety of behaviors, including running, grooming and flailing. This revealed neural representations of behavior on multiple spatial and temporal scales. The activity of most neurons across the brain correlated (or, in some cases, anticorrelated) with running and flailing over timescales that ranged from seconds to almost a minute. Grooming elicited a much weaker global response. Although these behaviors accounted for a large fraction of neural activity, residual activity not directly correlated with behavior was high dimensional. Many dimensions of the residual activity reflect the activity of small clusters of spatially organized neurons that may correspond to genetically defined cell types. These clusters participate in the global dynamics, indicating that neural activity reflects a combination of local and broadly distributed components. This suggests that microcircuits with highly specified functions are provided with knowledge of the larger context in which they operate, conferring a useful balance of specificity and flexibility.
Chromatin transcriptional states are formed and maintained by the interaction and post-translational modification (PTM) of several chromatin proteins, such as histones and high mobility group (HMG) proteins. Among these, HMGA1a, a small heterochromatin-associated nuclear protein, has been shown to be post-translationally modified, and some of these PTMs have been linked to apoptosis and cancer. In cancerous cells, HMGA1a PTMs differ between metastatic and non-metastatic cell lines, suggesting the existence of an HMGA1a PTM code analogous to the "Histone Code." In this study, we expand on current knowledge by comprehensively characterizing PTMs on HMGA1a purified from human cells, using both nanoflow liquid chromatography collision activated dissociation mediated Bottom Up and electron transfer dissociation facilitated Middle and Top Down mass spectrometry (MS). We find HMGA1a to be pervasively modified with many types of modifications, such as methylation, acetylation and phosphorylation, including novel sites. While Bottom Up MS identified lower-level modification sites, Top and Middle Down MS were utilized to identify the most commonly occurring combinatorially modified forms. Remarkably, although we identify several individual modification sites through our Bottom Up and Middle Down MS analyses, we find through Top Down proteomics that relatively few combinatorially modified forms dominate the population. The main combinatorial PTMs we find through the Top Down approach are N-terminal acetylation and Arg25 methylation, along with phosphorylation of the three most C-terminal serine residues, primarily in a diphosphorylated form. This report presents one of the most detailed analyses of HMGA1a to date and illustrates the strength of using a combined MS effort.
A popular approach to quantifying animal behavior from video data is through discrete behavioral segmentation, wherein video frames are labeled as containing one or more behavior classes such as walking or grooming. Sequence models learn to map behavioral features extracted from video frames to discrete behaviors, and both supervised and unsupervised methods are common. However, each approach has its drawbacks: supervised models require a time-consuming annotation step where humans must hand label the desired behaviors; unsupervised models may fail to accurately segment particular behaviors of interest. We introduce a semi-supervised approach that addresses these challenges by constructing a sequence model loss function with (1) a standard supervised loss that classifies a sparse set of hand labels; (2) a weakly supervised loss that classifies a set of easy-to-compute heuristic labels; and (3) a self-supervised loss that predicts the evolution of the behavioral features. With this approach, we show that a large number of unlabeled frames can improve supervised segmentation in the regime of sparse hand labels and also show that a small number of hand labeled frames can increase the precision of unsupervised segmentation.
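The three-term loss enumerated above can be sketched concretely. This is a minimal illustration, not the paper's implementation: the term weights, helper functions, and input shapes are all assumptions, and the "heuristic labels" would come from whatever cheap rule (e.g. a velocity threshold) the user supplies.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of integer labels under predicted
    class probabilities of shape (N, num_classes)."""
    eps = 1e-12
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def semi_supervised_loss(probs, hand_labels, hand_mask,
                         heuristic_labels, pred_next, feat_next,
                         w_weak=0.5, w_self=0.1):
    """Combine the three terms described in the abstract:
    (1) supervised CE on the sparse hand-labeled frames only,
    (2) weakly supervised CE on easy-to-compute heuristic labels,
    (3) self-supervised MSE predicting the next frame's features."""
    sup = cross_entropy(probs[hand_mask], hand_labels[hand_mask])
    weak = cross_entropy(probs, heuristic_labels)
    self_sup = np.mean((pred_next - feat_next) ** 2)
    return sup + w_weak * weak + w_self * self_sup

# Toy example: 2 frames, 2 behavior classes, only frame 0 hand-labeled.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
hand_labels = np.array([0, 1])
hand_mask = np.array([True, False])   # sparse hand labels
heuristic_labels = np.array([0, 1])   # e.g. from a velocity threshold
pred_next = np.zeros((2, 3))          # predicted next-frame features
feat_next = np.zeros((2, 3))          # actual next-frame features
loss = semi_supervised_loss(probs, hand_labels, hand_mask,
                            heuristic_labels, pred_next, feat_next)
```

The key design point is that terms (2) and (3) apply to every frame, so unlabeled video still shapes the model, while term (1) anchors it to the behaviors the user actually cares about.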