Summary
Neuronal representations change as associations are learned between sensory stimuli and behavioral actions. However, it is poorly understood whether representations for learned associations stabilize in cortical association areas or continue to change following learning. We tracked the activity of posterior parietal cortex neurons for a month as mice stably performed a virtual-navigation task. The relationship between cells’ activity and task features was mostly stable on single days but underwent major reorganization over weeks. The neurons informative about task features (trial type and maze locations) changed across days. Despite changes in individual cells, the population activity had statistically similar properties each day and stable information for over a week. As mice learned additional associations, new activity patterns emerged in the neurons used for existing representations without greatly affecting the rate of change of these representations. We propose that dynamic neuronal activity patterns could balance plasticity for learning and stability for memory.
Highlights
- Activity was densely sampled across posterior mouse cortex during a navigation task
- Encoding was distributed and varied gradually across higher visual and parietal areas
- Areas were discriminable based on encoding profiles, not compartmentalized encoding
- Multimodal representations emerged where single-feature representations overlapped
Calcium imaging is a key method in neuroscience for investigating patterns of neuronal activity in vivo. Still, existing algorithms to detect and extract activity signals from calcium-imaging movies have major shortcomings. We introduce the HNCcorr algorithm for cell identification in calcium-imaging datasets that addresses these shortcomings. HNCcorr relies on the combinatorial clustering problem HNC (Hochbaum’s Normalized Cut), which is similar to the Normalized Cut problem of Shi and Malik, a well-known problem in image segmentation. HNC identifies cells as coherent clusters of pixels that are highly distinct from the remaining pixels. HNCcorr guarantees a globally optimal solution to the underlying optimization problem as well as minimal dependence on initialization techniques. HNCcorr also uses a new method, called “similarity squared”, for measuring similarity between pixels in calcium-imaging movies. The effectiveness of HNCcorr is demonstrated by its top performance on the Neurofinder cell identification benchmark. We believe HNCcorr is an important addition to the toolbox for analysis of calcium-imaging movies.
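The two ideas this abstract names — a second-order “similarity squared” measure between pixels, and identifying a cell as a coherent cluster of pixels distinct from the rest — can be illustrated with a toy NumPy sketch. All names here are ours, and the greedy threshold rule is only a stand-in: the real HNC solves a global combinatorial optimization, not a greedy pass.

```python
import numpy as np

def similarity_squared(movie):
    """Illustrative 'similarity squared': compare pixels via the
    correlation of their correlation profiles, not their raw traces.
    movie: (T, P) array of fluorescence traces for P pixels."""
    # First-order similarity: pairwise temporal correlation of pixel traces.
    corr = np.corrcoef(movie.T)   # (P, P)
    # Second-order similarity: correlate each pixel's row of correlations.
    return np.corrcoef(corr)      # (P, P)

def grow_cell(sim2, seed, threshold=0.5):
    """Toy stand-in for HNC clustering: collect pixels whose second-order
    similarity to a seed pixel exceeds a threshold."""
    return np.flatnonzero(sim2[seed] > threshold)
```

Second-order similarity is robust to noisy individual traces because two pixels in the same cell agree not only with each other, but in how they relate to every other pixel in the movie.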
Open-vocabulary object detection has benefited greatly from pretrained vision-language models, but is still limited by the amount of available detection training data. While detection training data can be expanded by using Web image-text pairs as weak supervision, this has not been done at scales comparable to image-level pretraining. Here, we scale up detection data with self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. Major challenges in scaling self-training are the choice of label space, pseudo-annotation filtering, and training efficiency. We present the OWLv2 model and OWL-ST self-training recipe, which address these challenges. OWLv2 surpasses the performance of previous state-of-the-art open-vocabulary detectors already at comparable training scales (≈10M examples). However, with OWL-ST, we can scale to over 1B examples, yielding a further large improvement: with an L/14 architecture, OWL-ST improves AP on LVIS rare classes, for which the model has seen no human box annotations, from 31.2% to 44.6% (43% relative improvement). OWL-ST unlocks Web-scale training for open-world localization, similar to what has been seen for image classification and language modelling.
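The self-training step described here — query an existing detector with terms drawn from an image's caption, then keep only confident pseudo-boxes for training — can be sketched as follows. Every name is illustrative; this is not the OWLv2 API, and the single score threshold is a simplification of the paper's pseudo-annotation filtering.

```python
from dataclasses import dataclass

@dataclass
class PseudoBox:
    label: str
    box: tuple   # (x0, y0, x1, y1) in pixel coordinates
    score: float

def pseudo_annotate(detector, image, caption_terms, score_threshold=0.3):
    """Illustrative OWL-ST-style step: run an existing open-vocabulary
    detector with the caption's terms as the label space, then filter
    the resulting pseudo-boxes by confidence before self-training."""
    candidates = detector(image, caption_terms)  # -> list[PseudoBox]
    return [b for b in candidates if b.score >= score_threshold]
```

Because the label space comes from each image's own caption, the pseudo-annotations cover an open vocabulary without any human box labels.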
Non-technical summary
Optical imaging is widely used to map functional areas of the cerebral cortex. We present a method for fast fluorescence imaging of map-level cortical activity using a calcium indicator protein. Sensory-evoked neuronal activity can be imaged repeatedly in the same mouse over weeks, enabling new opportunities for the longitudinal study of cortical function and dysfunction. We hope this method will be flexibly applied across different cortical areas and to a variety of newly developed genetically encoded calcium and voltage sensors.

Abstract
In vivo optical imaging can reveal the dynamics of large-scale cortical activity, but methods for chronic recording are limited. Here we present a technique for long-term investigation of cortical map dynamics using wide-field ratiometric fluorescence imaging of the genetically encoded calcium indicator (GECI) Yellow Cameleon 3.60. We find that wide-field GECI signals report sensory-evoked activity in anaesthetized mouse somatosensory cortex with high sensitivity and spatiotemporal precision, and furthermore, can be measured repeatedly in separate imaging sessions over multiple weeks. This method opens new possibilities for the longitudinal study of stability and plasticity of cortical sensory representations.
Extracting and predicting object structure and dynamics from videos without supervision is a major challenge in machine learning. To address this challenge, we adopt a keypoint-based image representation and learn a stochastic dynamics model of the keypoints. Future frames are reconstructed from the keypoints and a reference frame. By modeling dynamics in the keypoint coordinate space, we achieve stable learning and avoid compounding of errors in pixel space. Our method improves upon unstructured representations both for pixel-level video prediction and for downstream tasks requiring object-level understanding of motion dynamics. We evaluate our model on diverse datasets: a multi-agent sports dataset, the Human3.6M dataset, and datasets based on continuous control tasks from the DeepMind Control Suite. The spatially structured representation outperforms unstructured representations on a range of motion-related tasks such as object tracking, action recognition and reward prediction.
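The core idea — predict dynamics in a low-dimensional keypoint coordinate space rather than in pixel space, so that errors do not compound across reconstructed frames — can be sketched in a few lines. The function names and the drift-plus-noise form are ours, standing in for the paper's learned stochastic dynamics model.

```python
import numpy as np

def rollout_keypoints(k0, step_fn, horizon, noise_std=0.01, rng=None):
    """Illustrative keypoint-space rollout: future object structure is
    predicted as a trajectory of keypoint coordinates, from which frames
    could later be reconstructed with a reference image.
    k0: (K, 2) initial keypoint coordinates."""
    rng = rng if rng is not None else np.random.default_rng(0)
    traj = [k0]
    for _ in range(horizon):
        k = traj[-1]
        # Deterministic drift plus Gaussian noise = a stochastic dynamics model.
        traj.append(step_fn(k) + rng.normal(0.0, noise_std, size=k.shape))
    return np.stack(traj)  # (horizon + 1, K, 2)
```

Because each step operates on a (K, 2) array instead of full frames, a prediction error shifts a keypoint slightly rather than corrupting the pixels that all later predictions are conditioned on.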