Comparative studies have greatly contributed to our understanding of the organization and function of visual pathways of the brain, including those of humans. This comparative approach is particularly useful for studying the pulvinar nucleus, an enigmatic structure that comprises the largest territory of the human thalamus. This review focuses on the regions of the mouse pulvinar that receive input from the superior colliculus and highlights similarities of the tectorecipient pulvinar identified across species. Open questions are discussed, as well as the potential contributions of the mouse model to efforts to elucidate the function of the pulvinar nucleus.
Highlights
• Mice discriminate anteroposterior object locations to ≤ 0.5 mm using a single whisker
• Mice locate objects using targeted, noisy exploration that is adaptive to touch
• Mice don't use roll angle, precise timing, or distance to locate these objects
• Whisking midpoint and the number of touches made best explain localization acuity
During active tactile exploration, the dynamic patterns of touch are transduced to electrical signals and transformed by the brain into a mental representation of the object under investigation. This transformation from sensation to perception is thought to be a major function of the mammalian cortex. In primary somatosensory cortex (S1) of mice, layer 5 (L5) pyramidal neurons are major outputs to downstream areas that influence perception, decision-making, and motor control. We investigated self-motion and touch representations in L5 of S1 with juxtacellular loose-seal patch recordings of optogenetically identified excitatory neurons. We found that during rhythmic whisker movement, 54 of 115 active neurons (47%) represented self-motion. This population was significantly more modulated by whisker angle than by phase. Upon active touch, a distinct pattern of activity was evoked across L5, which represented the whisker angle at the time of touch. Object location was decodable with submillimeter precision from the touch-evoked spike counts of a randomly sampled handful of these neurons. These representations of whisker angle during self-motion and touch were independent, both in the selection of which neurons were active and in the angle-tuning preference of coactive neurons. Thus, the output of S1 transiently shifts from a representation of self-motion to an independent representation of explored object location during active touch.
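To make the decoding claim concrete, the following is a minimal, hypothetical sketch (not the authors' analysis code) of how object location might be read out from the touch-evoked spike counts of a randomly sampled handful of neurons with a cross-validated linear decoder; the array names, sizes, and synthetic data are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Illustrative stand-ins for real data:
# spike_counts: trials x neurons matrix of touch-evoked spike counts
# object_location_mm: anteroposterior object location on each trial (mm)
n_trials, n_neurons = 200, 115
spike_counts = rng.poisson(2.0, size=(n_trials, n_neurons)).astype(float)
object_location_mm = rng.uniform(0.0, 10.0, size=n_trials)

# Randomly sample a "handful" of neurons, as described in the abstract
subset = rng.choice(n_neurons, size=10, replace=False)

# Simple cross-validated linear decoder of object location
decoder = Ridge(alpha=1.0)
predicted = cross_val_predict(decoder, spike_counts[:, subset],
                              object_location_mm, cv=5)
error_mm = np.median(np.abs(predicted - object_location_mm))
print(f"median decoding error: {error_mm:.2f} mm")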
The rodent vibrissal system remains pivotal in advancing neuroscience research, particularly for studies of cortical plasticity, learning, decision-making, sensory encoding, and sensorimotor integration. While this model system provides notable advantages for quantifying active tactile input, it is hindered by the labor-intensive process of curating touch events across millions of video frames. Even with the aid of automated tools like the Janelia Whisker Tracker, millisecond-accurate touch curation often requires >3 hours of manual review per million video frames. We address this limitation by introducing the Whisker Automatic Contact Classifier (WhACC), a Python package designed to identify touch periods from high-speed videos of head-fixed behaving rodents with human-level performance. For our model design, we train ResNet50V2 on whisker images and extract features. Next, we engineer features to improve performance, with an emphasis on temporal consistency. We then select only the most important features and use them to train a LightGBM classifier. Classification accuracy is assessed against three expert human curators on over one million frames. WhACC shows pairwise touch-classification agreement on 99.5% of video frames, equal to between-human agreement. Additionally, comparison between an expert curator and WhACC on a holdout dataset comprising nearly four million frames and 16 single-unit electrophysiology recordings shows negligible differences in neural characterization metrics. Finally, we offer an easy way to select and curate a subset of data to adaptively retrain WhACC. Including this retraining step, we reduce the human hours required to curate a 100-million-frame dataset from ~333 to ~6.
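As a rough illustration of the three-stage pipeline described above (not WhACC's actual API), the Python sketch below extracts ResNet50V2 embeddings from whisker image crops, adds simple temporally smoothed features, and trains a LightGBM touch classifier. The image size, smoothing window, hyperparameters, and use of ImageNet weights (WhACC trains its backbone on whisker images) are all illustrative assumptions.

import numpy as np
import lightgbm as lgb
from tensorflow.keras.applications import ResNet50V2
from tensorflow.keras.applications.resnet_v2 import preprocess_input

# Synthetic stand-ins: (n_frames, 96, 96, 3) whisker-pad crops, 0/1 touch labels
frames = np.random.rand(256, 96, 96, 3).astype("float32") * 255
labels = np.random.randint(0, 2, size=256)

# 1) CNN feature extraction: global-average-pooled ResNet50V2 embeddings
#    (ImageNet weights used here as a stand-in for a whisker-trained backbone)
backbone = ResNet50V2(include_top=False, weights="imagenet",
                      pooling="avg", input_shape=(96, 96, 3))
cnn_feats = backbone.predict(preprocess_input(frames), batch_size=64)

# 2) Feature engineering for temporal consistency: rolling means over
#    neighboring frames (a simple stand-in for WhACC's engineered features)
def rolling_mean(x, win=5):
    kernel = np.ones(win) / win
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, x)

features = np.hstack([cnn_feats, rolling_mean(cnn_feats)])

# 3) Gradient-boosted touch/no-touch classifier on the combined features
clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))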