The study of animal behavior has been revolutionized by sophisticated methodologies that identify and track individuals in video recordings. Video recording of behavior, however, is challenging for many species and habitats including fishes that live in turbid water. Here we present a methodology for identifying and localizing weakly electric fishes on the centimeter scale with subsecond temporal resolution based solely on the electric signals generated by each individual. These signals are recorded with a grid of electrodes and analyzed using a two-part algorithm that identifies the signals from each individual fish and then estimates the position and orientation of each fish using Bayesian inference. Interestingly, because this system involves eavesdropping on electrocommunication signals, it permits monitoring of complex social and physical interactions in the wild. This approach has potential for large-scale non-invasive monitoring of aquatic habitats in the Amazon basin and other tropical freshwater systems.
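The grid-based localization step described above can be sketched in miniature. The snippet below is a hedged illustration, not the authors' implementation: it assumes a toy point-source field model (the published method models the fish as a dipole and also estimates orientation), and it recovers a simulated fish's position as the maximum-a-posteriori point on a candidate grid under a flat prior and Gaussian measurement noise.

```python
import numpy as np

# Hypothetical sketch of Bayesian position estimation from an electrode
# grid. Electrode layout, field model, and noise level are illustrative
# assumptions, not values from the paper.

rng = np.random.default_rng(0)

# 4x4 grid of electrodes, 50 cm spacing (x, y in metres)
ex, ey = np.meshgrid(np.arange(4) * 0.5, np.arange(4) * 0.5)
electrodes = np.column_stack([ex.ravel(), ey.ravel()])

def signal_amplitude(src, electrodes):
    """Toy point-source model: amplitude falls off as 1/r^2."""
    r = np.linalg.norm(electrodes - src, axis=1)
    return 1.0 / (r**2 + 1e-3)  # small constant avoids the singularity at r = 0

# Simulate a fish at a known position, with measurement noise
true_pos = np.array([0.8, 1.1])
measured = signal_amplitude(true_pos, electrodes) + rng.normal(0, 0.01, len(electrodes))

# Posterior over a 1 cm grid of candidate positions (flat prior;
# Gaussian noise makes the log-likelihood a negative squared error)
xs = np.linspace(0, 1.5, 151)
ys = np.linspace(0, 1.5, 151)
log_post = np.empty((len(xs), len(ys)))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        pred = signal_amplitude(np.array([x, y]), electrodes)
        log_post[i, j] = -np.sum((measured - pred) ** 2) / (2 * 0.01**2)

i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
map_estimate = np.array([xs[i], ys[j]])
print(map_estimate)  # close to true_pos on the centimetre scale
```

Because the grid spacing is 1 cm, the MAP estimate here illustrates the centimetre-scale resolution claimed for the real system, though the real likelihood is built on a dipole model rather than this toy monopole.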
Figure 1 | Dome apparatus, experimental procedure, and sample data. a, Rendering of the dome apparatus. The dome shell is rendered semi-transparent for illustrative purposes. b, Photo of the apparatus. The dome is raised in the photo to allow visualization of the interior, but it is lowered as in (a) for the experiment. c, Illustration of experimental gain G. From the same initial positions of the landmarks and rat, three different gain conditions are shown, in both lab (top) and landmark (bottom) frames of reference. In each case, the rat runs 90° in the lab frame. d, Profile of gain change and epochs during a typical session. An annular ring is always projected at the top of the dome (as shown in (a)) for illumination purposes and is not turned off even in Epoch 4. e, Representative firing rate maps for five different units from five separate gain manipulation sessions, shown in the lab frame (top, middle rows) and landmark frame (bottom row) during Epoch 3 (when the experimental gain was constant). The plots in the top row are color scaled to their own individual maximum firing rates, whereas the middle and bottom row plots are color scaled to the maximum firing rate of the bottom plot of each pair. The difference in spatially averaged firing rates between landmark and lab frames results from the distributed firing of the cells over the entire track in the l…
Summary: Hippocampal place cells are spatially tuned neurons that serve as elements of a "cognitive map" in the mammalian brain [1]. To detect the animal's location, place cells are thought to rely upon two interacting mechanisms: sensing the animal's position relative to familiar landmarks [2,3] and measuring the distance and direction that the animal has traveled from previously occupied locations [4–7]. The latter mechanism, known as path integration, requires a finely tuned gain factor that relates the animal's self-movement to the updating of position on the internal cognitive map, with external landmarks necessary to correct positional error that accumulates [8,9]. Path-integration-based models of hippocampal place cells and entorhinal grid cells treat the path integration gain as a constant [9–14], but behavioral evidence in humans suggests that the gain is modifiable [15]. Here we show physiological evidence from hippocampal place cells that the path integration gain is indeed a highly plastic variable that can be altered by persistent conflict between self-motion cues and feedback from external landmarks. In a novel augmented reality system, visual landmarks were moved in proportion to the animal's movement on a circular track, creating continuous conflict with path integration. Sustained exposure to this cue conflict resulted in predictable and prolonged recalibration of the path integration gain, as estimated from the place cells after the landmarks were extinguished. We propose that this rapid plasticity keeps the positional update in register with the animal's movement in the external world over behavioral timescales. These results also demonstrate that visual landmarks not only provide a signal to correct cumulative error in the path integration system [4,8,16–19], but also rapidly fine-tune the integration computation itself.
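The landmark manipulation lends itself to a small simulation. Assuming (this definition is not spelled out in the abstract) that the experimental gain G is the ratio of the animal's movement in the landmark frame to its movement in the lab frame, the landmarks must be rotated by (1 − G) times the animal's movement; G = 1 then corresponds to stationary landmarks and no cue conflict.

```python
# Hedged sketch of the cue-conflict manipulation on a circular track.
# Assumption (not stated in the abstract): gain G = movement in the
# landmark frame / movement in the lab frame, so landmarks are rotated
# by (1 - G) times each movement increment. G = 1 -> no conflict.

def step(rat_lab_deg, landmark_deg, d_rat_deg, gain):
    """Advance one update: the rat moves d_rat_deg in the lab frame and
    the landmarks are moved to create the commanded gain."""
    rat_lab_deg += d_rat_deg
    landmark_deg += (1.0 - gain) * d_rat_deg      # landmark rotation
    rat_landmark_deg = rat_lab_deg - landmark_deg  # position in landmark frame
    return rat_lab_deg, landmark_deg, rat_landmark_deg

# The rat runs 90 degrees in the lab frame under three gain conditions,
# mirroring the three conditions illustrated in Figure 1c
for gain in (0.5, 1.0, 1.5):
    rat, lm, rel = 0.0, 0.0, 0.0
    for _ in range(90):
        rat, lm, rel = step(rat, lm, 1.0, gain)
    print(gain, rel)  # the rat has moved gain * 90 deg in the landmark frame
```

Under this convention, G < 1 makes the landmarks chase the animal (it covers less landmark-frame distance than lab-frame distance), and G > 1 makes them recede; sustained exposure to either is the "persistent conflict" the abstract describes.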
Camera images can encode large amounts of visual information about an animal and its environment, enabling high-fidelity 3D reconstruction of both using computer vision methods. Most systems, whether markerless (e.g., deep-learning-based) or marker-based, require multiple cameras to track features across multiple points of view to enable such 3D reconstruction. However, such systems can be expensive and are challenging to set up in small animal research apparatuses. We present an open-source, marker-based system for tracking the head of a rodent for behavioral research that requires only a single camera with a potentially wide field of view. The system features a lightweight visual target and computer vision algorithms that together enable high-accuracy tracking of the six-degree-of-freedom position and orientation of the animal's head. The system, which requires only a single camera positioned above the behavioral arena, robustly reconstructs the pose over a wide range of head angles (360° in yaw and approximately ±120° in roll and pitch). Experiments with live animals demonstrate that the system can reliably identify rat head position and orientation. Evaluations using a commercial optical tracker show that the system achieves accuracy rivaling that of commercial multi-camera systems. Our solution significantly improves upon existing monocular marker-based tracking methods, both in accuracy and in allowable range of motion. The proposed system enables the study of complex behaviors by providing robust, fine-scale measurements of rodent head motion over a wide range of orientations.
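The abstract does not give the pose-recovery algorithm, but the geometry it relies on can be illustrated. The sketch below uses the Kabsch algorithm to recover the rotation and translation that map a marker's known 3D model points onto their observed positions; this is a standard building block of rigid pose estimation, offered here as a hedged illustration rather than the authors' actual monocular method (which works from a single 2D image).

```python
import numpy as np

# Hedged illustration: least-squares rigid pose (Kabsch algorithm) from
# known 3D marker geometry to observed 3D point positions. The marker
# layout and pose below are invented for the example.

def kabsch(model, observed):
    """Least-squares rigid transform with observed ~= R @ model + t."""
    cm, co = model.mean(axis=0), observed.mean(axis=0)
    H = (model - cm).T @ (observed - co)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t

# Marker model: four non-coplanar points on a head mount (metres)
model = np.array([[0, 0, 0], [0.02, 0, 0], [0, 0.02, 0], [0, 0, 0.01]])

# Ground-truth pose: 30 degrees of yaw plus a translation
yaw = np.radians(30)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, 0.05, 0.3])
observed = model @ R_true.T + t_true

R_est, t_est = kabsch(model, observed)
print(np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])))  # recovers ~30.0 deg yaw
```

A full monocular pipeline would first detect the marker's 2D features and solve a perspective-n-point problem to get the observed 3D configuration; the Kabsch step shown here is only the final rigid-alignment ingredient.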