The ability to control a behavioral task or stimulate neural activity based on animal behavior in real time is an important tool for experimental neuroscientists. Ideally, such tools are (1) noninvasive, (2) low-latency, and (3) provide interfaces to trigger external hardware based on posture (i.e., not just object-based tracking). Recent advances in pose estimation with deep learning allow researchers to train deep neural networks to accurately quantify a wide variety of animal behaviors. Extending our efforts on the animal pose estimation toolbox DeepLabCut, here we provide a new DeepLabCut-Live! package that achieves low-latency real-time pose estimation (within 15 ms, at >100 FPS), with an additional forward-prediction module that achieves zero-latency feedback and a dynamic-cropping mode that allows for higher inference speeds. We also provide three options for using this tool with ease: (1) a stand-alone GUI (called DLC-Live! GUI), and integration into (2) Bonsai and (3) AutoPilot. Lastly, we benchmarked performance on a wide range of systems so that experimentalists can easily decide what hardware is required for their needs.

Highlights:
- The DeepLabCut-Live! package is available via pip install deeplabcut-live (see the sketch below)
- The Bonsai-DLC plugin is available
- The AutoPilot-DLC plugin is available
- The DeepLabCut-Live! GUI package is available via pip install deeplabcut-live-gui
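Since the highlights name the pip-installable deeplabcut-live package, a minimal inference-loop sketch may help orient readers. It assumes the package's DLCLive and Processor interface (init_inference and get_pose methods); the "exported-model" path and the OpenCV camera source are hypothetical placeholders, not part of the original abstract.

# Minimal sketch of real-time pose estimation with deeplabcut-live
# (pip install deeplabcut-live). Model path and camera index are hypothetical.
import cv2
from dlclive import DLCLive, Processor

dlc_proc = Processor()  # subclass Processor to trigger external hardware on pose
dlc_live = DLCLive("exported-model", processor=dlc_proc)

cap = cv2.VideoCapture(0)          # any camera or video source
ret, frame = cap.read()
dlc_live.init_inference(frame)     # loads the network and runs a first pass

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # pose is an array of keypoints: one (x, y, confidence) row per body part
    pose = dlc_live.get_pose(frame)
    # ... use pose here to control a task or deliver posture-based feedback ...

cap.release()

In practice, the hardware-triggering logic would live in a Processor subclass, which is the hook the package provides for acting on each estimated pose.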
Neuroscience needs behavior, and behavioral experiments require the coordination of large numbers of heterogeneous hardware components and data streams. Currently available tools strongly limit the complexity and reproducibility of experiments. Here we introduce Autopilot, a complete, open-source Python framework for behavioral neuroscience that distributes experiments over networked swarms of Raspberry Pis. Autopilot enables qualitatively greater experimental flexibility by allowing arbitrary numbers of hardware components to be combined in arbitrary experimental designs. Research is made reproducible by documenting all data and task design parameters in a human-readable and publishable format at the time of collection. Autopilot provides an order-of-magnitude performance improvement over existing tools while also being an order of magnitude less costly to implement. Autopilot's flexible, scalable architecture allows neuroscientists to design the next generation of experiments to investigate the behaving brain.
Speech is perceived as a series of relatively invariant phonemes despite extreme variability in the acoustic signal. To be perceived as nearly identical phonemes, speech sounds that vary continuously over a range of acoustic parameters must be perceptually discretized by the auditory system. Such many-to-one mappings of undifferentiated sensory information to a finite number of discrete categories are ubiquitous in perception. Although many mechanistic models of phonetic perception have been proposed, they remain largely unconstrained by neurobiological data. Current human neurophysiological methods lack the necessary spatiotemporal resolution to provide it: speech is too fast, and the neural circuitry involved is too small. This study demonstrates that mice are capable of learning generalizable phonetic categories and can thus serve as a model for phonetic perception. Mice learned to discriminate consonants and generalized consonant identity across novel vowel contexts and speakers, consistent with true category learning. Given the powerful genetic and electrophysiological tools available in mice for probing neural circuits, a mouse model has the potential to substantially advance a mechanistic understanding of phonetic perception.
We present a fully open ventilator platform, The People's Ventilator (PVP1), with complete documentation and detailed build instructions, and a DIY cost of $1,300 USD. Here, we validate PVP1 against key performance criteria specified in the U.S. Food and Drug Administration's Emergency Use Authorization for Ventilators. Notably, PVP1 performs well over a wide range of test conditions and has been demonstrated to perform stably for a minimum of 72,000 breath cycles over three days with a mechanical test lung. As an open project, PVP1 can enable future educational, academic, and clinical developments in the ventilator space.
Mechanical ventilators are safety-critical devices that help patients breathe, commonly found in hospital intensive care units (ICUs)—yet, the high costs and proprietary nature of commercial ventilators inhibit their use as an educational and research platform. We present a fully open ventilator device—The People’s Ventilator: PVP1—with complete hardware and software documentation including detailed build instructions and a DIY cost of $1,700 USD. We validate PVP1 against both key performance criteria specified in the U.S. Food and Drug Administration’s Emergency Use Authorization for Ventilators, and in a pediatric context against a state-of-the-art commercial ventilator. Notably, PVP1 performs well over a wide range of test conditions and performance stability is demonstrated for a minimum of 75,000 breath cycles over three days with an adult mechanical test lung. As an open project, PVP1 can enable future educational, academic, and clinical developments in the ventilator space.