During axon pathfinding, growth cones commonly show changes in sensitivity to guidance cues that follow a cell-intrinsic timetable. The cellular timer mechanisms that regulate such changes are, however, poorly understood. Here we have investigated microRNAs (miRNAs) in the timing control of sensitivity to the semaphorin Sema3A in Xenopus laevis retinal ganglion cell (RGC) growth cones. A developmental profiling screen identified miR-124 as a candidate timer. Loss of miR-124 delayed the onset of Sema3A sensitivity and concomitant neuropilin-1 (NRP1) receptor expression, and caused cell-autonomous pathfinding errors. CoREST, a cofactor of an NRP1 repressor, was newly identified as a target and mediator of miR-124 for this highly specific temporal aspect of RGC growth cone responsiveness. Our findings indicate that miR-124 is important in regulating the intrinsic temporal changes in RGC growth cone sensitivity and suggest that miRNAs may act broadly as linear timers in vertebrate neuronal development.

Axons navigate in a complex and changing environment to establish connections with their targets. Chemotropic cues in this environment attract and repel growing axons1,2, and growth cones must modulate their responsiveness en route to avoid stalling at attractive intermediate targets or invading non-targets. Growth cones of commissural neurons in the vertebrate spinal cord, for example, are initially attracted to Netrin-1 and unresponsive to Slits, but, after crossing the midline (an intermediate target), they become unresponsive to Netrin-1 and repelled by Slits3,4. Similarly, RGC axons change their responsiveness to several cues as they advance along the pathway5-7, with growth cones initially showing attraction to Netrin-1 and neutral responses to repellents (Sema3A and Slit2) and later
A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist our physical labours, and communicate with us through natural language. Here we study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment. This setting nevertheless integrates a number of the central challenges of artificial intelligence (AI) research: complex visual perception and goal-directed physical control, grounded language comprehension and production, and multi-agent social interaction. To build agents that can robustly interact with humans, we would ideally train them while they interact with humans. However, this is presently impractical. Therefore, we approximate the role of the human with another learned agent, and use ideas from inverse reinforcement learning to reduce the disparities between human-human and agent-agent interactive behaviour. Rigorously evaluating our agents poses a great challenge, so we develop a variety of behavioural tests, including evaluation by humans who watch videos of agents or interact directly with them. These evaluations convincingly demonstrate that interactive training and auxiliary losses improve agent behaviour beyond what is achieved by supervised learning of actions alone. Further, we demonstrate that agent capabilities generalise beyond literal experiences in the dataset. Finally, we train evaluation models whose ratings of agents agree well with human judgement, thus permitting the evaluation of new agent models without additional effort. Taken together, our results in this virtual environment provide evidence that large-scale human behavioural imitation is a promising tool to create intelligent, interactive agents, and that the challenge of reliably evaluating such agents can be surmounted. See videos for an overview of the manuscript, training time-lapse, and human-agent interactions.
The dm_control software package is a collection of Python libraries and task suites for reinforcement learning agents in an articulated-body simulation. Infrastructure includes a wrapper for the MuJoCo physics engine and libraries for procedural model manipulation and task authoring. Task suites include the Control Suite, a set of standardized tasks intended to serve as performance benchmarks, a locomotion framework and task families, and a set of manipulation tasks with a robot arm and snap-together bricks. An adjunct tech report and interactive tutorial are also provided.
Understanding how neurons acquire specific response properties is a major goal in neuroscience. Recent studies in mouse neocortex have shown that “sister neurons” derived from the same cortical progenitor cell have a greater probability of forming synaptic connections with one another [1, 2] and are biased to respond to similar sensory stimuli [3, 4]. However, it is unknown whether such lineage-based rules contribute to functional circuit organization across different species and brain regions [5]. To address this question, we examined the influence of lineage on the response properties of neurons within the optic tectum, a visual brain area found in all vertebrates [6]. Tectal neurons possess well-defined spatial receptive fields (RFs) whose center positions are retinotopically organized [7]. If lineage relationships do not influence the functional properties of tectal neurons, one prediction is that the RF positions of sister neurons should be no more (or less) similar to one another than those of neighboring control neurons. To test this prediction, we developed a protocol to unambiguously identify the daughter neurons derived from single tectal progenitor cells in Xenopus laevis tadpoles. We combined this approach with in vivo two-photon calcium imaging in order to characterize the RF properties of tectal neurons. Our data reveal that the RF centers of sister neurons are significantly more similar than would be expected by chance. Ontogenetic relationships therefore influence the fine-scale topography of the retinotectal map, indicating that lineage relationships may represent a general and evolutionarily conserved principle that contributes to the organization of neural circuits.
We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world. We show that models trained to predict proprioceptive information about the agent's body come to represent objects in the external world. In spite of being trained with only internally available signals, these dynamic body models come to represent external objects through the necessity of predicting their effects on the agent's own body. That is, the model learns holistic persistent representations of objects in the world, even though the only training signals are body signals. Our dynamics model is able to successfully predict distributions over 132 sensor readings over 100 steps into the future and we demonstrate that even when the body is no longer in contact with an object, the latent variables of the dynamics model continue to represent its shape. We show that active data collection by maximizing the entropy of predictions about the body (touch sensors, proprioception and vestibular information) leads to learning of dynamics models that show superior performance when used for control. We also collect data from a real robotic hand and show that the same models can be used to answer questions about properties of objects in the real world. Videos with qualitative results of our models are available at https://goo.gl/mZuqAV.