A theoretical framework for understanding movement preparation is proposed. Movement parameters are represented by activation fields: distributions of activation defined over metric spaces. The fields evolve under the influence of various sources of localized input, which represent information about upcoming movements. Localized patterns of activation self-stabilize through cooperative and competitive interactions within the fields. The task environment is represented by a second class of fields, which preshape the movement parameter representation. The model accounts for a sizable body of empirical findings on movement initiation (the continuous and graded nature of movement preparation, the dependence on the metrics of the task, the stimulus uncertainty effect, stimulus-response compatibility effects, the Simon effect, the precuing paradigm, and others) and suggests new ways of exploring the structure of motor representations.
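As a minimal illustration of the field dynamics described in this abstract, the Python sketch below simulates a one-dimensional Amari-type activation field: a localized input drives a peak that self-stabilizes through local cooperative (excitatory) and global competitive (inhibitory) interactions. All parameter values and the particular sigmoid nonlinearity are illustrative assumptions, not values taken from the model itself.

```python
import numpy as np

# One-dimensional Amari-type dynamic neural field. A localized input drives
# a peak of activation that self-stabilizes through local excitation and
# global inhibition. All parameters are illustrative assumptions.

x = np.linspace(-10.0, 10.0, 201)        # metric dimension (e.g. movement direction)
dx = x[1] - x[0]
h, tau, dt = -5.0, 10.0, 1.0             # resting level, time constant, time step
u = np.full_like(x, h)                   # field activation u(x, t), starts at rest

def kernel(d, c_exc=4.0, sig_exc=1.5, c_inh=1.0):
    """Lateral interaction w(d): local excitation minus global inhibition."""
    return c_exc * np.exp(-d**2 / (2.0 * sig_exc**2)) - c_inh

W = kernel(x[:, None] - x[None, :])      # interaction matrix w(x - x')
stim = 6.0 * np.exp(-(x - 2.0)**2 / 2.0) # localized input centred at x = 2

for _ in range(500):
    f = 1.0 / (1.0 + np.exp(-u))         # sigmoidal output nonlinearity f(u)
    # tau * du/dt = -u + h + S(x) + integral of w(x - x') f(u(x')) dx'
    u += (dt / tau) * (-u + h + stim + (W @ f) * dx)

print("self-stabilized peak near x =", x[np.argmax(u)])
```

The second class of fields mentioned in the abstract would enter such a sketch as an additional, weaker input term that preshapes u before the imperative stimulus arrives.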
Psychophysical evidence in humans indicates that localization differs for stationary flashed and coherently moving objects. To address how the primary visual cortex represents object position, we used a population approach that pools the spiking activity of many neurones in cat area 17. In response to flashed stationary squares (0.4 deg) we obtained localized activity distributions in visual field coordinates, which we refer to as profiles across a 'population receptive field' (PRF). Here we show how motion trajectories can be derived from activity across the PRF and how the representation of moving and flashed stimuli differs in position. We found that motion was represented by peaks of population activity that followed the stimulus with a speed-dependent lag. However, time-to-peak latencies were ∼16 ms shorter than for population responses to stationary flashes. In addition, the motion representation showed a directional bias: latencies were reduced more for peripheral-to-central motion than for motion in the opposite direction. We suggest that a moving stimulus provides 'preactivation' that allows more rapid processing than a single flash event.
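The population read-out described above can be sketched in a few lines. The following Python toy (an assumption-laden illustration, not the authors' analysis pipeline) pools Gaussian position-tuned units into a population profile and decodes stimulus position as the activity-weighted average of receptive-field centres; a fixed response latency then shows up as a spatial lag that grows with stimulus speed, as reported for the PRF.

```python
import numpy as np

# Toy population read-out: Gaussian position-tuned units are pooled into a
# population profile; stimulus position is decoded as the activity-weighted
# average of receptive-field centres. Tuning width, response latency, and
# stimulus speed are assumed values, not measurements.

centers = np.linspace(0.0, 10.0, 40)     # RF centres along one visual field axis (deg)
sigma = 0.8                              # tuning width (deg)
latency_ms = 40.0                        # fixed response latency (ms)
speed = 0.02                             # stimulus speed (deg/ms)

def profile(stim_pos):
    """Population activity for a stimulus at stim_pos."""
    return np.exp(-(centers - stim_pos)**2 / (2.0 * sigma**2))

for t in (150.0, 250.0, 350.0):
    seen = speed * (t - latency_ms)      # position the population 'sees' at time t
    rates = profile(seen)
    decoded = np.sum(rates * centers) / np.sum(rates)
    true = speed * t
    print(f"t={t:.0f} ms  true={true:.2f} deg  decoded={decoded:.2f} deg  "
          f"spatial lag={true - decoded:.2f} deg")
```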
In recent decades, researchers have proposed a large number of theoretical models of timing. These models make different assumptions about how animals learn to time events and how such learning is represented in memory, yet few studies have examined these assumptions either empirically or conceptually. For knowledge to accumulate, variation in theoretical models must be accompanied by selection of models and model ideas. To that end, we review two timing models: Scalar Expectancy Theory (SET), the dominant model in the field, and the Learning-to-Time (LeT) model, one of the few models that deals explicitly with learning. In the first part of this article, we describe how each model works in prototypical concurrent and retrospective timing tasks, identify their structural similarities, and classify their differences concerning temporal learning and memory. In the second part, we review a series of studies that examined these differences and conclude that both the memory structure postulated by SET and the state dynamics postulated by LeT are probably incorrect. In the third part, we propose a hybrid model that may improve on its parents. The hybrid model accounts for the typical findings in fixed-interval schedules, the peak procedure, mixed fixed-interval schedules, simple and double temporal bisection, and temporal generalization tasks. In the fourth and last part, we identify seven challenges that any timing model must meet.
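To make SET's clock-memory-decision chain concrete, here is a toy Python simulation of its behaviour on fixed-interval trials. The memory-noise and decision-threshold parameters are arbitrary illustrative choices; the property the sketch reproduces is the scalar (Weber-law) variability that gives the theory its name, namely a constant coefficient of variation across interval durations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulation of SET's clock-memory-decision chain on fixed-interval (FI)
# trials. The memory noise (gamma) and decision threshold (b) are arbitrary
# illustrative values. Multiplicative memory noise yields a constant
# coefficient of variation across durations (the scalar property).

def set_start_time(fi, gamma=0.15, b=0.25, step=0.05):
    """Time at which responding starts on one trial."""
    m = fi * rng.normal(1.0, gamma)          # remembered time of reinforcement
    t = step
    while t < 2.0 * fi:
        if abs(t - m) / m < b:               # ratio (relative-discrepancy) rule
            return t
        t += step
    return 2.0 * fi

for fi in (15.0, 30.0, 60.0):
    starts = np.array([set_start_time(fi) for _ in range(2000)])
    print(f"FI {fi:>4.0f} s: mean start = {starts.mean():5.1f} s, "
          f"CV = {starts.std() / starts.mean():.2f}")
```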
This tutorial presents an architecture that allows autonomous robots to generate behavior in joint action tasks. To interact efficiently with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding, and prediction. The proposed architecture is strongly inspired by our current understanding of the processing principles and neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework, we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in a different brain area. The architecture implements goal-directed behavior in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner's action goal. We validate the architecture in two experimental paradigms: (1) a joint search task; (2) a reproduction of an observed or inferred end state of a grasping-placing sequence. We also review some of the mathematical results on dynamic neural fields that are important for the implementation work.
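The coupling principle at the core of such an architecture can be illustrated with just two one-dimensional fields: an action-observation field whose suprathreshold output feeds, as input, a goal/decision field that selects the robot's own action. The Python sketch below uses assumed kernels, gains, and resting levels purely for demonstration; the full architecture couples many more fields.

```python
import numpy as np

# Two coupled 1-D dynamic neural fields: an action-observation field whose
# suprathreshold output feeds a goal/decision field for the robot's own action.
# Kernels, gains, and resting levels are assumed for demonstration only.

x = np.linspace(-5.0, 5.0, 101)
dx = x[1] - x[0]
d = x[:, None] - x[None, :]
W = 3.0 * np.exp(-d**2 / 2.0) - 0.8            # local excitation, global inhibition
f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * u))   # steep sigmoid output

h, tau, dt, gain = -2.0, 10.0, 1.0, 1.5
u_obs = np.full_like(x, h)                     # action-observation field
u_goal = np.full_like(x, h)                    # goal/decision field
obs_in = 4.0 * np.exp(-(x - 1.5)**2 / 0.5)     # evidence about the partner's movement

for _ in range(400):
    u_obs += (dt / tau) * (-u_obs + h + obs_in + (W @ f(u_obs)) * dx)
    # one-way coupling: the interpreted movement biases the robot's action selection
    u_goal += (dt / tau) * (-u_goal + h + gain * f(u_obs) + (W @ f(u_goal)) * dx)

print("inferred action goal parameter: x =", round(float(x[np.argmax(u_goal)]), 2))
```

In the full system, further fields (e.g. for memory and motor execution) would be coupled in the same way, each inheriting the peak-selection dynamics shown here.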