We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view this problem consists of finding an unknown objective function given the values at which it reaches its minimum. This problem is called the inverse optimization problem. Until now the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how well it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science. The problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective functions. To illustrate our method we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms.
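As an illustration of the forward side of such a problem, here is a minimal sketch of force sharing under a quadratic cost with linear terms, of the general form the abstract describes. The coefficient names (`k`, `w`), the two constraints (total force and total torque), and the numbers are all hypothetical; the closed-form solution follows from the Lagrange (KKT) conditions, which are linear for a quadratic cost.

```python
def share_forces(k, w, d, F, T):
    """Minimize sum(k[i]*f[i]**2 + w[i]*f[i]) subject to
    sum(f) = F and sum(d[i]*f[i]) = T (illustrative toy problem).

    The stationarity condition 2*k[i]*f[i] + w[i] = lam + mu*d[i]
    gives f[i] = (lam + mu*d[i] - w[i]) / (2*k[i]); substituting
    into the two constraints yields a 2x2 linear system in the
    Lagrange multipliers lam and mu.
    """
    s0 = sum(1.0 / (2 * ki) for ki in k)
    s1 = sum(di / (2 * ki) for di, ki in zip(d, k))
    s2 = sum(di * di / (2 * ki) for di, ki in zip(d, k))
    c0 = sum(wi / (2 * ki) for wi, ki in zip(w, k))
    c1 = sum(di * wi / (2 * ki) for di, wi, ki in zip(d, w, k))
    # Solve [s0 s1; s1 s2] [lam; mu] = [F + c0; T + c1] by Cramer's rule.
    det = s0 * s2 - s1 * s1
    lam = ((F + c0) * s2 - s1 * (T + c1)) / det
    mu = (s0 * (T + c1) - s1 * (F + c0)) / det
    return [(lam + mu * di - wi) / (2 * ki)
            for di, wi, ki in zip(d, w, k)]

# Four "fingers" at moment arms d sharing a 10 N load with zero torque:
forces = share_forces([1.0] * 4, [0.0] * 4, [-1.5, -0.5, 0.5, 1.5],
                      10.0, 0.0)
```

With symmetric coefficients and zero torque the load is shared equally; the inverse problem discussed in the abstract runs in the opposite direction, recovering `k` and `w` from observed force patterns.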
Summary: Our tactile perception of external objects depends on skin-object interactions. The mechanics of contact dictates the existence of fundamental spatiotemporal input features—contact initiation and cessation, slip, and rolling contact—that originate from the fact that solid objects do not interpenetrate. However, it is unknown whether these features are represented within the brain. We used a novel haptic interface to deliver such inputs to the glabrous skin of finger/digit pads and recorded from neurons of the cuneate nucleus (the brain’s first level of tactile processing) in the cat. Surprisingly, despite having similar receptive fields and response properties, each cuneate neuron responded to a unique combination of these inputs. Hence, distinct haptic input features are already encoded at subcortical processing stages. This organization maps skin-object interactions into rich representations provided to higher cortical levels and may call for a re-evaluation of our current understanding of the brain’s somatosensory systems.
Summary: Humans, many animals, and certain robotic hands have deformable fingertip pads [1, 2]. Deformable pads have the advantage of conforming to the objects that are being touched, ensuring a stable grasp for a large range of forces and shapes. Pad deformations change with finger displacements during touch. Pushing a finger against an external surface typically provokes an increase of the gross contact area [3], potentially providing a relative motion cue, a situation comparable to looming in vision [4]. The rate of increase of the area of contact also depends on the compliance of the object [5]. Because objects normally do not suddenly change compliance, participants may interpret an artificially induced variation in compliance, which coincides with a change in the gross contact area, as a change in finger displacement, and consequently they may misestimate their finger’s position relative to the touched object. To test this, we asked participants to compare the perceived displacements of their finger while contacting an object varying pseudo-randomly in compliance from trial to trial. Results indicate a bias in the perception of finger displacement induced by the change in compliance, hence in contact area, indicating that participants interpreted the altered cutaneous input as a cue to proprioception. This situation highlights the capacity of the brain to take advantage of knowledge of the mechanical properties of the body and of the external environment.
Hicheur, Halim, Alexander V. Terekhov, and Alain Berthoz. Intersegmental coordination during human locomotion: does planar covariation of elevation angles reflect central constraints? J Neurophysiol 96: 1406-1419, 2006. First published June 21, 2006; doi:10.1152/jn.00289.2006. To study intersegmental coordination in humans performing different locomotor tasks (backward, normal, fast walking, and running), we analyzed the spatiotemporal patterns of both elevation and joint angles bilaterally in the sagittal plane. In particular, we determined the origins of the planar covariation of foot, shank, and thigh elevation angles. This planar constraint is observable in the three-dimensional space defined by these three angles and corresponds to the plane described by the three time-varying elevation angle variables over each step cycle. Previous studies showed that this relation between elevation angles constrains lower limb coordination in various experimental situations. We demonstrate here that this planar covariation mainly arises from the strong correlation between foot and shank elevation angles, with thigh angle independently contributing to the pattern of intersegmental covariation. We conclude that the planar covariation of elevation angles does not reflect central constraints, as previously suggested. An alternative approach for analyzing the patterns of coordination of both elevation and joint (hip, knee, and ankle) angles is used, based on temporal cross-correlation and phase relationships between pairs of kinematic variables. We describe the changes in the pattern of intersegmental coordination that are associated with the changes of locomotor modes and locomotor speeds. We provide some evidence for a distinct control of thigh motion and discuss the respective contributions of passive mechanical factors and of active (arising from neural control) factors to the formation and the regulation of the locomotor pattern throughout the gait cycle.
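The planarity of elevation-angle covariation is conventionally quantified by principal component analysis of the three angle trajectories: if the smallest eigenvalue of their covariance matrix is near zero, the trajectory lies close to a plane in angle space. A minimal sketch on synthetic data (the signals below are illustrative sinusoids, not the paper's recordings):

```python
import numpy as np

# Synthetic elevation angles over one gait cycle (T samples x 3 angles).
# Real data would be measured foot, shank, and thigh elevation angles.
t = np.linspace(0, 2 * np.pi, 200)
foot = np.cos(t)
shank = 0.9 * np.cos(t) + 0.1 * np.sin(t)  # strongly correlated with foot
thigh = 0.3 * np.sin(t)
angles = np.column_stack([foot, shank, thigh])

# PCA on the mean-centered angles: the fraction of variance outside
# the third principal component measures how planar the loop is.
centered = angles - angles.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
planarity = 1.0 - eigvals[2] / eigvals.sum()      # ~1.0 means planar
```

Here all three signals are linear combinations of the same two harmonics, so the loop is exactly planar and `planarity` evaluates to 1.0; with real kinematic data the index falls slightly below 1.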
A common method to explore the somatosensory function of the brain is to relate skin stimuli to neurophysiological recordings. However, interaction with the skin involves complex mechanical effects. Variability in mechanically induced spike responses is likely to be due in part to mechanical variability of the transformation of stimuli into spiking patterns in the primary sensors located in the skin. This source of variability greatly hampers detailed investigations of the response of the brain to different types of mechanical stimuli. A novel stimulation technique designed to minimize the uncertainty in the strain distributions induced in the skin was applied to evoke responses in single neurons in the cat. We show that exposure to specific spatio-temporal stimuli induced highly reproducible spike responses in the cells of the cuneate nucleus, which represents the first stage of integration of peripheral inputs to the brain. Using precisely controlled spatio-temporal stimuli, we also show that cuneate neurons, as a whole, were selectively sensitive to the spatial and to the temporal aspects of the stimuli. We conclude that the present skin stimulation technique based on localized differential tractions greatly reduces response variability that is exogenous to the information processing of the brain and hence paves the way for substantially more detailed investigations of the brain's somatosensory system.
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem.
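The logic of inverse optimization can be illustrated on a deliberately tiny toy problem. The two-effector setup below, the coefficient names, and the numbers are all hypothetical, and the example is not one of the paper's two methods; it only shows the general idea that optimality conditions let observed optima constrain the unknown cost, and that the cost is recoverable only up to a common scale factor (one aspect of the uniqueness question).

```python
# Toy inverse optimization: cost k1*f1**2 + k2*f2**2, constraint
# f1 + f2 = F.  All names and values are illustrative.

def forward(k1, k2, F):
    # Minimizer of k1*f1**2 + k2*f2**2 subject to f1 + f2 = F.
    return F * k2 / (k1 + k2), F * k1 / (k1 + k2)

# Synthetic "experimental" observations generated with hidden costs.
hidden_k1, hidden_k2 = 1.0, 3.0
data = [forward(hidden_k1, hidden_k2, F) for F in (2.0, 5.0, 8.0)]

# Inverse step: the stationarity condition k1*f1 = k2*f2 implies
# k2/k1 = f1/f2 in every trial, so only the ratio of the cost
# coefficients, not their absolute scale, can be identified.
ratios = [f1 / f2 for f1, f2 in data]
k_ratio = sum(ratios) / len(ratios)
print(k_ratio)  # → 3.0, i.e. hidden_k2 / hidden_k1
```

With noisy observations the trial-by-trial ratios would scatter, which is why the influence of noise on the estimation methods matters.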
The design of robotic systems is largely dictated by our purely human intuition about how we perceive the world. This intuition has been proven incorrect with regard to a number of critical issues, such as visual change blindness. In order to develop truly autonomous robots, we must step away from this intuition and let robotic agents develop their own way of perceiving. The robot should start from scratch and gradually develop perceptual notions, under no prior assumptions, exclusively by looking into its sensorimotor experience and identifying repetitive patterns and invariants. One of the most fundamental perceptual notions, space, cannot be an exception to this requirement. In this paper we look into the prerequisites for the emergence of simplified spatial notions on the basis of a robot's sensorimotor flow. We show that the notion of space as environment-independent cannot be deduced solely from exteroceptive information, which is highly variable and is mainly determined by the contents of the environment. The environment-independent definition of space can be approached by looking into the functions that link the motor commands to changes in exteroceptive inputs. In a sufficiently rich environment, the kernels of these functions correspond uniquely to the spatial configuration of the agent's exteroceptors. We simulate a redundant robotic arm with a retina installed at its end-point and show how this agent can learn the configuration space of its retina. The resulting manifold has the topology of the Cartesian product of a plane and a circle, and corresponds to the planar position and orientation of the retina.
Although deep neural networks (DNNs) have demonstrated impressive results during the last decade, they remain highly specialized tools, which are trained, often from scratch, to solve each particular task. The human brain, in contrast, significantly re-uses existing capacities when learning to solve new tasks. In the current study we explore a block-modular architecture for DNNs, which allows parts of the existing network to be re-used to solve a new task without a decrease in performance when solving the original task. We show that networks with such architectures can outperform networks trained from scratch, or perform comparably, while having to learn nearly 10 times fewer weights than the networks trained from scratch.
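The core mechanism of block re-use can be sketched with a deliberately simplified toy, not the paper's actual architecture: existing blocks are frozen, and new-task blocks read their outputs, so training the new task cannot degrade the original one. All class and variable names below are illustrative.

```python
class Block:
    """Stand-in for a network block with a single scalar 'parameter'."""

    def __init__(self, weight, frozen=False):
        self.weight = weight    # placeholder for the block's parameters
        self.frozen = frozen

    def forward(self, x):
        return self.weight * x  # placeholder for the block's computation

    def update(self, delta):
        if not self.frozen:     # frozen blocks never change
            self.weight += delta

# Network trained on task A, then frozen so task A's behavior is fixed.
task_a_blocks = [Block(2.0, frozen=True), Block(0.5, frozen=True)]

# A new block for task B re-uses task A's intermediate features.
task_b_block = Block(1.0)

def task_a_output(x):
    for b in task_a_blocks:
        x = b.forward(x)
    return x

def task_b_output(x):
    h = task_a_blocks[0].forward(x)  # re-used frozen feature
    return task_b_block.forward(h)

before = task_a_output(3.0)
task_b_block.update(0.25)            # "training" task B...
for b in task_a_blocks:
    b.update(0.25)                   # ...cannot alter the frozen blocks
after = task_a_output(3.0)
```

Because only the new block's parameters are trainable, the number of weights learned for task B is a small fraction of a full network, which is the source of the savings reported in the abstract.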