Converging evidence shows that hand actions are controlled at the level of synergies rather than single muscles. One intriguing aspect of synergy-based action representation is that it may be intrinsically sparse, with the same synergies shared across several distinct types of hand action. Here, adopting a normative angle, we consider three hypotheses for the optimal control of hand actions: the sparse-combination hypothesis (SC), in which the mapping between synergies and actions is sparse, i.e., each action is implemented using a sparse combination of synergies; the sparse-elements hypothesis (SE), in which the synergy representation itself is sparse, i.e., the mapping between degrees of freedom (DoFs) and synergies is sparse; and the double-sparsity hypothesis (DS), a novel view combining SC and SE, in which both mappings are sparse, so that each action is implemented by a sparse combination of synergies (as in SC), each of which involves a limited set of DoFs (as in SE). We evaluate these hypotheses using hand kinematic data from six human subjects performing nine types of reach-to-grasp actions. Our results support DS, suggesting that the best action representation relies on a relatively large set of synergies, each involving a reduced number of DoFs, and that distinct sets of synergies may be recruited for distinct tasks.
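The double-sparsity idea can be illustrated with a small matrix-factorization sketch: kinematic data X (DoFs × samples) is factored into a synergy matrix D and a coefficient matrix C, with L1 penalties on both so that each synergy uses few DoFs (SE) and each sample uses few synergies (SC). This is a generic alternating proximal-gradient (ISTA-style) sketch under simplified assumptions, not the paper's actual fitting procedure; the function names and penalty weights are illustrative choices.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def double_sparse_decomposition(X, n_synergies, lam_d=0.5, lam_c=0.5,
                                n_iter=200, seed=0):
    """Factor X (DoFs x samples) into D @ C with L1 penalties on both
    D (each synergy involves few DoFs, as in SE) and C (each sample uses
    few synergies, as in SC).  Alternating ISTA steps; a didactic sketch,
    not the algorithm used in the paper."""
    rng = np.random.default_rng(seed)
    n_dof, n_samples = X.shape
    D = rng.standard_normal((n_dof, n_synergies))
    C = rng.standard_normal((n_synergies, n_samples))
    for _ in range(n_iter):
        # proximal gradient step on D (step 1/L, L = spectral norm of C C^T)
        L = np.linalg.norm(C @ C.T, 2) + 1e-8
        D = soft_threshold(D - ((D @ C - X) @ C.T) / L, lam_d / L)
        # proximal gradient step on C (step 1/L, L = spectral norm of D^T D)
        L = np.linalg.norm(D.T @ D, 2) + 1e-8
        C = soft_threshold(C - (D.T @ (D @ C - X)) / L, lam_c / L)
    return D, C
```

Tightening `lam_d` drives the factorization toward SE, tightening `lam_c` toward SC; double sparsity corresponds to keeping both penalties active.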
A challenging problem when studying a dynamical system is finding the interdependencies among its individual components. Several algorithms have been proposed to detect directed dynamical influences between time series; two of the most widely used are a model-free approach (transfer entropy) and a model-based one (Granger causality). Several pitfalls stem from the presence or absence of assumptions when modeling the relevant features of the data. We address these pitfalls using a neural network approach in which a model is built without any a priori assumptions; in this sense, the method can be seen as a bridge between model-free and model-based approaches. Our experiments show that the method can detect the correct dynamical information flows occurring in a system of time series. Additionally, we adopt a non-uniform embedding framework in which only the past states that actually help the prediction enter the model, improving predictions and reducing the risk of overfitting. This also yields a further improvement over traditional Granger causality approaches when redundant variables (i.e., variables sharing the same information about the future of the system) are involved. Finally, the trained neural networks can recognize dynamics in data sets entirely different from those used during training.
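The classical linear baseline that the neural approach generalizes can be sketched in a few lines: fit the target series from its own past (restricted model) and from its own past plus the candidate driver's past (full model), and compare residual variances. A positive log-ratio suggests a directed influence. This is the standard linear Granger statistic, offered only as a point of reference; the function name and default lag are illustrative.

```python
import numpy as np

def granger_gain(x, y, p=2):
    """Classical linear Granger statistic: how much do the past p values of y
    improve a least-squares prediction of x beyond x's own past?
    Returns log(var_restricted / var_full); > 0 suggests a y -> x influence.
    A linear baseline only; the neural method replaces these regressions."""
    n = len(x)
    # lagged design matrices: column k holds the series delayed by k+1 steps
    X_own = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    X_oth = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    target = x[p:]

    def resid_var(A):
        A = np.column_stack([np.ones(len(target)), A])  # add intercept
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ beta
        return r @ r / len(r)

    return np.log(resid_var(X_own) / resid_var(np.hstack([X_own, X_oth])))
```

A non-uniform embedding would replace the fixed lag set `1..p` with a greedy selection of only those lags that actually reduce the prediction error.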
Recent research shows that some brain areas perform more than one task, with switching times between tasks too short to be compatible with learning, and that parts of the brain are controlled by other parts, are “recycled”, or are used and reused for various purposes by other neural circuits across different task categories and cognitive domains. All of this is conducive to the notion of “programming in the brain”. In this paper, we describe a programmable neural architecture that is biologically plausible at the neural level, and we implement, test, and validate it in order to support the programming interpretation of the phenomenology mentioned above. A programmable neural network is a fixed-weight network endowed with auxiliary (programming) inputs; it behaves as any member of a specified class of neural networks when its programming inputs are fed a code of the weight matrix of a network in that class. The construction is based on “pulling out” the multiplication between synaptic weights and neuron outputs and having it performed in “software” by specialized fixed subnetworks with multiplicative responses. The construction has been tested for robustness with respect to various sources of noise. Theoretical underpinnings, analysis of related research, detailed construction schemes, and extensive testing results are given.
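The interface of such an architecture can be sketched abstractly: a fixed "interpreter" function receives programming inputs (here, simply a flattened weight matrix as the code) together with the data input, and reproduces the coded network's response. In the actual construction the products are computed by fixed multiplicative-response subnetworks; the elementwise products below merely stand in for them, so this is an illustrative sketch of the idea, not the paper's circuit.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def programmable_net(prog_inputs, x, n_out):
    """Fixed 'interpreter' network: prog_inputs is a code (here, the flattened
    weight matrix) of any single-layer sigmoid network in the class; the
    interpreter reproduces that network's response on input x.
    Each product W[i, j] * x[j] is what a fixed multiplier subnetwork would
    compute in the actual construction; here numpy does it directly."""
    W = np.asarray(prog_inputs).reshape(n_out, len(x))
    products = W * x                 # "software" multiplication
    return sigmoid(products.sum(axis=1))
```

Changing the behavior of the emulated network then requires no weight change at all, only a different code on the programming inputs, which is the point of the construction.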
The notion of synergy enables simplified descriptions of hand actions. It has been used with a number of different meanings, ranging from kinematic and dynamic synergies to postural and temporal postural synergies. However, relatively little is known about how representing an action by synergies might accommodate a hierarchical and multiple action representation, a key aspect of action representation as characterized by action theorists and cognitive neuroscientists. The aim of the present paper is therefore to investigate whether, and to what extent, a hierarchical and multiple action representation can be obtained by a synergy approach. To this purpose, we represent hand actions as linear combinations of temporal postural synergies (TPSs), under the assumption that TPSs have a tree-structured organization: a hand action representation can involve a TPS only if the ancestors of that synergy in the tree are themselves involved in the representation. The results show that this organization is sufficient to force a multiple representation of hand actions in terms of hierarchically organized synergies.
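The tree-structured constraint itself is simple to state in code: a synergy (tree node) may appear in an action's representation only if all of its ancestors do. The sketch below checks that constraint on a set of active synergy indices; it is a minimal illustration of the organizational assumption, not the fitting procedure used in the paper, and the data layout (a parent map) is an assumption of this sketch.

```python
def respects_tree(active, parent):
    """Check the tree-structured constraint on a hand-action representation:
    a TPS (node) may be active only if all of its ancestors are active.
    `active` is a set of synergy indices; `parent` maps each node to its
    parent index (None for the root)."""
    for node in active:
        p = parent[node]
        while p is not None:       # walk up to the root
            if p not in active:
                return False       # an ancestor is missing: constraint violated
            p = parent[p]
    return True
```

Under this constraint, any valid representation is a rooted subtree, which is exactly what forces coarse (ancestor) synergies to be shared across the finer (descendant) ones.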
Typical patterns of hand-joint covariation arising in the context of grasping actions enable simplified descriptions of these actions in terms of small sets of hand-joint parameters. The computational model of mirror mechanisms introduced here hypothesizes that mirror neurons are crucially involved in coding this simplified motor information and making it available to both action recognition and control processes. In particular, grasping action recognition is modeled as a visuo-motor loop that makes iterated use of mirror-coded motor information. In simulation experiments on the classification of reach-to-grasp actions, mirror-coded information was found to simplify the processing of visual inputs and to improve recognition results compared with procedures based solely on visual processing. The visuo-motor loop involved in action recognition is a distinctive feature of this model and is consistent with the direct matching hypothesis. Moreover, it sets the model apart from computational models that identify mirror neuron activity during action observation with the final outcome of processes flowing unidirectionally from sensory (usually visual) to motor systems.
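The flavor of such an iterated loop can be conveyed with a toy fusion scheme: the current belief over action classes selects motor expectations (standing in for mirror-coded information), which in turn bias the interpretation of the visual evidence on the next pass. Everything here is an assumption of the sketch: the `motor_pred` matrix, the multiplicative fusion, and the fixed loop count are illustrative stand-ins, not the paper's model.

```python
import numpy as np

def visuo_motor_classify(visual_ll, motor_pred, n_loops=5):
    """Toy iterated visuo-motor loop: belief over action classes is repeatedly
    fused with motor expectations derived from that belief.
    visual_ll  -- per-class visual likelihoods (1-D array)
    motor_pred -- class-by-class matrix of motor-based expectations
    Purely illustrative; stands in for the model's full simulation."""
    belief = np.ones(len(visual_ll)) / len(visual_ll)   # start uniform
    for _ in range(n_loops):
        motor_bias = motor_pred @ belief    # motor expectation given belief
        belief = visual_ll * motor_bias     # fuse with visual evidence
        belief /= belief.sum()              # renormalize to a distribution
    return belief
```

Even in this toy form, feeding motor expectations back into the visual interpretation sharpens the decision relative to a single purely visual pass, which is the qualitative effect the model reports.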