The discovery of audiovisual mirror neurons in monkeys gave rise to the hypothesis that premotor areas are inherently involved not only when observing actions but also when listening to action-related sound. However, the whole-brain functional formation underlying such "action-listening" is not fully understood. In addition, previous studies in humans have focused mostly on relatively simple and overexperienced everyday actions, such as hand clapping or door knocking. Here we used functional magnetic resonance imaging to ask whether the human action-recognition system responds to sounds found in a more complex sequence of newly acquired actions. To address this, we chose a piece of music as a model set of acoustically presentable actions and trained non-musicians to play it by ear. We then monitored brain activity in subjects while they listened to the newly acquired piece. Although subjects listened to the music without performing any movements, activation was found bilaterally in the frontoparietal motor-related network (including Broca's area, the premotor region, the intraparietal sulcus, and the inferior parietal region), consistent with neural circuits that have been associated with action observation and may constitute the human mirror neuron system. Presentation of the practiced notes in a different order activated the network to a much lesser degree, whereas listening to equally familiar but motorically unknown music did not activate this network. These findings support the hypothesis of a "hearing-doing" system that is highly dependent on the individual's motor repertoire, gets established rapidly, and has Broca's area as its hub.
A task-dynamic approach to skilled movements of multi-degree-of-freedom effector systems is developed in which task-specific, relatively autonomous action units are specified within a functionally defined dynamical framework. Qualitative distinctions among tasks (e.g., the body maintaining a steady vertical posture or the hand reaching to a single spatial target versus cyclic vertical hopping or repetitive hand motion between two spatial targets) are captured by corresponding distinctions among dynamical topologies (e.g., point attractor versus limit cycle dynamics) defined at an abstract task space (or work space) level of description. The approach provides a unified account for several signature properties of skilled actions: trajectory shaping (e.g., hands move along approximately straight lines during unperturbed reaches) and immediate compensation (e.g., spontaneous adjustments occur over an entire effector system if a given part is disturbed en route to a goal). Both of these properties are viewed as implicit consequences of a task's underlying dynamics and, importantly, do not require explicit trajectory plans or replanning procedures. Two versions of task dynamics are derived (control law, network coupling) as possible methods of control and coordination in artificial (robotic, prosthetic) systems, and the network coupling version is explored as a biologically relevant control scheme.
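The contrast between point-attractor and limit-cycle task topologies can be made concrete numerically. The sketch below is not from the article; it integrates two generic one-dimensional second-order systems (a critically damped mass-spring for a discrete reach, a Rayleigh-type oscillator for a cyclic task) with arbitrary illustrative parameter values:

```python
import numpy as np

def simulate(accel, x0, v0, dt=1e-3, steps=20000):
    """Semi-implicit Euler integration of a 1-D second-order system."""
    x, v = x0, v0
    traj = np.empty(steps)
    for i in range(steps):
        v += accel(x, v) * dt
        x += v * dt
        traj[i] = x
    return traj

# Point-attractor dynamics: critically damped mass-spring drawn to one target.
K, TARGET = 10.0, 1.0
B = 2.0 * np.sqrt(K)  # critical damping coefficient
point = lambda x, v: -B * v - K * (x - TARGET)

# Limit-cycle dynamics: Rayleigh-type nonlinear damping sustains oscillation.
GAMMA, BETA = 1.0, 1.0
cycle = lambda x, v: GAMMA * v - BETA * v**3 - K * x

reach = simulate(point, x0=0.0, v0=0.0)   # settles at the single equilibrium
osc   = simulate(cycle, x0=0.01, v0=0.0)  # grows into a stable cycle
```

The first system settles at its equilibrium (the reach target); the second, started from a tiny perturbation, grows to an oscillation whose amplitude is set by the dynamics rather than the initial conditions. This is the kind of qualitative distinction in dynamical topology that the task-dynamic approach uses to characterize discrete versus cyclic tasks.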
How do space and time relate in rhythmical tasks that require the limbs to move singly or together in various modes of coordination? And what kind of minimal theoretical model could account for the observed data? Earlier findings for human cyclical movements were consistent with a nonlinear, limit cycle oscillator model (Kelso, Holt, Rubin, & Kugler, 1981), although no detailed modeling was performed at that time. In the present study, kinematic data were sampled at 200 samples/second, and a detailed analysis of movement amplitude, frequency, peak velocity, and relative phase (for the bimanual modes, in-phase and antiphase) was performed. As frequency was scaled from 1 to 6 Hz (in steps of 1 Hz) using a pacing metronome, amplitude dropped inversely and peak velocity increased. Within a frequency condition, the movement's amplitude scaled directly with its peak velocity. These diverse kinematic behaviors were modeled explicitly in terms of low-dimensional (nonlinear) dissipative dynamics, with linear stiffness as the only control parameter. Data and model are shown to compare favorably. The abstract, dynamical model offers a unified treatment of a number of fundamental aspects of movement coordination and control. How do space and time relate in rhythmical tasks that require the limbs to move singly or together in various modes of coordination? And what kind of minimal theoretical model could account for the observed data?
The present article addresses these fundamental questions that are of longstanding interest to experimental psychology and movement science (e.g., von Holst, 1937/1973; Scripture, 1899; Stetson & Bouman, 1935). It is well known, for example, that discrete and repetitive movements of different amplitude vary systematically in movement duration (provided accuracy requirements are held constant, e.g., Craik, 1947a, 1947b). This and related facts were later formalized into Fitts's Law (1954), a relation among movement time, movement amplitude, and target accuracy, whose underpinnings have been extensively studied (and debated) quite recently (e.g.
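The kinematic pattern reported in the abstract above, amplitude falling and peak velocity rising as cycling frequency is scaled, is the signature of a "hybrid" oscillator combining van der Pol and Rayleigh damping terms with linear stiffness as the lone control parameter. The sketch below reproduces that pattern qualitatively; the parameter values are illustrative assumptions, not fitted values from the study:

```python
import math

def hybrid_amp_peakvel(freq_hz, b=1.0, alpha=1.0, beta=0.01,
                       dt=1e-4, t_total=15.0, t_measure=3.0):
    """Steady-state amplitude and peak velocity of a hybrid oscillator:
    x'' = b*x' - alpha*x^2*x' - beta*x'^3 - k*x, with k = (2*pi*f)^2."""
    k = (2.0 * math.pi * freq_hz) ** 2   # linear stiffness sets the frequency
    x, v = 0.1, 0.0
    steps = int(t_total / dt)
    measure_from = steps - int(t_measure / dt)
    amp = peak_v = 0.0
    for i in range(steps):
        # escapement (b*v) against van der Pol (x^2*v) and Rayleigh (v^3) damping
        a = b * v - alpha * x * x * v - beta * v ** 3 - k * x
        v += a * dt          # semi-implicit Euler keeps the cycle stable
        x += v * dt
        if i >= measure_from:   # measure only after transients die out
            amp = max(amp, abs(x))
            peak_v = max(peak_v, abs(v))
    return amp, peak_v

results = [hybrid_amp_peakvel(f) for f in range(1, 7)]   # 1-6 Hz sweep
amps = [a for a, _ in results]
peaks = [p for _, p in results]
```

Sweeping stiffness so that frequency runs from 1 to 6 Hz, amplitude decreases monotonically while peak velocity increases, matching the reported data. Within a given frequency, peak velocity also scales directly with amplitude (roughly V = omega * A for a near-harmonic cycle), in line with the within-condition relation described above.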
In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been patterns observed in speech errors -"slips of the tongue". The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units -gestures -in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action.
Language can be viewed as a structuring of cognitive units that can be transmitted among individuals for the purpose of communicating information. Cognitive units stand in specific and systematic relationships with one another, and linguists are interested in the characterization of these units and the nature of these relationships. Both can be examined at various levels of granularity. It has long been observed that languages exhibit distinct patterning of units in syntax and in phonology. This distinction, a universal characteristic of language, is termed duality of patterning (Hockett, 1960). Syntax refers to the structuring of words in sequence via hierarchical organization, where words are meaningful units belonging to an infinitely expandable set. But words also are composed of structured cognitive units. Phonology structures a small, closed set of recombinable, non-meaningful units that compose words (or signs, in the case of signed languages). It is precisely the use of a set of non-meaningful arbitrary discrete units that allows word creation to be productive. In this chapter we outline a proposal that views the evolution of syntax and of phonology as arising from different sources and ultimately converging in a symbiotic relationship. Duality of patterning forms the intellectual basis for this proposal. Grasp and other manual gestures in early hominids are, as Arbib (Chapter 1, this volume) notes, well suited to provide a link from the iconic to the symbolic. Critically, the iconic aspects of manual gestures lend them a meaningful aspect that is critical to evolution of a system of symbolic units. However, we will argue that, given duality of patterning, phonological evolution crucially requires the emergence of effectively non-meaningful combinatorial units.
We suggest that vocal tract action gestures are well suited to play a direct role in phonological evolution because, as argued by Studdert-Kennedy (2002a), they are