When evaluating the properties of a set of elements in a natural environment, an increase in numerosity unavoidably corresponds to an increase in the physical properties of the set: five apples differ from ten apples not only in numerosity, but also in visual features such as volume, density, and surface. Since nonsymbolic number processing is typically investigated through the presentation of arrays of elements, it is essential to keep track of the visual features characterizing the stimuli. A plethora of solutions have been proposed to address this complex methodological issue; yet there is no agreed-upon standard for how to measure and control for visual features. Here we present the "customized ultraprecise standardization-oriented multipurpose" (CUSTOM) algorithm for generating nonsymbolic number stimuli. It is characterized by several core features: the absence of fixed parameters or rules (apart from geometrical constraints) lets the user freely manipulate the visual features of the stimuli; control over the visual features of the stimuli is extremely accurate; no modification is required in order to perform different types of manipulation; and users can re-create any set of stimuli described in previous experiments on numerical cognition, for a wide variety of tasks, including comparison, estimation, habituation, and match-to-sample. The CUSTOM algorithm could represent an asset in the field of numerical cognition, as a versatile instrument for effectively generating high-precision visual stimuli within an unbiased theoretical framework.
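The core problem such a generator solves can be sketched in a few lines: place dots subject only to geometric constraints while holding a chosen visual feature (here, total surface area) exactly fixed. The sketch below is illustrative only; the function name, parameters, and equal-radius simplification are assumptions, not the actual CUSTOM implementation.

```python
import math
import random

def generate_dot_array(n, total_area, field_size=400.0, min_gap=2.0,
                       max_tries=10000, rng=None):
    """Place n non-overlapping dots whose summed surface area equals
    total_area. Equal-sized dots are used for simplicity; a full
    generator could also draw heterogeneous radii under the same
    area constraint. Returns a list of (x, y, r) tuples.
    Names and parameters here are hypothetical, not CUSTOM's API."""
    rng = rng or random.Random()
    # Choose the radius so that n * pi * r^2 == total_area.
    r = math.sqrt(total_area / (n * math.pi))
    dots = []
    for _ in range(max_tries):
        if len(dots) == n:
            break
        x = rng.uniform(r, field_size - r)
        y = rng.uniform(r, field_size - r)
        # Geometric constraint: dots must not overlap (plus a small gap).
        if all(math.hypot(x - dx, y - dy) >= 2 * r + min_gap
               for dx, dy, _ in dots):
            dots.append((x, y, r))
    if len(dots) < n:
        raise RuntimeError("could not place all dots; enlarge the field or reduce n")
    return dots
```

Holding total area constant while numerosity varies is only one of the manipulations the abstract mentions; density or convex hull could be fixed analogously by adjusting the sampling region instead of the radii.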
During social interaction, actions and words may be expressed in different ways, for example, gently or rudely. A handshake can be gentle or vigorous and, similarly, tone of voice can be pleasant or rude. These aspects of social communication were named vitality forms by Daniel Stern. Vitality forms represent how an action is performed and characterize all human interactions. In spite of their importance in social life, to date it is not clear whether the vitality forms expressed by an agent can influence the execution of a subsequent action performed by the receiver. To shed light on this matter, we carried out a kinematic study to assess whether and how the visual and auditory properties of vitality forms expressed by others influenced the motor response of participants. In particular, participants were presented with video clips showing a male and a female actor performing a "giving request" (give me) or a "taking request" (take it) in visual, auditory, and mixed (visual and auditory) modalities. Most importantly, requests were expressed with rude or gentle vitality forms. After the actor's request, participants performed a subsequent action. Results showed that the vitality forms expressed by the actors influenced the kinematic parameters of the participants' actions regardless of the modality by which they were conveyed.
Spoken language is an innate ability of the human being and represents the most widespread mode of social communication. The ability to share concepts, intentions, and feelings, and also to respond to what others are feeling or saying, is crucial during social interactions. A growing body of evidence suggests that language evolved from manual gestures, gradually incorporating motor acts with vocal elements. In this evolutionary context, the human mirror mechanism (MM) would permit the passage from "doing something" to "communicating it to someone else." In this perspective, the MM would mediate semantic processes, being involved both in the execution and in the understanding of messages expressed by words or gestures. Thus, the recognition of action-related words would activate somatosensory regions, reflecting the semantic grounding of these symbols in action information. Here, the role of the sensorimotor cortex, and of the human MM in general, in both language perception and understanding is addressed, focusing on recent studies on the integration between symbolic gestures and speech. We conclude by documenting evidence that the MM also codes the emotional aspects conveyed by manual, facial, and body signals during communication, and that these act in concert with language to modulate the comprehension of others' messages and behavior, in line with an "embodied" and integrated view of social interaction.
Aim: Do the emotional content and meaning of sentences affect the kinematics of successive motor sequences?

Materials and Methods: Participants observed video clips of an actor pronouncing sentences expressing positive or negative emotions and meanings (related to happiness or anger in Experiment 1, and to food admiration or food disgust in Experiment 2). Then, they reached to grasp a sugar lump and placed it on the actor's mouth. Participants acted in response to sentences whose content could convey (1) emotion (i.e., facial expression and prosody) and meaning, (2) meaning alone, or (3) emotion alone. Within each condition, the kinematic effects of sentences expressing positive and negative emotions were compared.

Results: In Experiment 1, the kinematics did not vary between positive and negative sentences when the content was expressed either by both emotion and meaning or by meaning alone. In contrast, in the case of emotion alone, sentences with positive valence made the approach to the conspecific faster. In Experiment 2, the valence of emotions (positive for food admiration and negative for food disgust) affected the kinematics of both grasp and reach, independently of the modality.

Discussion: The lack of an effect of meaning in Experiment 1 could be due to the weak relevance of sentence meaning with respect to the goal of the motor sequence (feeding). Experiment 2 demonstrated that this was indeed the case: when the meaning and the consequent emotion were related to the sequence goal, they affected the kinematics. In contrast, emotion alone activated approach or avoidance toward the actor according to its positive or negative valence. The data suggest a behavioral dissociation between the effects of emotion and meaning.