Gaze-following behaviour is considered crucial for social interactions, which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants from three age groups (18–25; 35–45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze shift of distracters from different age ranges (6–10; 18–25; 35–45; over 70). The results show that gaze following was modulated by distracter face age only in young adults. In particular, distracters over 70 years old exerted the least interference, whereas distracters in age ranges similar to the young adults' (18–25; 35–45) had the greatest effect, indicating a blurred own-age bias (OAB) confined to the young age group. These findings suggest that face age can modulate gaze following, but that this modulation could be due to factors other than the OAB alone (e.g., familiarity).
It is well known that the observation of graspable objects recruits the same motor representations involved in their actual manipulation. Recent evidence suggests that presenting nouns referring to graspable objects may exert similar effects. It is not yet clear, however, to what extent the modulation of the motor system during object observation overlaps with that related to noun processing. To address this issue, two behavioral experiments were carried out using a go-no go paradigm. Healthy participants were presented with photos and nouns of graspable and non-graspable natural objects; scrambled images and pseudowords derived from the original stimuli were also used. At go-signal onset (150 ms after stimulus presentation), participants had to press a key when the stimulus referred to a real object, using their right (Experiment 1) or left (Experiment 2) hand, and to refrain from responding when a scrambled image or a pseudoword was presented. Slower responses were found for both photos and nouns of graspable objects compared with non-graspable objects, independent of the responding hand. These findings suggest that processing seen graspable objects and written nouns referring to graspable objects modulates the motor system in a similar way.
Vision of the body is known to affect somatosensory perception (e.g. proprioception or tactile discrimination). However, it is unknown whether visual information about the size of one's own body can influence bodily action. We tested this by measuring the maximum grip aperture (MGA) of grasping movements while eight subjects viewed a real-size, enlarged, or shrunken image of their hand reaching to grasp a cylinder. In the enlarged-view condition, the MGA decreased relative to the real-size view, as if the grasping movement were actually executed with a physically larger hand, which would require a smaller grip aperture to grasp the cylinder. Interestingly, the MGA remained smaller even after visual feedback was removed. In contrast, no effect was found in the shrunken-view condition. This asymmetry may reflect the fact that enlargement of body parts is experienced more frequently than shrinkage, notably during normal growth. In conclusion, vision of the body can significantly and persistently affect the internal model of the body used for motor programming.
It is an open question whether the motor system is involved in the understanding of concrete nouns, as it is for concrete verbs. To clarify this issue, we carried out a behavioral experiment using a go-no go paradigm with early and delayed go-signal delivery. Italian nouns referring to concrete objects (hand-related or foot-related) and to abstract entities served as stimuli. Right-handed participants read the stimuli and responded when the presented word was concrete, using either the left or the right hand. At the early go-signal, slower right-hand responses were found for hand-related nouns compared with foot-related nouns; the opposite pattern was found for the left hand. These findings demonstrate an early lateralized modulation of the motor system during noun processing, most likely crucial for noun comprehension.
According to embodied cognition, language processing relies on the same neural structures involved when individuals experience the content of the language material. If so, processing nouns with a motor content presented in a second language should modulate the motor system just as it does in the mother tongue. We tested this hypothesis using a go-no go paradigm. Stimuli included English nouns and pictures depicting either graspable or non-graspable objects; pseudo-words and scrambled images served as controls. Italian participants, fluent speakers of English as a second language, had to respond when the stimulus was meaningful and refrain from responding when it was not. As predicted by embodiment, motor responses were selectively modulated by graspable items (images or nouns), as in a previous experiment in which nouns of the same category were presented in the native language.
Can viewing our own body modified in size reshape the bodily representation employed for interacting with the environment? We addressed this question by exposing participants to either an enlarged, a shrunken, or an unmodified view of their own hand in a reach-to-grasp task toward a target of fixed dimensions. When presented with a visually larger hand, participants modified the kinematics of their grasping movement by reducing maximum grip aperture. This adjustment carried over even when the hand was rendered invisible in subsequent trials, suggesting a stable modification of the bodily representation employed for the action. The effect was specific to the size of the grip aperture, leaving the other features of the reach-to-grasp movement unaffected. Reducing the visual size of the hand did not induce the opposite effect, although individual differences were found, possibly depending on the degree of each participant's reliance on visual input. A control experiment suggested that the effect exerted by vision of the enlarged hand could not be explained merely by a simple global visual rescaling. Overall, our results suggest that visual information about the size of the body is accessed by the body schema and is prioritized over proprioceptive input for motor control.
Some practical rules for combining TB colors are given to enhance the legibility of presentations, which is especially important for projected text.