Sensorimotor models suggest that understanding the emotional content of a face recruits a simulation process in which the viewer partially reproduces the facial expression in their own sensorimotor system. An important prediction of these models is that disrupting simulation should make emotion recognition more difficult. Here we used electroencephalography (EEG) and facial electromyography (EMG) to investigate how interfering with sensorimotor signals from the face influences the real-time processing of emotional faces. EEG and EMG were recorded as healthy adults viewed emotional faces and rated their valence. During control blocks, participants held a conjoined pair of chopsticks loosely between their lips. During interference blocks, participants held the chopsticks horizontally between their teeth and lips to generate motor noise on the lower part of the face, as confirmed by EMG at the zygomaticus. Analysis of the EEG indicated that faces expressing happiness or disgust (lower-face expressions) elicited a larger-amplitude N400 when presented during interference blocks than during control blocks, suggesting that the interference increased semantic retrieval demands. The selective impact of facial motor interference on the brain response to lower-face expressions supports sensorimotor models of emotion understanding.
The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., "Art gave you the pen" describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero.
There is a lively and theoretically important debate about whether, how, and when embodiment contributes to language comprehension. This study addressed these questions by testing how interference with facial action impacts the brain's real-time response to emotional language. Participants read sentences about positive and negative events (e.g., "She reached inside the pocket of her coat from last winter and found some (cash/bugs) inside it.") while event-related potentials (ERPs) were recorded. Facial action was manipulated within participants by having them hold chopsticks in their mouths in a position that either allowed or blocked smiling, as confirmed by EMG. Blocking smiling did not influence ERPs to the valenced words (e.g., cash, bugs) but did influence ERPs to the final words of sentences describing positive events. The results show that affectively positive sentences can evoke smiles and that such facial action can facilitate the semantic processing indexed by the N400 component. Overall, this study offers causal evidence that embodiment impacts some aspects of high-level comprehension, presumably involving the construction of the situation model.
Despite the rapid advance of additive manufacturing technologies in recent years, methods for fully encasing objects in thick, multilayer features remain undeveloped. This limitation can be overcome by printing layers conformally about an object's natural geometry rather than with the planar layering used in current methods. To this end, we derive two new methods for generating uniformly distributed layers between an initial and a desired geometry in both two and three dimensions. The first method is based on variable offset curves and can only be applied to convex or star-convex geometries. The second method is based on manipulated solutions to Laplace's equation and is applicable to all geometries. Using each method, we present examples of layer generation for several geometries of varying convexity. Results are compared, and the respective advantages and limitations of each method are discussed.
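To make the second method concrete, here is a minimal Python/NumPy sketch of the underlying idea as we read it from the abstract: solve Laplace's equation in the gap between the two geometries, with the potential fixed at 0 on the initial boundary and 1 on the desired one, and take equipotential contours as candidate layers. The function name, grid discretization, and simple Jacobi solver are our own illustrative assumptions; the authors' "manipulation" of the raw solution (presumably what redistributes the equipotentials into uniformly spaced layers) is not reproduced here.

```python
import numpy as np

def laplace_layers(inner, outer, n_layers, n_iter=5000):
    """Hypothetical sketch: conformal layers as Laplace equipotentials.

    inner / outer: boolean grids, True inside the initial geometry and
    the desired final geometry, respectively (inner must lie within
    outer, and outer should not touch the grid border).
    Returns the potential field phi and the n_layers potential values
    whose iso-contours trace the intermediate layers.
    """
    phi = np.zeros(outer.shape)        # phi = 0 inside the inner geometry
    phi[~outer] = 1.0                  # phi = 1 outside the desired geometry
    gap = outer & ~inner               # region to be filled with layers
    phi[gap] = 0.5                     # arbitrary initial guess in the gap
    for _ in range(n_iter):            # Jacobi relaxation of Laplace's equation
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(gap, avg, phi)  # update the gap; boundaries stay fixed
    # equally spaced potentials between the two boundary values
    levels = np.linspace(0.0, 1.0, n_layers + 2)[1:-1]
    return phi, levels
```

Extracting the layers is then a contouring step (e.g., matplotlib's plt.contour(phi, levels=levels)). Note that raw equipotentials bunch up near high-curvature regions, which is why some post hoc manipulation of the solution, as the abstract indicates, is needed to obtain uniformly distributed layers.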