Working together feels easier with some people than with others. We asked participants to perform a visual search task either alone or with a partner while simultaneously measuring each participant's EEG. Local and inter-brain phase synchronization were generally higher when participants attended to the visual search task jointly than when they attended to the same task individually. Some dyads searched the visual display more efficiently and made faster decisions when working as a team, whereas others did not benefit from working together. These inter-team differences in behavioral performance gain in the visual search task were reliably associated with inter-team differences in local and inter-brain phase synchronization. Our results suggest that phase synchronization constitutes a neural correlate of social facilitation and may help to explain why some teams perform better than others.
Research in a number of related fields has recently begun to focus on the perceptual, cognitive, and motor workings of cooperative behavior. There appears to be enough coherence in these efforts to refer to the study of the mechanisms underlying human cooperative behavior as the field of joint-action (Knoblich, Butterfill, & Sebanz, 2011; Sebanz, Bekkering, & Knoblich, 2006). Yet, the development of theory in this field has not kept pace with the proliferation of research findings. We propose a hierarchical predictive framework for the study of joint-action that we call the predictive joint-action model (PJAM). The presentation of this theoretical framework is organized into three sections. In the first section, we summarize hierarchical predictive principles and discuss their application to joint-action. In the second section, we juxtapose PJAM's assumptions with empirical evidence from the current literature on joint-action. In the third section, we discuss the overall success of the hierarchical predictive approach to account for the burgeoning empirical literature on joint-action research. Finally, we consider the model's capacity to generate novel and testable hypotheses about joint-action. This is done with the larger goal of uncovering the empirical and theoretical pieces that are still missing in a comprehensive understanding of joint action.
Studies of social perception report acute human sensitivity to where another's attention is aimed. Here we ask whether humans are also sensitive to how the other's attention is deployed. Observers viewed videos of actors reaching to targets without knowing that those actors were sometimes choosing to reach to one of the targets (endogenous control) and sometimes being directed to reach to one of the targets (exogenous control). Experiments 1 and 2 showed that observers could respond more rapidly when actors chose where to reach, yet were at chance when guessing whether the reach was chosen or directed. This implicit sensitivity to attention control held when either the actors' faces or their limbs were masked (experiment 3) and when only the earliest portion of the actors' movements was visible (experiment 4). Individual differences in sensitivity to choice correlated with an independent measure of social aptitude. We conclude that humans are sensitive to attention control through an implicit kinematic process linked to empathy. The findings support the hypothesis that social cognition involves the predictive modeling of others' attentional states.

Keywords: social perception | attention | action prediction | autism spectrum | action observation
Although behavioral therapies are effective for posttraumatic stress disorder (PTSD), access for patients is limited. Attention-bias modification (ABM), a cognitive-training intervention designed to reduce attention bias for threat, can be broadly disseminated using technology. We remotely tested an ABM mobile app for PTSD. Participants (N = 689) were randomly assigned to personalized ABM, nonpersonalized ABM, or placebo training. ABM was a modified dot-probe paradigm delivered daily for 12 sessions. Personalized ABM included words selected using a recommender algorithm. Placebo included only neutral words. Primary outcomes (PTSD and anxiety) and secondary outcomes (depression and PTSD clusters) were collected at baseline, after training, and at 5-week follow-up. Mechanisms assessed during treatment were attention bias and self-reported threat sensitivity. No group differences emerged on outcomes or attention bias. Nonpersonalized ABM showed greater declines in self-reported threat sensitivity than placebo (p = .044). This study constitutes the largest mobile-based trial of ABM to date. Findings do not support the effectiveness of mobile ABM for PTSD.
The exploration of a familiar object by hand can benefit its identification by eye. What is unclear is how much this multisensory cross-talk reflects shared shape representations versus generic semantic associations. Here, we compare several simultaneous priming conditions to isolate the potential contributions of shape and semantics in haptic-to-visual priming. Participants explored a familiar object manually (haptic prime) while trying to name a visual object that was gradually revealed in increments of spatial resolution. Shape priming was isolated in a comparison of identity priming (shared semantic category and shape) with category priming (same category, but different shapes). Semantic priming was indexed by the comparison of category priming with unrelated haptic primes. The results showed that both factors mediated priming, but that their relative weights depended on the reliability of the visual information. Semantic priming dominated in Experiment 1, when participants were free to use high-resolution visual information, but shape priming played a stronger role in Experiment 2, when participants were forced to respond with less reliable visual information. These results support the structural description hypothesis of haptic-visual priming (Reales and Ballesteros in J Exp Psychol Learn Mem Cogn 25:644-663, 1999) and are also consistent with the optimal integration theory (Ernst and Banks in Nature 415:429-433, 2002), which proposes a close coupling between the reliability of sensory signals and their weight in decision making.
Skilled jazz musicians are adept at coordinating their musical actions to produce an auditory outcome that is more than the sum of its parts. Whereas previous studies have investigated the cognitive mechanisms supporting ensemble music production, the present study focuses on the perception of this collaboration. The stimuli in this study were recorded duets of improvised New Orleans jazz standards, varying in the opportunity musicians were given for collaboration, from fully live performances (2-way feedback), to studio dubbed performances (1-way feedback), to studio mixes (no feedback). Participants listened to these duets in a random order and either made an explicit judgment of whether or not they were live recordings (Experiment 1) or rated the recordings on four dimensions of musicality (Experiment 2). Participants in both experiments were also categorized according to their social aptitude (Autism Quotient) and according to their musical training (Musical Expertise Questionnaire). The results showed that many listeners are sensitive to musical collaboration in this setting, and among listeners with the least musical training this sensitivity was linked to their social aptitude. These findings demonstrate that the human ability to assess the quality of a social interaction (Blakemore & Decety, 2001) is present even when the interaction is auditory, nonverbal, and in a medium in which the listeners themselves are not skilled. They also imply an important link between social aptitude and the ability to perceive the quality of a musical interaction (Phillips-Silver & Keller, 2012).
Previous research suggests that predictive mechanisms are essential in perceiving social interactions. However, these studies did not isolate action prediction (a priori expectations about how partners in an interaction react to one another) from action integration (a posteriori processing of both partners' actions). This study investigated action prediction during social interactions while controlling for integration confounds. Twenty participants viewed 3D animations depicting an action–reaction interaction between two actors. At the start of each action–reaction interaction, one actor performs a social action. Immediately after, instead of presenting the other actor's reaction, a black screen covers the animation for a short time (occlusion duration) until a still frame depicting a precise moment of the reaction is shown (reaction frame). The moment shown in the reaction frame is either temporally aligned with the occlusion duration or deviates by 150 ms or 300 ms. Fifty percent of the action–reaction trials were semantically congruent, and the remaining were incongruent, e.g., one actor offers to shake hands, and the other reciprocally shakes their hand (congruent action–reaction) versus one actor offers to shake hands, and the other leans down (incongruent action–reaction). Participants made fast congruency judgments. We hypothesized that judging the congruency of action–reaction sequences is aided by temporal predictions. The findings supported this hypothesis; linear speed-accuracy scores showed that congruency judgments were facilitated when the occlusion duration and reaction frame were temporally aligned, compared to 300-ms deviations, suggesting that observers internally simulate the temporal unfolding of an observed social interaction. Furthermore, we explored the link between autistic traits and sensitivity to these temporal deviations.
Overall, the study offers new evidence of prediction mechanisms underpinning the perception of social interactions in isolation from action integration confounds.