Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.
The neural mechanism of leader emergence is not well understood. This study investigated (i) whether interpersonal neural synchronization (INS) plays an important role in leader emergence, and (ii) whether INS and leader emergence are associated with the frequency or the quality of communication. Eleven three-member groups were asked to perform a leaderless group discussion (LGD) task, and their brain activities were recorded via functional near-infrared spectroscopy (fNIRS)-based hyperscanning. Video recordings of the discussions were coded for leadership and communication. Results showed that the INS for the leader-follower (LF) pairs was higher than that for the follower-follower (FF) pairs in the left temporo-parietal junction (TPJ), an area important for social mentalizing. Although communication frequency was higher for the LF pairs than for the FF pairs, the frequencies of leader-initiated and follower-initiated communication did not differ significantly. Moreover, INS for the LF pairs was significantly higher during leader-initiated communication than during follower-initiated communication. In addition, INS for the LF pairs during leader-initiated communication was significantly correlated with the leaders' communication skills and competence, but not with their communication frequency. Finally, leadership could be successfully predicted based on INS as well as communication frequency early in the LGD (within the first half minute of the task). In sum, this study found that leader emergence was characterized by high-level neural synchronization between the leader and followers, and that the quality, rather than the frequency, of communication was associated with synchronization. These results suggest that leaders emerge because they are able to say the right things at the right time.

Leadership is a ubiquitous feature of all social species, including human and nonhuman animals (1, 2). However, the neural mechanism of leader emergence is still not well understood.
Evolutionary theories suggest that, whereas both human and nonhuman animals have evolved tendencies to compete for dominance over access to survival-related resources (3-5), human leaders also play an important role in maintaining group cohesion (6). Thus, human leaders need to take into account not only their own needs but also the needs of their followers to facilitate cooperation among group members (7-9). Interestingly, recent imaging evidence indicates that the neural activities of two individuals are more synchronized when they perform a cooperative rather than a competitive task (10). Moreover, the level of interpersonal neural synchronization (INS) is closely associated with the level of understanding between partners (11). It is unknown, however, whether INS is involved in leader emergence. Previous evidence has shown that communication plays an important role in increasing INS (12). However, the role of communication in leader emergence has been extensively debated. On the one hand, the so-called "babble" hypothesis postulates that the most t...
The neural mechanism for selectively tuning in to a target speaker while tuning out the others in a multi-speaker situation (i.e., the cocktail-party effect) remains elusive. Here we addressed this issue by measuring brain activity simultaneously from a listener and from multiple speakers while they were involved in naturalistic conversations. Results consistently show selectively enhanced interpersonal neural synchronization (INS) between the listener and the attended speaker at left temporal–parietal junction, compared with that between the listener and the unattended speaker across different multi-speaker situations. Moreover, INS increases significantly prior to the occurrence of verbal responses, and even when the listener’s brain activity precedes that of the speaker. The INS increase is independent of brain-to-speech synchronization in both the anatomical location and frequency range. These findings suggest that INS underlies the selective process in a multi-speaker situation through neural predictions at the content level but not the sensory level of speech.
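The abstracts above quantify interpersonal neural synchronization (INS) between simultaneously recorded brains. As a rough illustration of the idea (not the papers' actual pipelines, which typically use wavelet transform coherence on fNIRS or MEG data), a minimal sketch on synthetic data might measure magnitude-squared coherence between a "listener" and a "speaker" channel that share a slow common component; all signals and parameters below are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import coherence

# Synthetic stand-in for two hyperscanning channels: both participants'
# signals contain a shared slow (~0.08 Hz) component plus independent noise.
rng = np.random.default_rng(0)
fs = 10.0                        # assumed fNIRS-like sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)    # five minutes of data

shared = np.sin(2 * np.pi * 0.08 * t)
listener = shared + 0.5 * rng.standard_normal(t.size)
speaker = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence as a simple INS proxy; values lie in [0, 1].
f, cxy = coherence(listener, speaker, fs=fs, nperseg=256)

# Coherence peaks near 0.08 Hz, where the shared drive lives.
ins_band = cxy[(f > 0.05) & (f < 0.12)].mean()
```

Real analyses compare such coherence values across conditions (e.g., attended vs. unattended speaker, leader-follower vs. follower-follower pairs) and against permutation baselines.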
Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
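The noise-vocoding manipulation used in the study above degrades speech by keeping only coarse spectral-envelope information. A minimal sketch of the general technique (band edges, filter orders, and the test signal are illustrative assumptions, not the study's exact parameters) splits the signal into four frequency bands, extracts each band's amplitude envelope, and uses it to modulate band-limited noise:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, edges=(100, 500, 1500, 3500, 7000)):
    """4-band noise vocoder sketch: envelope-modulated band-limited noise."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                  # band amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(x.size))
        out += env * carrier                         # envelope-modulated noise
    return out

# Toy amplitude-modulated tone standing in for a speech signal.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech_like, fs)
```

Because only the per-band envelopes survive, the vocoded output preserves the slow temporal modulations that carry intelligibility cues while the fine spectral structure is replaced by noise.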
Pre-stimulus alpha (8–12 Hz) and beta (16–20 Hz) oscillations have been frequently linked to the prediction of upcoming sensory input. Do these frequency bands serve as a neural marker of linguistic prediction as well? We hypothesized that if pre-stimulus alpha and beta oscillations index language predictions, their power should monotonically relate to the degree of predictability of incoming words based on past context. We expected that the more predictable the last word of a sentence, the stronger the alpha and beta power modulation. To test this, we measured neural responses with magnetoencephalography of healthy individuals during exposure to a set of linguistically matched sentences featuring three levels of sentence context constraint (high, medium and low constraint). We observed fluctuations in alpha and beta power before last word onset, and modulations in M400 amplitude after last word onset. The M400 amplitude was monotonically related to the degree of context constraint, with a high constraining context resulting in the strongest amplitude decrease. In contrast, pre-stimulus alpha and beta power decreased more strongly for intermediate constraints, followed by high and low constraints. Therefore, unlike the M400, pre-stimulus alpha and beta dynamics were not indexing the degree of word predictability from sentence context.
Within the sensory domain, alpha/beta oscillations have been frequently linked to the prediction of upcoming sensory input. Here, we investigated whether oscillations in these frequency bands serve as a neural marker of linguistic input prediction as well. Specifically, we hypothesized that if alpha/beta oscillations do index language prediction, their power should modulate during sentence processing, indicating stronger engagement of the underlying neuronal populations involved in linguistic prediction. Importantly, the modulation should relate monotonically to the degree of predictability of incoming words based on past context. Specifically, we expected that the more predictable the last word of a sentence, the stronger the alpha/beta power modulation. To test this, we measured neural responses with magnetoencephalography in healthy individuals (of either sex) during exposure to a set of linguistically matched sentences featuring three distinct levels of sentence context constraint (high, medium and low constraint). We observed fluctuations in alpha/beta power before last-word onset, as well as modulations in M400 amplitude after last-word onset that are known to relate gradually to semantic predictability. In line with previous findings, the M400 amplitude was monotonically related to the degree of context constraint, with a highly constraining context resulting in the strongest amplitude decrease. In contrast, alpha/beta power was non-monotonically related to context constraint: the strongest power decrease was observed for intermediate constraints, followed by high and low constraints. While the monotonic M400 amplitude modulation fits within a framework of prediction, the non-monotonic oscillatory results are not easily reconciled with this idea.

SIGNIFICANCE STATEMENT Neural activity in the alpha (8-12 Hz) and beta (16-20 Hz) frequency ranges has been related to the prediction of upcoming sensory input.
It remains debated whether these frequency bands relate to language prediction as well. In this magnetoencephalography study, we recorded alpha/beta oscillatory activity while participants were presented with sentences whose endings had varying degrees of predictability based on past linguistic information. Our results show that alpha/beta power modulations were non-monotonically related to the degree of linguistic predictability: the strongest modulation of alpha/beta power was observed for intermediate levels of linguistic predictability. Together, the results emphasize that alpha/beta oscillations cannot be directly linked to predictability in language, but may instead relate to attention or control operations during language processing.
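A common way to quantify the pre-stimulus alpha/beta power that these studies analyze is a band-pass filter followed by a Hilbert envelope, averaged over a window preceding word onset. The sketch below is an illustrative stand-in on simulated data (the sampling rate, filter order, window, and signal are all assumptions), not the papers' MEG pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(x, fs, lo, hi):
    """Instantaneous power in a band via band-pass filter + Hilbert envelope."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))   # instantaneous amplitude
    return env ** 2

# Simulated 2 s epoch with a strong 10 Hz (alpha) component; word onset at 1.5 s.
fs = 250
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

# Average power in a 500 ms pre-stimulus window, per band.
pre = (t >= 1.0) & (t < 1.5)
alpha = band_power(signal, fs, 8, 12)[pre].mean()
beta = band_power(signal, fs, 16, 20)[pre].mean()
```

In the studies above, such per-trial pre-stimulus power estimates would then be compared across the high, medium, and low constraint conditions.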
During listening, brain activity tracks the rhythmic structure of speech signals. Here, we directly dissociated the contribution of neural tracking to the processing of speech acoustic cues from that related to linguistic processing. We examined the neural changes associated with the comprehension of noise-vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a three-phase training paradigm: (1) pre-training, in which NV stimuli were barely comprehended; (2) training, with exposure to the original, clear version of each speech stimulus; and (3) post-training, in which the same stimuli gained intelligibility from the training phase. Using this paradigm, we tested whether the neural response to a speech signal was modulated by its intelligibility without any change in its acoustic structure. To test the influence of spectral degradation on neural tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech) but were trained to understand only the 4-band NV speech. Significant changes in neural tracking were observed in the delta range in relation to the acoustic degradation of speech. However, we failed to find a direct effect of intelligibility on the neural tracking of speech in either the theta or the delta range. This suggests that acoustics strongly influence the neural tracking response to speech signals, and that caution is needed when choosing control signals for speech-brain tracking analyses, since a slight change in acoustic parameters can have strong effects on the neural tracking response.
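The "neural tracking" measured in this study relates slow fluctuations of brain activity to the speech amplitude envelope. As a simplified illustration (real MEG analyses use coherence, mutual information, or temporal response functions; the signals and parameters here are assumptions), one can correlate a delta-band (1-4 Hz) speech envelope with a delta-filtered simulated neural signal:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def delta_band(x, fs):
    """Band-pass a signal to the delta range (1-4 Hz)."""
    sos = butter(4, [1.0, 4.0], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 100
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)

# Simulated 2 Hz speech envelope, and a neural signal that tracks it plus noise.
envelope = 1 + 0.5 * np.sin(2 * np.pi * 2 * t)
neural = delta_band(envelope, fs) + 0.5 * rng.standard_normal(t.size)

# Pearson correlation between the delta-filtered envelope and neural signal
# serves as a crude tracking index here.
r = np.corrcoef(delta_band(envelope, fs), delta_band(neural, fs))[0, 1]
```

The study's caution applies directly to such measures: because the tracking index is computed on the acoustic envelope, any change in the stimulus acoustics (e.g., 2-band vs. 4-band vocoding) alters the measure even when intelligibility is held constant.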