Almost all attention and learning—in particular, most early learning—take place in social settings. But little is known of how our brains support dynamic social interactions. We recorded dual electroencephalography (EEG) from 12-month-old infants and parents during solo play and joint play. During solo play, fluctuations in infants’ theta power significantly forward-predicted their subsequent attentional behaviours. However, this forward-predictiveness was lower during joint play than solo play, suggesting that infants’ endogenous neural control over attention is greater during solo play. Overall, however, infants were more attentive to the objects during joint play. To understand why, we examined how adult brain activity related to infant attention. We found that parents’ theta power closely tracked and responded to changes in their infants’ attention. Further, instances in which parents showed greater neural responsivity were associated with longer sustained attention by infants. Our results offer new insights into how one partner influences another during social interaction.
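The forward-predictiveness described above, in which theta power at one moment predicts attention at a later moment, can be estimated with lagged correlations. This is a minimal sketch, not the authors' analysis pipeline; the function name, the common time base for the two signals, and the lag range are all assumptions:

```python
import numpy as np

def lagged_forward_prediction(theta_power, attention, max_lag=10):
    """Correlate neural power with *subsequent* behaviour at positive lags.

    theta_power, attention: 1-D arrays sampled on a common time base.
    Returns Pearson correlations for lags 1..max_lag, where lag k
    correlates theta at time t with attention at time t + k.
    """
    theta = np.asarray(theta_power, dtype=float)
    att = np.asarray(attention, dtype=float)
    corrs = []
    for k in range(1, max_lag + 1):
        # Pair theta[t] with attention[t + k], then correlate.
        x, y = theta[:-k], att[k:]
        corrs.append(np.corrcoef(x, y)[0, 1])
    return np.array(corrs)
```

If theta forward-predicts attention, correlations at small positive lags should exceed a shuffled-data baseline; comparing the lag profiles between solo and joint play would mirror the contrast reported above.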
Emotional communication between parents and children is crucial during early life, yet little is known about its neural underpinnings. Here, we adopt a dual-brain connectivity approach to assess how emotional valence modulates the parent-infant neural network. Fifteen mothers modelled positive and negative emotions toward pairs of objects during social interaction with their infants (aged 10.3 months) whilst their neural activity was concurrently measured using dual-EEG. Intra-brain and inter-brain network connectivity in the 6-9 Hz (infant Alpha) range was computed during maternal expression of positive and negative emotions using directed (partial directed coherence) and non-directed (phase-locking value) connectivity metrics. Graph-theoretic metrics were used to quantify differences in network topology as a function of emotional valence. Inter-brain network indices (Density, Strength and Divisibility) consistently revealed that the integration of parents' and children's neural processes was significantly stronger during maternal demonstrations of positive than negative emotions. Further, directed inter-brain metrics indicated that mother-to-infant directional influences were stronger during the expression of positive than negative emotions. These results suggest that the parent-infant inter-brain network is modulated by the emotional quality and tone of dyadic social interactions, and that inter-brain graph metrics may be successfully applied to examine these changes in interpersonal network topology.

Keywords: EEG hyperscanning, network connectivity, graph theory, emotional expression, mother-infant interaction

atypical patterns of EEG asymmetry, commonly showing higher right frontal EEG activity than controls (Gotlib et al., 1998). Recent research has also started to examine intraindividual network topology during emotion processing using graph-theoretic measures.
For example, a recent study with adults showed that EEG graph-theoretic features performed better than traditionally used EEG features (such as spectral power and asymmetry) on the automatic classification of affective neural states (Gupta et al., 2016).

Behavioral and neuroimaging studies into early development suggest that the neural architecture for the detection and prioritized processing of emotional expressions, such as fear, emerges sometime during the first year of life (Hoehl, 2013; Hoehl et al.
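The hyperscanning abstract above relies on the phase-locking value (PLV) as its non-directed connectivity metric and on graph indices such as network density. A minimal sketch of both, assuming narrow-band-filtered input signals and an arbitrary edge threshold (not the thresholding scheme of the original study):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Non-directed phase connectivity between two narrow-band signals.

    PLV = |mean over time of exp(i * (phi_x - phi_y))|: 1 when the
    phase difference is constant, near 0 when it is uniformly random.
    """
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

def network_density(plv_matrix, threshold=0.5):
    """Fraction of possible edges whose PLV exceeds the threshold."""
    n = plv_matrix.shape[0]
    adj = (plv_matrix > threshold) & ~np.eye(n, dtype=bool)
    return adj.sum() / (n * (n - 1))
```

For an inter-brain network, the PLV matrix would span both participants' channels, so the cross-participant block of the matrix carries the parent-infant coupling that the density, strength and divisibility indices summarize.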
Mental stress may cause cognitive dysfunction, cardiovascular disorders and depression. Mental stress detection via short-term Heart Rate Variability (HRV) analysis has been widely explored in recent years, whereas ultra-short-term (less than 5 minutes) HRV has not. This study aims to detect mental stress using linear and non-linear HRV features extracted from 3-minute ECG excerpts recorded from 42 university students during an oral examination (stress) and at rest after a vacation. HRV features were extracted and analysed according to the literature using validated software tools. Statistical and data-mining analyses were then performed on the extracted HRV features. The best-performing machine learning method was the C4.5 tree algorithm, which discriminated between stress and rest with a sensitivity, specificity and accuracy of 78%, 80% and 79%, respectively.
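The linear time-domain HRV features referred to above are standard quantities computed from the sequence of RR intervals. This is a generic sketch of the common ones (the study's exact feature set is not specified here), with the function name my own:

```python
import numpy as np

def hrv_features(rr_ms):
    """Standard time-domain HRV features from RR intervals in milliseconds.

    mean_rr: mean RR interval
    sdnn:    standard deviation of RR intervals
    rmssd:   root mean square of successive RR differences
    pnn50:   percentage of successive differences exceeding 50 ms
    """
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),
        "rmssd": np.sqrt(np.mean(diff ** 2)),
        "pnn50": 100.0 * np.mean(np.abs(diff) > 50.0),
    }
```

Under acute stress, vagally mediated features such as RMSSD and pNN50 typically decrease, which is the kind of separation a decision tree like C4.5 can exploit.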
Memory reactivation during sleep is critical for consolidation, but it is also extremely difficult to measure because it is subtle, distributed and temporally unpredictable. This article reports a novel method for detecting such reactivation in standard sleep recordings. During learning, participants produced a complex sequence of finger presses, with each finger cued by a distinct audio-visual stimulus. Auditory cues were then re-played during subsequent sleep to trigger neural reactivation through a method known as targeted memory reactivation (TMR). Next, we used electroencephalography data from the learning session to train a machine learning classifier, and then applied this classifier to sleep data to determine how successfully each tone had elicited memory reactivation. Neural reactivation was classified above chance in all participants when TMR was applied in SWS, and in 5 of the 14 participants to whom TMR was applied in N2. Classification success declined across repeated presentations of the tone cue, suggesting either a gradually reducing responsiveness to such cues or a plasticity-related change in the neural signature as a result of cueing. We believe this method will be valuable for future investigations of memory consolidation.
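The core train-on-wake, test-on-sleep step can be sketched with any probabilistic classifier. This is an illustrative skeleton only, not the authors' pipeline: the classifier choice (logistic regression rather than whatever the study used), the function name, and the flat feature representation of each epoch are all assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_and_apply(X_wake, y_wake, X_sleep):
    """Fit a classifier on wake EEG epochs, score sleep epochs.

    X_wake:  (n_epochs, n_features) features from the learning session
    y_wake:  which cue was presented on each wake epoch
    X_sleep: features from epochs time-locked to TMR cues in sleep
    Returns class probabilities for each sleep epoch; above-chance
    identification of the replayed cue is taken as evidence of reactivation.
    """
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_wake, y_wake)
    return clf.predict_proba(X_sleep)
```

Chance level would be estimated by permuting the wake labels and re-running the same procedure, so that sleep-epoch classification can be declared "above chance" per participant.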
Phase synchronisation between different neural groups is considered an important source of information for understanding the underlying mechanisms of brain cognition. This Letter investigated phase-synchronisation patterns from electroencephalogram (EEG) signals recorded from ten healthy participants performing motor imagery (MI) tasks using schematic emotional faces as stimuli. These phase-synchronised states, named synchrostates, are specific to each cognitive task performed by the user. For each subject and task, the states with the maximum and minimum numbers of occurrences were selected, and graph-theoretic connectivity network measures were extracted from them to feed a set of classification algorithms. Two MI tasks were successfully classified with a highest accuracy of 85%, with corresponding sensitivity and specificity of 85%. In this work, not only the performance of different supervised learning techniques was studied, but also the optimal subset of features for obtaining the best discrimination rates was identified. The robustness of this classification method for MI tasks indicates the possibility of expanding its use to online classification in brain–computer interface (BCI) systems.
We present a framework for P300 ERP classification on the 2019 IFMBE competition dataset using a combination of Riemannian geometry and ensemble learning. Covariance matrices and ERP prototypes are extracted after the EEG is passed through a filter bank, and an ensemble of LDA classifiers is trained on subsets of channels, trials, and frequencies. The model selects the final class by maximum probability of the evidence pooled across all ensemble members. Our pipeline achieves an average classification accuracy of 81.2% on the test set.
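A common way to combine covariance matrices with ERP prototypes, as in the Riemannian ERP literature, is to stack the class-average prototype on top of each trial before taking the covariance. This sketch shows that construction only; it is an assumption that the pipeline above uses this particular augmentation, and the function name is mine:

```python
import numpy as np

def erp_augmented_covariance(trial, prototype):
    """Covariance of a prototype-augmented 'super-trial'.

    trial, prototype: (n_channels, n_samples) arrays; the prototype is a
    class-average ERP. Stacking them and taking the sample covariance
    yields a matrix whose cross-blocks encode how strongly the trial
    resembles the prototype, which Riemannian classifiers exploit.
    """
    super_trial = np.vstack([prototype, trial])
    return np.cov(super_trial)
```

Running this per filter-bank band would give the per-frequency feature matrices on which channel/trial/frequency sub-ensembles of classifiers could then be trained.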