Ongoing brain activity can be recorded as an electroencephalogram (EEG) to discover links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subjects' self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. A support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and achieved an average classification accuracy of 82.29% ± 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were derived primarily from electrodes placed near the frontal and parietal lobes, consistent with many findings in the literature. This study might lead to a practical system for the noninvasive assessment of emotional states in everyday or clinical applications.
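A minimal sketch of the kind of pipeline this abstract describes: classifying a small set of selected EEG features into four emotion classes with an SVM. The synthetic features, the RBF kernel, and the 5-fold cross-validation are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_features = 200, 30               # e.g., 30 selected band-power features
X = rng.normal(size=(n_trials, n_features))  # placeholder for real EEG features
y = rng.integers(0, 4, size=n_trials)        # 0=joy, 1=anger, 2=sadness, 3=pleasure

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))  # assumed hyperparameters
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```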
Inter- and intra-subject variability pose a major challenge to decoding human brain activity in brain-computer interfaces (BCIs) based on the non-invasive electroencephalogram (EEG). Conventionally, a time-consuming and laborious training procedure is performed on each new user to collect sufficient individualized data, hindering the application of BCIs to monitoring brain states (e.g., drowsiness) in real-world settings. This study proposes applying hierarchical clustering to assess the inter- and intra-subject variability within a large-scale dataset of EEG collected in a simulated driving task, and validates the feasibility of transferring EEG-based drowsiness-detection models across subjects. A subject-transfer framework is thus developed for detecting drowsiness based on a large-scale model pool from other subjects and a small amount of alert-baseline calibration data from a new user. The model pool ensures the availability of positive model transfer, whereas the alert-baseline data serve as a selector of decoding models in the pool. Compared with the conventional within-subject approach, the proposed framework remarkably reduced the required calibration time for a new user by 90% (from 18.00 min to 1.72 ± 0.36 min) without compromising performance (p = 0.0910) when sufficient existing data were available. These findings suggest a practical pathway toward plug-and-play drowsiness detection and could enable numerous real-world BCI applications.
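The two core steps, clustering sessions to gauge variability and ranking a pool of existing models against a new user's alert baseline, might look roughly like the sketch below. The `score` method and the top-5 cutoff are hypothetical stand-ins for the paper's unstated selection rule.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_models(model_pool, baseline_X):
    """Rank existing models by fit to a new user's alert baseline (hypothetical rule)."""
    scores = [m.score(baseline_X) for m in model_pool]  # 'score' is assumed: higher = better fit
    order = np.argsort(scores)[::-1]
    return [model_pool[i] for i in order[:5]]           # keep the top-5 transferable models

# Inter-/intra-subject variability assessed by Ward clustering of per-session features
session_features = np.random.default_rng(1).normal(size=(40, 10))  # 40 sessions x 10 features
labels = fcluster(linkage(session_features, method="ward"), t=4, criterion="maxclust")
print(np.bincount(labels))  # cluster sizes
```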
To overcome individual differences, an accurate electroencephalogram (EEG)-based emotion-classification system requires a considerable amount of ecological calibration data for each individual, which is labor-intensive and time-consuming. Transfer learning (TL) has drawn increasing attention in the field of EEG signal mining in recent years. TL leverages existing data collected from other people to build a model for a new individual with little calibration data. However, brute-force transfer to an individual (i.e., blindly leveraging the labeled data from others) may lead to a negative transfer that degrades performance rather than improving it. This study thus proposed a conditional TL (cTL) framework to facilitate a positive transfer (improving subject-specific performance without increasing the labeled data) for each individual. The cTL first assesses an individual's transferability for positive transfer and then selectively leverages the data from others with comparable feature spaces. The empirical results showed that among 26 individuals, the proposed cTL framework identified 16 and 14 transferable individuals who could benefit from the data of others for emotion valence and arousal classification, respectively. These transferable individuals could then leverage the data from 18 and 12 individuals with similar EEG signatures to attain maximal TL improvements in valence- and arousal-classification accuracy. The cTL improved the overall classification performance of the 26 individuals by ~15% for valence categorization and ~12% for its arousal counterpart, compared to their default performance based solely on subject-specific data. This study demonstrated the feasibility of the proposed cTL framework for improving an individual's default emotion-classification performance given a data repository. The cTL framework may shed light on the development of a robust emotion-classification model using less labeled subject-specific data toward a real-life affective brain-computer interface (ABCI).
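A hedged sketch of the cTL source-selection step: keep only source individuals whose feature space resembles the target's. Cosine similarity between mean feature vectors and the 0.8 threshold are assumptions for illustration; the paper's exact transferability criterion is not spelled out here.

```python
import numpy as np

def comparable_sources(target_X, source_Xs, threshold=0.8):
    """Return indices of source subjects whose mean feature vector is close to the target's."""
    t = target_X.mean(axis=0)
    keep = []
    for i, s_X in enumerate(source_Xs):
        s = s_X.mean(axis=0)
        cos = t @ s / (np.linalg.norm(t) * np.linalg.norm(s) + 1e-12)
        if cos >= threshold:  # hypothetical transferability criterion
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
target = rng.normal(size=(50, 12))                        # new individual's features
sources = [rng.normal(size=(50, 12)) for _ in range(25)]  # 25 other individuals
print(comparable_sources(target, sources))
```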
Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention due to its promise for potential applications such as musical affective brain-computer interfaces (ABCIs), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys emotions to listeners through compositions of musical elements, and distinguishing emotions using EEG signals alone remains challenging. This study aimed to assess the applicability of a multimodal approach that leverages EEG dynamics and the acoustic characteristics of musical content for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical content did not improve classification performance: the performance of 74-76% obtained using the EEG modality alone was statistically comparable to that of the multimodal approach. However, if EEG dynamics were available from only a small set of electrodes (likely the case in real-life applications), the music modality played a complementary role, improving the EEG results from around 61% to 67% in valence classification and from around 58% to 67% in arousal classification. Musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of brain activity and musical content in emotion modeling.
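One way to realize the multimodal approach is feature-level fusion, i.e., concatenating EEG features with acoustic descriptors before classification. Whether the study used feature- or decision-level fusion is not stated in the abstract, so treat this as an illustrative assumption; the logistic-regression classifier and feature counts are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
eeg = rng.normal(size=(120, 8))    # few-electrode EEG features (placeholder)
music = rng.normal(size=(120, 5))  # e.g., timbre and loudness descriptors (placeholder)
y = rng.integers(0, 2, size=120)   # binary valence (or arousal) labels

X = np.hstack([eeg, music])        # feature-level fusion by concatenation
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```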
Recent advances in mobile electroencephalogram (EEG) systems, featuring non-prep dry electrodes and wireless telemetry, have enabled and promoted the application of mobile brain-computer interfaces (BCIs) in daily life. Since the brain may behave differently while people are actively situated in ecologically valid environments versus highly controlled laboratory environments, it remains unclear how well current laboratory-oriented BCI demonstrations can be translated into operational BCIs for users with naturalistic movements. Understanding the inherent links between natural human behaviors and brain activities is key to ensuring the applicability and stability of mobile BCIs. This study aims to assess the quality of steady-state visual-evoked potentials (SSVEPs), one of the most promising signals for functioning BCI systems, recorded with a mobile EEG system under challenging recording conditions, e.g., walking. To systematically explore the effects of walking locomotion on the SSVEPs, this study instructed subjects to stand or walk on a treadmill running at speeds of 1, 2, and 3 miles per hour (MPH) while concurrently perceiving visual flickers (11 and 12 Hz). Empirical results showed that SSVEP amplitude tended to deteriorate when subjects switched from standing to walking. Such SSVEP suppression could be attributed to walking locomotion, leading to distinctly deteriorated SSVEP detectability from standing (84.87 ± 13.55%) to walking (1 MPH: 83.03 ± 13.24%, 2 MPH: 79.47 ± 13.53%, and 3 MPH: 75.26 ± 17.89%). These findings not only demonstrated the applicability and limitations of SSVEPs recorded from freely behaving humans in realistic environments, but also provided useful methods and techniques for advancing the translation of BCI technology from laboratory demonstrations to practical applications.
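A minimal, hedged example of frequency-tagged SSVEP detection: compare spectral power at the two flicker frequencies (11 and 12 Hz) and pick the larger. Real pipelines typically use multichannel methods (e.g., canonical correlation analysis); this single-channel FFT version, with an assumed 250 Hz sampling rate and synthetic signal, is for illustration only.

```python
import numpy as np

fs = 250                     # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)  # one 4-s epoch
# Synthetic "EEG": a 12 Hz SSVEP buried in noise
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.default_rng(3).normal(size=t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2

def band_power(f0, half_bw=0.25):
    """Sum spectral power in a narrow band around the target frequency."""
    mask = (freqs >= f0 - half_bw) & (freqs <= f0 + half_bw)
    return power[mask].sum()

detected = 11 if band_power(11) > band_power(12) else 12
print(f"detected flicker: {detected} Hz")
```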
This study explores the electroencephalographic (EEG) correlates of emotional experience during music listening. Independent component analysis and analysis of variance were used to separate statistically independent spectral changes of the EEG in response to music-induced emotional processes. An independent brain process with an equivalent dipole located in the fronto-central region exhibited distinct δ-band and θ-band power changes associated with self-reported emotional states. Specifically, emotional valence was associated with δ-power decreases and θ-power increases in the fronto-central area, whereas emotional arousal was accompanied by increases in both δ and θ power. The resultant emotion-related component activations, being less contaminated by activity from other brain processes, complement previous EEG studies of emotion perception in music.
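A sketch of the analysis idea: unmix EEG channels into independent components (here with scikit-learn's FastICA as a stand-in for the ICA variant actually used), then examine δ (1-4 Hz) and θ (4-8 Hz) power of a component of interest. The channel count, duration, and random data are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.signal import welch

fs = 250
X = np.random.default_rng(4).normal(size=(30, fs * 60))  # 30 channels, 60 s (placeholder)

# ICA expects (n_samples, n_channels); transpose back to (n_components, n_samples)
sources = FastICA(n_components=15, random_state=0).fit_transform(X.T).T

# Band power of the first component via Welch's method
f, psd = welch(sources[0], fs=fs, nperseg=fs * 2)
delta = psd[(f >= 1) & (f < 4)].mean()
theta = psd[(f >= 4) & (f < 8)].mean()
print(f"delta power: {delta:.3g}, theta power: {theta:.3g}")
```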
IMPORTANCE: The current assessment of visual field loss in diseases such as glaucoma is affected by the subjectivity of patient responses and the lack of portability of standard perimeters. OBJECTIVE: To describe the development and initial validation of a portable brain-computer interface (BCI) for objectively assessing visual function loss. DESIGN, SETTING, AND PARTICIPANTS: This case-control study involved 62 eyes of 33 patients with glaucoma and 30 eyes of 17 healthy participants. Glaucoma was diagnosed based on a masked grading of optic disc stereophotographs. All participants underwent testing with a BCI device and standard automated perimetry (SAP) within 3 months. The BCI device integrates wearable, wireless, dry electroencephalogram and electrooculogram systems and a cellphone-based head-mounted display to enable the detection of multifocal steady-state visual-evoked potentials associated with visual field stimulation. The performance of global and sectoral multifocal steady-state visual-evoked potential metrics in discriminating glaucomatous from healthy eyes was compared with global and sectoral SAP parameters. The repeatability of the BCI device measurements was assessed by collecting results of repeated testing in 20 eyes of 10 participants with glaucoma over 3 sessions of measurements separated by weekly intervals. MAIN OUTCOMES AND MEASURES: Receiver operating characteristic curves summarizing diagnostic accuracy; intraclass correlation coefficients and coefficients of variation for assessing repeatability. RESULTS: Among the 33 participants with glaucoma, 19 (58%) were white, 12 (36%) were black, and 2 (6%) were Asian; among the 17 participants with healthy eyes, 9 (53%) were white, 8 (47%) were black, and none were Asian. The area under the receiver operating characteristic curve for the global BCI multifocal steady-state visual-evoked potential parameter was 0.92 (95% CI, 0.86-0.96), larger than that for SAP mean deviation (area under the curve, 0.81; 95% CI, 0.72-0.90), SAP mean sensitivity (area under the curve, 0.80; 95% CI, 0.69-0.88; P = .03), and SAP pattern standard deviation (area under the curve, 0.77; 95% CI, 0.66-0.87; P = .01). No statistically significant differences were seen between the BCI and SAP sectoral measurements. Intraclass correlation coefficients for global and sectoral parameters ranged from 0.74 to 0.92, and mean coefficients of variation ranged from 3.03% to 7.45%. CONCLUSIONS AND RELEVANCE: The BCI device may be useful for assessing the electrical brain responses associated with visual field stimulation, and it discriminated eyes with glaucomatous neuropathy from healthy eyes in a clinically based setting. Further studies should investigate the feasibility of the BCI device for home-based testing as well as for detecting visual function loss over time.
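The diagnostic-accuracy comparison above rests on ROC analysis. A minimal illustration of computing an AUC with a 95% bootstrap confidence interval follows; the scores are synthetic, not the study's data, and the bootstrap procedure is an assumption about how such an interval might be obtained.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
y = np.r_[np.ones(62), np.zeros(30)]  # 62 glaucoma eyes, 30 healthy (as in the study design)
scores = np.r_[rng.normal(1.5, 1.0, 62), rng.normal(0.0, 1.0, 30)]  # synthetic device metric

auc = roc_auc_score(y, scores)
boot = []
for _ in range(1000):
    idx = rng.integers(0, y.size, y.size)
    if np.unique(y[idx]).size == 2:  # skip degenerate resamples with one class
        boot.append(roc_auc_score(y[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```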