2021
DOI: 10.3389/fnhum.2021.659410

Decoding EEG Brain Activity for Multi-Modal Natural Language Processing

Abstract: Until recently, human behavioral data from reading has mainly been of interest to researchers to understand human cognition. However, these human language processing signals can also be beneficial in machine learning-based natural language processing tasks. Using EEG brain activity for this purpose is largely unexplored as of yet. In this paper, we present the first large-scale study of systematically analyzing the potential of EEG brain activity data for improving natural language processing tasks, with a spe…

Cited by 22 publications (18 citation statements)
References 94 publications (116 reference statements)

“…Recent research has begun identifying shared computational principles between the way the human brain and DLMs represent and process natural language. In particular, several studies have used contextual embeddings derived from DLMs to successfully model human behavior as well as neural activity measured by fMRI, EEG, MEG, and ECoG during natural speech processing (Antonello et al, 2021; Caucheteux & King, 2022; Goldstein et al, 2022; Heilbron et al, 2020; Hollenstein et al, 2021; Schwartz et al, 2019; Toneva & Wehbe, 2019). Furthermore, recent studies have shown that similarly to DLMs, the brain incorporates prior context into the meaning of individual words (Jain & Huth, 2018; Caucheteux et al, 2021a; Schrimpf et al, 2021), spontaneously predicts forthcoming words (Goldstein, Zada et al, 2022), and computes post-word-onset prediction error signals (Donhauser & Baillet, 2020; Willems et al, 2016; Heilbron et al, 2020; Goldstein, Zada et al, 2022).…”
Section: Introduction (mentioning)
confidence: 99%
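The passage above describes the encoding-model approach in which contextual embeddings from a deep language model are used to predict recorded neural activity. The sketch below illustrates that general idea only; the function name, shapes, ridge penalty, and correlation scoring are assumptions for illustration, not the pipeline of any specific cited study.

```python
# Minimal encoding-model sketch: map per-word contextual embeddings to
# per-channel neural responses with ridge regression and score held-out
# predictions by Pearson correlation. Shapes and names are assumed.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def encoding_score(embeddings, responses, alpha=1.0, n_splits=5):
    """embeddings: (n_words, n_dims) DLM features per word.
    responses:  (n_words, n_channels) EEG/MEG/fMRI measures per word.
    Returns the mean held-out correlation per channel."""
    scores = np.zeros(responses.shape[1])
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(embeddings):
        model = Ridge(alpha=alpha).fit(embeddings[train], responses[train])
        pred = model.predict(embeddings[test])
        for ch in range(responses.shape[1]):
            scores[ch] += np.corrcoef(pred[:, ch], responses[test, ch])[0, 1]
    return scores / n_splits
```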
“…Using raw data has shown great promise to model eye-tracking data (e.g., Jäger et al, 2020), and one of the main advantages of the ZuCo dataset is that it allows feature extraction on different levels. Moreover, our EEG features include mean features aggregated over all electrodes as well as electrode-based frequency measures, which have been shown to improve NLP tasks in the past (Hollenstein et al, 2019a; Sun et al, 2020; Hollenstein et al, 2021b; Wang and Ji, 2021). Nonetheless, we want to highlight that preprocessed EEG data permits the examination of additional measures, such as source-level based features (e.g., source-level power estimates) and functional connectivity measures at the level of the underlying neuronal generators.…”
Section: Discussion (mentioning)
confidence: 99%
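The quoted passage distinguishes two feature levels used for the EEG data: electrode-based frequency measures and mean features aggregated over all electrodes. The following sketch shows one plausible way to compute both for a single word; the band names, array shapes, and dictionary layout are assumptions, not the ZuCo preprocessing code.

```python
# Hedged sketch of per-word EEG feature extraction at two levels:
# band power kept per electrode, and the same power averaged over all
# electrodes. Band order and shapes are illustrative assumptions.
import numpy as np

BANDS = ("theta", "alpha", "beta", "gamma")  # assumed band order

def word_eeg_features(band_power):
    """band_power: (n_electrodes, n_bands) power for one word/fixation."""
    electrode_level = {f"{band}_e{e}": band_power[e, i]
                       for i, band in enumerate(BANDS)
                       for e in range(band_power.shape[0])}
    mean_level = {f"{band}_mean": band_power[:, i].mean()
                  for i, band in enumerate(BANDS)}
    return {**electrode_level, **mean_level}
```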
“…Related work on sentiment/emotion analysis from brain signals traditionally focuses on video or image stimuli (Koelstra et al. 2011; Liu et al. 2020). Previous attempts using text-elicited brain signals often treat brain signals only as an additional input along with other traditional modalities like text and image (Hollenstein et al. 2021; Kumar, Yadava, and Roy 2019; Gauba et al. 2017).…”
Section: Related Work (mentioning)
confidence: 99%
“…Although covariate shift has been generally observed in brain signal data due to intra- and inter-subject variability (Lund et al. 2005; Saha and Baumert 2020), previous work demonstrated promising transfer learning ability in brain signal decoding using deep learning models (Roy et al. 2020; Zhang et al. 2020; Makin, Moses, and Chang 2020). Furthermore, various studies (Muttenthaler, Hollenstein, and Barrett 2020; Hollenstein et al. 2021, 2019; Hale et al. 2018; Schwartz and Mitchell 2019) have experimented with connecting brain signal decoding to NLP models, by either using brain signals as an additional modality for improving performance on NLP tasks or using NLP models to understand how the human brain encodes language.…”
Section: Introduction (mentioning)
confidence: 99%
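The last two statements both describe using brain signals as an additional modality alongside text features. A minimal sketch of that "additional input" setup, assuming simple early fusion by concatenation and a logistic-regression classifier, is given below; the feature dimensions and classifier choice are illustrative, not the architecture of the paper under discussion.

```python
# Early-fusion sketch: concatenate text embeddings with aggregated EEG
# features per sentence and train a simple classifier (e.g., sentiment).
# All shapes and the classifier choice are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_and_classify(text_feats, eeg_feats, labels):
    """text_feats: (n_sentences, d_text), eeg_feats: (n_sentences, d_eeg),
    labels: (n_sentences,) class labels."""
    X = np.concatenate([text_feats, eeg_feats], axis=1)  # simple early fusion
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return clf
```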