2022
DOI: 10.1088/1741-2552/ac619a

Neural decoding of semantic concepts: a systematic literature review

Abstract: Objective. Semantic concepts are coherent entities within our minds. They underpin our thought processes and are a part of the basis for our understanding of the world. Modern neuroscience research is increasingly exploring how individual semantic concepts are encoded within our brains and a number of studies are beginning to reveal key patterns of neural activity that underpin specific concepts. Building upon this basic understanding of the process of semantic neural encoding, neural engineers are beginning t…


Cited by 11 publications (6 citation statements)
References: 204 publications
“…In both binary and multiclass (8-class) unimodal classification, performance was consistently lower for fMRI data than for EEG. This is initially surprising, as Rybář et al [13] observed that fMRI decoders show better performance on semantic categories with higher conceptual similarity, such as 'child', 'wife', and 'father', than neuroimaging techniques with lower spatial resolution such as EEG. We suggest that, in the current study, superior multiclass performance with EEG is facilitated by the temporal features of inner speech and by EEG's high temporal resolution.…”
Section: Discussion
confidence: 98%
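
To make the quoted comparison concrete, the sketch below shows a generic cross-validated multiclass decoder of the kind such unimodal analyses typically use. It is a minimal, hypothetical example on synthetic data: the trial count, feature dimensionality, and classifier choice are assumptions for illustration, not details from the cited study (only the 8-class setup comes from the quote).

```python
# Minimal sketch (assumed details): cross-validated 8-class decoding from
# single-modality feature vectors (e.g., flattened EEG epochs or fMRI betas).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features, n_classes = 400, 1024, 8    # placeholder dimensions
X = rng.standard_normal((n_trials, n_features))   # synthetic neural features
y = rng.integers(0, n_classes, n_trials)          # semantic-class labels

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, X, y, cv=5)     # chance level: 1/8 = 12.5%
print(f"mean decoding accuracy: {scores.mean():.3f}")
```

On real data, accuracy is compared against the 12.5% chance level separately for each modality, which is how statements such as "performance was consistently lower for fMRI than for EEG" are grounded.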
“…Simultaneous data from 19 subjects (age: 26.63 ± 2.13) were collected for overt and inner speech. The EEG and fNIRS signals were preprocessed and used to train a CNN-GRU-based deep learning architecture to predict overt and inner speech.…”
Section: Introduction
confidence: 99%
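
The CNN-GRU pattern mentioned in this statement pairs convolutional layers, which extract local temporal features from multichannel signals, with a recurrent layer that summarizes them over time. Below is a minimal, hypothetical PyTorch sketch of that general design; the channel count, layer sizes, and two-class output are illustrative assumptions, not the cited architecture.

```python
# Minimal sketch (assumed sizes) of a CNN-GRU classifier for multichannel
# EEG/fNIRS epochs shaped (batch, channels, time).
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    def __init__(self, n_channels=64, n_classes=2, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),  # temporal filters
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),                                      # downsample in time
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, channels, time)
        h = self.conv(x)              # (batch, 32, time // 2)
        h = h.permute(0, 2, 1)        # GRU expects (batch, time, features)
        _, last = self.gru(h)         # final hidden state: (1, batch, hidden)
        return self.head(last[-1])    # class logits

logits = CNNGRU()(torch.randn(8, 64, 256))  # 8 epochs, 64 channels, 256 samples
print(logits.shape)                          # torch.Size([8, 2])
```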
“…previous computation stages (e.g., visual processing). While the majority of neural semantic decoders still focus on fMRI data (Rybář & Daly, 2022), MEG data contains rich information that can be exploited with multivariate analyses, both from a temporal point of view and, when performed in source space, from a spatial point of view (e.g., Kietzmann et al., 2019).…”
Section: Discussion
confidence: 99%
“…Brain encoding studies, where machine learning is used to predict patterns of brain activity by learning functions from computational representations (for example, Abraham et al., 2014; Grootswagers, Wardle, & Carlson, 2017; Haxby et al., 2001; Haxby, Connolly, Guntupalli, et al., 2014; Haynes, 2015; Kragel, Koban, Barrett, & Wager, 2018; Lemm, Blankertz, Dickhaus, & Müller, 2011; Naselaris, Kay, Nishimoto, & Gallant, 2011; Pereira, Mitchell, & Botvinick, 2009; Rybář & Daly, 2022), have recently started to make use of vectorial models of meaning proposed in NLP that have been shown to capture an extremely wide range of information involved with semantic processing (for comprehensive reviews, see Hale et al., 2022; Murphy, Wehbe, & Fyshe, 2018). In encoding, vectorial semantic representations open new possibilities for the investigation of semantic processing in the brain (Bruffaerts et al., 2019; Diedrichsen & Kriegeskorte, 2017; Kay, 2018; Kriegeskorte, Mur, & Bandettini, 2008; Naselaris & Kay, 2015).…”
Section: Brain Encoding
confidence: 99%
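
The encoding setup this statement describes (learning a function from vectorial semantic representations to patterns of brain activity) is most often implemented as regularized linear regression evaluated on held-out items. The sketch below is a minimal, hypothetical example on synthetic data; the embedding size, voxel count, ridge penalty, and correlation metric are all assumptions for illustration.

```python
# Minimal sketch (assumed details): ridge-regression brain encoding from
# NLP word vectors to per-word voxel responses, scored on held-out words.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, embed_dim, n_voxels = 180, 300, 5000            # placeholder dimensions
word_vectors = rng.standard_normal((n_words, embed_dim))  # e.g., GloVe-style vectors
brain = rng.standard_normal((n_words, n_voxels))          # per-word voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(
    word_vectors, brain, test_size=0.2, random_state=0)
pred = Ridge(alpha=10.0).fit(X_tr, Y_tr).predict(X_te)

def pearson_per_voxel(a, b):
    """Column-wise Pearson correlation between predictions and observations."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    return (a * b).sum(0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))

print(f"median held-out voxel correlation: "
      f"{np.median(pearson_per_voxel(pred, Y_te)):.3f}")
```

Voxels (or sensors) whose held-out correlations exceed a null baseline are then interpreted as encoding the semantic dimensions captured by the word vectors.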