Cognitive Approach to Natural Language Processing 2017
DOI: 10.1016/b978-1-78548-253-3.50010-x
Benchmarking n-grams, Topic Models and Recurrent Neural Networks by Cloze Completions, EEGs and Eye Movements

Cited by 8 publications (21 citation statements) | References 30 publications
“…When the semantic distances between questions and answers are used to account for such associative judgments, word2vec can nicely account for this conjunction fallacy. Hofmann, Biemann, and Remus (2017) also relied on this model when accounting for human cloze completion probabilities, as well as event-related potentials and eye movement parameters previously accounted for by cloze completions. The CBOW model not only accounted for N400 effects, but also for single-fixation durations, that is, for the duration of a single fixation that was sufficient for successfully recognizing a word.…”
Section: Word2vec Cosine
confidence: 99%
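A minimal sketch of the word2vec approach this statement describes, assuming a gensim-style workflow; the vector file, the helper name semantic_fit, and the example words are illustrative placeholders, not material from the chapter, and cosine similarity to an averaged context vector is just one simple proxy for cloze predictability:

import numpy as np
from gensim.models import KeyedVectors

# Pretrained CBOW vectors in word2vec format (hypothetical file name).
vectors = KeyedVectors.load_word2vec_format("cbow_vectors.bin", binary=True)

def semantic_fit(context_words, target):
    # Cosine similarity between the target vector and the averaged context
    # vector: one simple proxy for how predictable the target word is.
    ctx = [w for w in context_words if w in vectors]
    if not ctx or target not in vectors:
        return None
    context_vec = np.mean([vectors[w] for w in ctx], axis=0)
    target_vec = vectors[target]
    return float(np.dot(context_vec, target_vec)
                 / (np.linalg.norm(context_vec) * np.linalg.norm(target_vec)))

# Example: how well does "bread" fit its preceding sentence context?
print(semantic_fit(["she", "spread", "butter", "on", "the"], "bread"))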
“…The second section is dedicated to latent semantic dimensions (Griffiths et al., 2007; Landauer & Dumais, 1997) that determine the co-occurrence of words in documents. Therefore, these models consolidate relatively long-range semantic relations (Hofmann, Biemann, & Remus, 2017), in contrast to the short-range (preceding sentence context) consolidation dominant in the other two subsections.…”
Section: Language Models In Eye Movement Research
confidence: 92%
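A toy sketch of the document-level co-occurrence idea behind such topic models, assuming gensim's LDA implementation; the corpus below is fabricated for illustration and far too small for a real study:

from gensim import corpora, models

docs = [
    ["doctor", "nurse", "hospital", "patient"],
    ["bread", "butter", "bakery", "flour"],
    ["doctor", "patient", "diagnosis"],
]  # toy corpus; a real model would be trained on thousands of documents

dictionary = corpora.Dictionary(docs)
bows = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bows, num_topics=2, id2word=dictionary, passes=10)

# Topic mixture of a new passage: long-range context is captured by which
# topics (document-level co-occurrence patterns) the words load on.
passage = dictionary.doc2bow(["nurse", "hospital"])
print(lda.get_document_topics(passage))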
“…Andrews, Vigliocco & Vinson, 2009), to our knowledge no other study so far addressed a topic model directly predicting reading times (but cf. Hofmann et al., 2017).…”
Section: Latent Semantic Dimensions
confidence: 99%
“…Frank (2009) found that an RNN provides significantly larger correlations with GD compared to a surprisal measure of the grammatical category of the word. An RNN, a topics model and a 5-gram model together explained not only about half of the variance of CCP, but they also performed significantly better than CCP in predicting SFD data (Hofmann, Biemann, & Remus, 2017).…”
Section: Introduction
confidence: 97%
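A sketch of how such word-level predictors can be combined, assuming a simple linear regression on single-fixation durations (SFD); every array below is a random placeholder standing in for real per-word measures, not the reported data or analysis:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_words = 200

# Hypothetical per-word predictors (log probabilities / similarities).
ngram_logp = rng.normal(size=n_words)   # 5-gram log probability
topic_sim = rng.normal(size=n_words)    # topic-model context similarity
rnn_logp = rng.normal(size=n_words)     # RNN log probability
sfd = rng.normal(loc=220, scale=30, size=n_words)  # fabricated SFDs in ms

# Fit all three predictors jointly and report the variance explained.
X = np.column_stack([ngram_logp, topic_sim, rnn_logp])
model = LinearRegression().fit(X, sfd)
print("R^2:", model.score(X, sfd))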