2021
DOI: 10.31234/osf.io/rbych
Preprint
Context-Based Facilitation of Semantic Access Follows Both Logarithmic and Linear Functions of Stimulus Probability

Abstract: Stimuli are easier to process when the preceding context (e.g., a sentence, in the case of a word) makes them predictable. However, it remains unclear whether context-based facilitation arises due to predictive preactivation of a limited set of relatively probable upcoming stimuli (with facilitation then linearly related to probability) or, instead, arises because the system maintains and updates a probability distribution across all items, as posited by accounts (e.g., surprisal theory) assuming a logarithmic…
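The two accounts the abstract contrasts can be written compactly (notation mine, not the preprint's): a preactivation account predicts facilitation that is linear in a word's contextual probability, while surprisal-style accounts predict facilitation proportional to its negative log-probability:

```latex
\[
\underbrace{\mathrm{facilitation} \propto p(w \mid \mathrm{context})}_{\text{linear (preactivation) account}}
\qquad
\underbrace{\mathrm{facilitation} \propto -\log p(w \mid \mathrm{context})}_{\text{logarithmic (surprisal) account}}
\]
```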

Cited by 2 publications (7 citation statements)
References 83 publications
“…However, the CP of adjectives is notoriously difficult to estimate using behavioral measures, as participants do not generally provide an adjective as a likely continuation of a sentence; see, for example, Boudewyn et al (2015). Here, we instead estimated the predictability of the adjectives using GPT-2 (1558M parameter version), a state-of-the-art Transformer-based neural network model of language (Radford et al, 2018; see Szewczyk & Federmeier, 2021 for validation of this technique as a proxy for CPs). We estimated the log-probability of our adjectives as continuations of the sentences and then tested whether these log-probabilities predict the amplitude of the N400 (see Figure 7).…”
Section: Results
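The quoted passage estimates the log-probability of a word as a continuation of a sentence using GPT-2. The underlying quantity can be sketched without running the model: a language model's output layer assigns a logit to every vocabulary item, and the conditional log-probability of a target word is its logit minus the log-sum-exp over all logits. The sketch below is illustrative only; the vocabulary, logit values, and function name are my own, and a real analysis would read the final-position logits from GPT-2's forward pass rather than a hand-written list.

```python
import math

def log_prob_of_continuation(logits, vocab, target):
    """Log-probability of `target` under a softmax over `logits`.

    Mirrors what is read off a language model's output layer:
    log p(target | context) = logit[target] - logsumexp(logits).
    Uses the max-shift trick for numerical stability.
    """
    m = max(logits)
    logsumexp = m + math.log(sum(math.exp(x - m) for x in logits))
    return logits[vocab.index(target)] - logsumexp

# Toy 4-word vocabulary with made-up logits (hypothetical values,
# not GPT-2 output).
vocab = ["red", "green", "soft", "loud"]
logits = [2.0, 1.0, 0.5, -1.0]
print(log_prob_of_continuation(logits, vocab, "red"))
```

In practice one would obtain `logits` from the model itself (e.g., the Hugging Face Transformers `GPT2LMHeadModel` returns next-token logits), then apply exactly this log-softmax to score the adjective of interest.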
“…As in the analysis of updating at the adjective (Kullback-Leibler divergence), we conducted a sensitivity analysis. A reanalysis of four experiments that used similar sentences (Szewczyk & Federmeier, 2021) showed that log(p) had a mean effect = 0.22 for target words that were mostly nouns. For adjectives, we expected a similar or a weaker effect.…”
Section: Results
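The passage above uses Kullback-Leibler divergence to quantify belief updating at the adjective, i.e., how far the next-word distribution shifts when a word arrives. As an illustration only (the distributions below are toy values, not model output):

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i), in nats.

    Measures how much distribution p diverges from distribution q;
    zero iff the two distributions are identical. Terms with p_i = 0
    contribute nothing by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy prior/posterior over three candidate continuations
# (hypothetical numbers for illustration).
prior = [0.7, 0.2, 0.1]
posterior = [0.5, 0.3, 0.2]
print(kl_divergence(posterior, prior))
```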