2021
DOI: 10.1016/j.brainres.2021.147674

Can prediction and retrodiction explain whether frequent multi-word phrases are accessed ‘precompiled’ from memory or compositionally constructed on the fly?

Cited by 7 publications (7 citation statements)
References 24 publications
“…We propose that processing multiword units involves combining information about individual words and phrases. This is consistent with the large body of psycholinguistic evidence suggesting that language users predict upcoming words (see Baayen et al., 2013; Jacobs et al., 2016; Onnis & Huettig, 2021; N. J. Smith & Levy, 2013).…”
Section: Discussion (supporting)
confidence: 87%
“…We found that the typology of language impacts language users' sensitivity to phrasal frequencies. Our findings align well with a single-system view of language processing (e.g., Baayen et al., 2013; Bybee, 1998; Christiansen & Chater, 2016; Onnis & Huettig, 2021) that combines information about individual words and larger units to explain language acquisition and processing. The findings are hard to align with proposals that multiword units are represented holistically (e.g., Wray, 2002).…”
Section: Discussion (supporting)
confidence: 85%
“…Thus, the model can be seen as a procedural approach to error-based associative learning, with prediction as its driving mechanism. This is in line with a long tradition of work that identifies a central role for word prediction, both based on preceding information in a sentence, or even based on the words that appear after a target word (which is often referred to as 'retrodiction' or integration; Federmeier, 2007; Kutas et al., 2011; Onnis & Thiessen, 2013; Huettig, 2015; Onnis & Huettig, 2021; Alhama et al., 2021; Onnis et al., 2022).…”
Section: Discussion (supporting)
confidence: 78%
“…Likewise, many adjective–noun combinations appear to be more backward‐looking (i.e., being more informative when integrating preceding context) than predictive (i.e., involving preactivation of upcoming words), as in strong tea but not *powerful tea , or powerful resources but not *strong resources . In the above cases, what distinguishes the acceptability (and processability, see Onnis & Huettig, 2021 ) of a multiword sequence may not be so much the forward conditional probability, which is sensibly low for noise given both do some or make some . This happens because numerous words in the language can follow verbs like do and make , and thus the forward probability values can be empirically estimated in a corpus of language to be low.…”
Section: Introduction (mentioning)
confidence: 99%
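
The last citation statement turns on how forward conditional probabilities are estimated from corpus counts. As a minimal illustrative sketch (not drawn from the cited paper; the toy corpus and the word choices below are assumptions for demonstration only), the following Python snippet estimates P(next word | preceding context) by maximum likelihood from a tokenized corpus, showing why the forward probability of a specific continuation like noise after make some comes out low when many different words can follow that context.

```python
from collections import Counter

def forward_probability(tokens, context, target):
    """Estimate the forward conditional probability P(target | context)
    as count(context + target) / count(context) over a tokenized corpus."""
    n = len(context)
    # n-grams of length n+1 (context plus one following word) and their contexts
    ngrams = Counter(tuple(tokens[i:i + n + 1]) for i in range(len(tokens) - n))
    contexts = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n))
    context = tuple(context)
    if contexts[context] == 0:
        return 0.0
    return ngrams[context + (target,)] / contexts[context]

# Toy corpus standing in for a large language corpus (illustrative only).
tokens = ("make some noise make some tea make some money "
          "do some work do some reading do some shopping").split()

# Many different words can follow 'make some', so P(noise | make some) is low.
print(forward_probability(tokens, ("make", "some"), "noise"))  # 0.333...
```

In a realistic corpus the same calculation spreads probability mass over the many possible continuations of do some or make some, which is the sense in which the forward conditional probability of noise is empirically low even though make some noise is a perfectly acceptable multiword sequence.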