2020
DOI: 10.1177/1747021820947940
Does comprehension (sometimes) go wrong for noncanonical sentences?

Abstract: This article addresses the question of whether the human parsing mechanism (HPM) derives sentence meaning always from representations that are computed algorithmically or whether the HPM sometimes resorts to non-algorithmic strategies that may result in misinterpretations. Misinterpretation effects for noncanonical sentences, such as passives, constitute important evidence in favour of models allowing for nonveridical representations. However, it is unclear whether these effects reflect errors in the mapping o…

Cited by 32 publications (37 citation statements). References 52 publications.
“…Our own study and converging evidence from other work (e.g. Meng & Bader, 2021) similarly suggests that it is not the parsing of a non-canonical sentence that is good-enough, so much as the memory processes involved in retrieving information from the representation of that sentence.…”
Section: Discussion (supporting)
Confidence: 70%
“…The presence of such an effect would support the idea that initial interpretations of such sentences are influenced by fast-and-frugal heuristics rather than derived purely from a detailed syntactic analysis. In contrast, the absence of an effect would support accounts in which the initial parse is algorithmic, and misinterpretations only occur due to information being retrieved from this representation in response to specific cues (Bader & Meng, 2018; Meng & Bader, 2021). Participants read implausible sentences presented in canonical or noncanonical form, followed by an Algorithmically Consistent or Good-Enough Consistent sentence.…”
Section: Discussion (mentioning)
Confidence: 96%
“…In our offline comprehension data, we found participants to be less accurate for passives than actives independent of predicate type. This is compatible with previous results collected in our lab and in the broader literature (Ferreira, 2003; Meng & Bader, 2021), which consistently found passives to be more errorful than actives, independent of predicate type.…”
Section: Passivisation In Offline Processing (supporting)
Confidence: 93%
“…In the comprehension experiments that balanced the voice of the comprehension question, only the sentences with subject-experiencers resulted in passives being less accurate than the active (Paolazzi et al., 2021). Evidence that theta-role questions demonstrate a passivisation difficulty that is unobserved in other measures is corroborated by additional studies that combined plausibility ratings with comprehension questions (Meng & Bader, 2021).…”
Section: Recent Evidence Against Passive Difficulty In Online Measures (mentioning)
Confidence: 82%