For more than a decade, views of sentence comprehension have been shifting toward wider acceptance of a role for linguistic pre-processing—that is, anticipation, expectancy, (neural) pre-activation, or prediction—of upcoming semantic content and syntactic structure. In this survey, we begin by examining the implications of each of these “brands” of predictive comprehension, including the potential processing costs and consequences when highly constrained sentence input is not encountered. We then describe a number of studies (many using online methodologies) whose results are consistent with prospective sensitivity to various grains and levels of semantic and syntactic information, acknowledging that such pre-processing is likely to occur in other linguistic and extralinguistic domains as well. This review of anticipatory findings also includes some discussion of the relationship of priming to prediction. We conclude with a brief examination of some possible limits to prediction, and with a suggestion for future work to probe whether and how various strands of prediction may integrate during real-time comprehension.
During reading, effects of contextual support indexed by the amplitude of the N400 (a brain potential sensitive to semantic activation/retrieval) are presumably mediated by comprehenders' world knowledge. Moreover, variability in knowledge may influence the contents, timing, and mechanisms of what is brought to mind during real-time sentence processing. Since it is infeasible to assess the entirety of each individual's knowledge, we investigated a limited domain: the narrative world of Harry Potter (HP). We recorded event-related brain potentials while participants read sentences ending in words that were more or less contextually supported. For sentences about HP, but not about general topics, contextual N400 effects were graded according to individual participants' HP knowledge. Our results not only confirm that context affects semantic processing by ~250 ms or earlier, on average, but empirically demonstrate what has until now been assumed: that N400 context effects are a function of each individual's knowledge, which here is highly correlated with their reading experience.
In Troyer and Kutas (2018), individual differences in knowledge of the world of Harry Potter (HP) rapidly modulated individuals' average electrical brain potentials to contextually supported words in sentence endings. Using advances in single-trial electroencephalogram analysis, we examined whether this relationship is strictly a result of domain knowledge mediating the proportion of facts each participant knew; we found that it is not. Participants read sentences ending in a contextually supported word, reporting online whether they had known each fact. Participants' reports correlated with HP domain knowledge and reliably modulated event-related brain potentials to sentence-final words within 250 ms. Critically, domain knowledge had a dissociable influence in the same time window for endings that participants reported not having known and/or that were less likely to be known or remembered across participants. We hypothesize that knowledge impacts written word processing primarily by affecting the neural processes of (implicit) retrieval from long-term memory (LTM): greater knowledge eases otherwise difficult retrieval processes.
Language comprehension requires rapid and flexible access to information stored in long-term memory, likely influenced by activation of rich world knowledge and by brain systems that support the processing of sensorimotor content. We hypothesized that while literal language about biological motion might rely on neurocognitive representations of biological motion specific to the details of the actions described, metaphors rely on more generic representations of motion. In a priming and self-paced reading paradigm, participants saw video clips or images of (a) an intact point-light walker or (b) a scrambled control, and read sentences containing literal or metaphorical uses of biological motion verbs either closely or distantly related to the depicted action (walking). We predicted that reading times for literal and metaphorical sentences would show differential sensitivity to the match between the verb and the visual prime. In Experiment 1, we observed interactions between prime type (walker or scrambled video) and verb type (close or distant match) for both literal and metaphorical sentences, but with strikingly different patterns. We found no difference in the verb region of literal sentences for Close-Match verbs after walker or scrambled motion primes, but Distant-Match verbs were read more quickly following walker primes. For metaphorical sentences, the results were roughly reversed, with Distant-Match verbs being read more slowly following a walker compared to scrambled motion. In Experiment 2, we observed a similar pattern following still image primes, though the critical interactions emerged later in the sentence. We interpret these findings as evidence for shared recruitment of cognitive and neural mechanisms for processing visual and verbal biological motion information. Metaphorical language using biological motion verbs may recruit neurocognitive mechanisms similar to those used in processing literal language, but in a less specific way.
Distributional semantic models (DSMs) are a primary method for distilling semantic information from corpora. However, a key question remains: What types of semantic relations among words do DSMs detect? Prior work typically has addressed this question using limited human data restricted to semantic similarity and/or general semantic relatedness. We tested eight DSMs that are popular in current cognitive and psycholinguistic research (positive pointwise mutual information [PPMI]; global vectors [GloVe]; and three variations each of Skip-gram and continuous bag of words [CBOW], using word, context, and mean embeddings) on a theoretically motivated, rich set of semantic relations involving words from multiple syntactic classes and spanning the abstract–concrete continuum (19 sets of ratings). We found that the DSMs are best at capturing overall semantic similarity, and that they can also capture verb–noun thematic role relations and noun–noun event-based relations that play important roles in sentence comprehension. Interestingly, Skip-gram and CBOW performed best at capturing similarity, whereas GloVe dominated the thematic role and event-based relations. We discuss the theoretical and practical implications of our results, make recommendations for users of these models, and demonstrate significant differences in model performance on event-based relations.
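To make the count-based end of this model family concrete, the simplest DSM tested above, PPMI, can be sketched in a few lines of NumPy. This is an illustrative toy example, not the study's implementation: the words and co-occurrence counts are invented, and real models are trained on corpus-scale counts.

```python
import numpy as np

def ppmi_matrix(cooc):
    """Convert a word-by-context co-occurrence count matrix into
    positive pointwise mutual information (PPMI) scores:
    PPMI(w, c) = max(0, log2( p(w, c) / (p(w) * p(c)) ))."""
    cooc = np.asarray(cooc, dtype=float)
    p_wc = cooc / cooc.sum()                  # joint probabilities
    p_w = p_wc.sum(axis=1, keepdims=True)     # word marginals
    p_c = p_wc.sum(axis=0, keepdims=True)     # context marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log2(p_wc / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0              # zero counts -> 0
    return np.maximum(pmi, 0.0)               # clip negatives to 0

def cosine(u, v):
    """Cosine similarity between two embedding rows."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy counts: rows = words, columns = contexts.
counts = np.array([
    [10, 0, 2],   # "nurse"
    [ 8, 1, 3],   # "doctor"
    [ 0, 9, 1],   # "banana"
])
M = ppmi_matrix(counts)
# Words with similar context profiles get higher cosine similarity:
sim_nurse_doctor = cosine(M[0], M[1])
sim_nurse_banana = cosine(M[0], M[2])
```

Here "nurse" and "doctor" share a dominant context and so end up with the higher similarity; prediction-based models like Skip-gram and CBOW learn dense vectors with a neural objective instead, but are evaluated the same way, by comparing word vectors.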