Two visual-world eye-movement experiments investigated the nature of syntactic priming during comprehension--specifically, whether the priming effects in ditransitive prepositional object (PO) and double object (DO) structures (e.g., "The wizard will send the poison to the prince/the prince the poison.") are due to anticipation of structural properties following the verb (send) in the target sentence or to anticipation of animacy properties of the first postverbal noun. Shortly after target verb onset, listeners looked at the recipient more (relative to the theme) following DO than PO primes, indicating that the structure of the prime affected listeners' eye gaze on the target scene. Crucially, this priming effect was the same irrespective of whether the postverbal nouns in the prime sentences did ("The monarch will send the painting to the president") or did not ("The monarch will send the envoy to the president") differ in animacy, suggesting that PO/DO priming in comprehension occurs because structural properties, rather than animacy features, are primed when people process the ditransitive target verb.
We report two visual-world eye-tracking experiments that investigated how, and with what time course, emotional information from a speaker's face affects younger (N = 32, mean age = 23) and older (N = 32, mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulation tested the prediction of socioemotional selectivity theory that older adults show a positivity effect. After viewing the emotional face of a speaker (happy or sad) on a computer display, participants were simultaneously presented with two pictures depicting opposite-valence events (positive and negative; IAPS database) while they listened to a sentence referring to one of the events. Participants fixated the pictures more during sentence processing when the speaker's face was (vs. was not) emotionally congruent with the sentence. This enhancement emerged from the early stages of referential disambiguation and was modulated by age: for older adults it was more pronounced with positive faces, and for younger adults with negative faces. These findings demonstrate for the first time that emotional facial expressions, like previously studied speaker cues such as eye gaze and gestures, are rapidly integrated into sentence processing. They also provide new evidence for positivity effects in older adults during situated sentence processing.
Eye-tracking findings suggest that people prefer to ground their spoken language comprehension in recently seen events rather than anticipating future events: when the verb in NP1-VERB-ADV-NP2 sentences was referentially ambiguous between a recently depicted clipart action and an equally plausible future one, listeners fixated the target of the recent action more often at the verb than the object that had not yet been acted upon. We examined whether this inspection preference generalizes to real-world events, and whether it is modulated by how often people see recent and future events acted out. In a first eye-tracking study, the experimenter performed an action (e.g., sugaring pancakes), and a spoken sentence then referred either to that action or to an equally plausible future action (e.g., sugaring strawberries). At the verb, people inspected the pancakes (the recent target) more often than the strawberries (the future target), replicating the recent-event preference with real-world actions. Adverb tense, indicating a future versus past event, had no effect on participants' visual attention. In a second study, we increased the frequency of future actions so that participants saw recent and future actions equally often. During the verb, people mostly inspected the recent action target, but subsequently they began to rely on tense, anticipating the future target more often for future-tense than past-tense adverbs. A corpus study showed that the verbs and adverbs indicating past versus future actions were equally frequent, suggesting that long-term frequency biases did not cause the recent-event preference. Thus, (a) recent real-world actions can rapidly influence comprehension (as indexed by eye gaze to objects), and (b) people prefer to first inspect a recent action target (vs. an object that will soon be acted upon), even when past and future actions occur with equal frequency. A simple frequency-of-experience account cannot accommodate these findings.
In five structural-priming experiments, we investigated lexical boost effects in the production of ditransitive sentences. Although the residual activation model of Pickering and Branigan (1998) suggests that a lexical boost should occur only with the repetition of a syntactic licensing head in ditransitive prepositional object (PO)/double object (DO) structures, Scheepers, Raffray, and Myachykov (2017) recently found that it also occurs with the repetition of nouns that are not syntactic heads. We manipulated the repetition of the subject (Experiments 1-3) and of the verb phrase (VP) internal arguments (i.e., either theme or recipient; Experiments 4-5) in PO/DO structures. In Experiment 2, the verb was also repeated between prime and target, while in the other experiments it was not. We employed three different tasks for eliciting the target: picture description via oral completion of a sentence fragment (Experiments 1, 2, and 4), oral completion of a sentence fragment with no visual context (Experiment 3), and oral production of a sentence from a given array of words with no visual context (Experiment 5). Priming occurred in all experiments and was stronger when the verb was repeated (Experiment 2) than when it was not (Experiment 1). However, none of the experiments showed evidence that priming was stronger when either the subject or one of the VP-internal arguments was repeated. These findings support the view that structural information is associated with syntactic heads (i.e., the verb), but not with nonheads such as the subject noun and the VP-internal arguments (Pickering & Branigan, 1998).
Empathy can be defined as the ability to perceive and understand others' emotional states. Neuropsychological evidence has shown that humans empathize with one another to different degrees depending on factors such as their mood, personality, and social relationships. Although artificial agents have been endowed with features such as affect, personality, and the ability to build social relationships, little attention has been devoted to the role of such features as factors that can modulate their empathic behavior. In this paper, we present and discuss the results of an empirical evaluation of a computational model of empathy that allows a virtual human to exhibit different degrees of empathy. Our model is grounded in psychological models of empathy and is applied and evaluated in the context of a conversational agent scenario.