In a seminal study, Bott & Noveck (2004) found that computing the scalar inference from 'some' to 'not all' was associated with increased sentence verification times, suggesting a processing cost. Recently, van Tiel and colleagues (2019b) hypothesised that the presence of this processing cost critically depends on the polarity of the scalar word. We comprehensively evaluated this polarity hypothesis using a sentence-picture verification task in which we tested the processing of 16 types of adjectival scalar inferences. We developed a quantitative measure of adjectival polarity that combines insights from linguistics and psychology. In line with the polarity hypothesis, our measure of polarity reliably predicted the presence or absence of a processing cost (i.e., an increase in sentence verification times). We conclude that the alleged processing cost for scalar inferencing in verification tasks is not due to the process of drawing a scalar inference, but rather to the cognitive difficulty of verifying negative information.
Scalar inferences occur when a weaker statement like It’s warm is used when a stronger one like It’s hot could have been used instead, resulting in the inference that whoever produced the weaker statement believes that the stronger statement does not hold. The rate at which this inference is drawn varies across scalar words, a result termed ‘scalar diversity’. Here, we study scalar diversity in adjectival scalar words from a usage-based perspective. We introduce novel operationalisations of several previously observed predictors of scalar diversity using computational tools based on usage data, allowing us to move away from existing judgment-based methods. In addition, we show in two experiments that, above and beyond these previously observed predictors, scalar diversity is predicted in part by the relevance of the scalar inference at hand. We introduce a corpus-based measure of relevance based on the idea that scalar inferences that are more relevant are more likely to occur in scalar constructions that draw an explicit contrast between scalar words (e.g., It’s warm but not hot). We conclude that usage has an important role to play in the establishment of common ground, a requirement for pragmatic inferencing.
In the present review paper by members of the collaborative research center "Register: Language Users' Knowledge of Situational-Functional Variation" (CRC 1412), we assess the pervasiveness of register phenomena across different time periods, languages, modalities, and cultures. We define "register" as recurring variation in language use that depends on the function of language and on the social situation. Informed by rich data, we aim to better understand and model the knowledge involved in situation- and function-based use of language register. To achieve this goal, we are using complementary methods and measures. In the review, we start by clarifying the concept of "register", reviewing the state of the art, and setting out our methods and modeling goals. Against this background, we discuss three key challenges, two at the methodological level and one at the theoretical level: (1) To better uncover registers in text and spoken corpora, we propose changes to established analytical approaches. (2) To tease apart between-subject variability from the linguistic variability at issue (intra-individual situation-based register variability), we use within-subject designs and model individuals' social, language, and educational backgrounds. (3) We highlight a gap in cognitive modeling, viz. modeling the mental representations of register (processing), and present our first attempts at filling this gap. We argue that the targeted use of multiple complementary methods and measures supports investigating the pervasiveness of register phenomena and yields comprehensive insights into the cross-methodological robustness of register-related language variability. These comprehensive insights in turn provide a solid foundation for associated cognitive modeling.