In this paper, we provide an in-depth cognitive analysis of a specific humor strategy we coin “trumping”, a multi-agent language game that revolves around the subversion of the linguistic forms of exchange. In particular, we illustrate how, in a conversational setting, agents can “reflect” and “distort” the linguistic-conceptual construal of each other's utterances. Because this reflection or parallelism in the trumping game can be situated on different levels of linguistic organization, a multi-dimensional semantic-pragmatic account is proposed. Using insights from cognitive linguistics, we show that adversarial agents exploit the conceptual mechanisms underlying the opponent's utterances in order to turn the tables in the humor game. In doing so, an agent can trump an adversary by demonstrating a “hyper-understanding” of the lexico-conceptual meaning of an opponent's utterance. This subversion of construal operations like metaphor, metonymy and salience leads to a sudden manipulation of the discourse space that has been set up in the previous utterance(s) (Langacker 2001). In general, by providing an analysis in terms of basic principles of semantic construal, we argue that a cognitive linguistic treatment of humor has an ecological validity that is lacking in most linguistic humor research.
In this paper, we take a Construction Grammar approach to Du Bois' concept of resonance activation. We suggest that the structural mapping relations between juxtaposed utterances in discourse, described in terms of diagraphs in dialogic syntax, can acquire the status of ad hoc constructions or locally entrenched form-meaning pairings within the boundaries of an ongoing conversation. We argue that the local emergence of these ad hoc constructions involves the same cognitive mechanism described for the abstraction of conventional grammatical constructions from usage patterns. Accordingly, we propose to broaden the scope of Construction Grammar to include not only symbolic units that are conventionalized in a larger speech community, but also a dimension of online syntax, i.e. the emergence of grammatical patterns at the micro-level of a single conversation. Drawing on dialogic data from political talk shows and parliamentary debates, we illustrate the spectrum of these ad hoc constructional routines and show their local productivity, which we take as an indication of their (micro-)entrenchment within a given conversation.
In this paper we present the outlines of a new project that aims at developing and implementing effective new methods for analyzing gaze data collected with mobile eye-tracking devices. More specifically, we argue for the integration of object recognition algorithms from vision engineering, such as invariant region matching techniques, into gaze analysis software. We present a series of arguments for why an object-based approach may provide a significant surplus, in terms of analytical precision, flexibility, additional application areas and cost efficiency, over existing systems that use predefined areas of analysis. In order to test the actual analytical power of object recognition algorithms for the analysis of gaze data recorded in the wild, we develop a series of test cases in different real-world situations, including shopping behavior, navigation, and the handling and usability of mobile systems. By setting up these case studies in close collaboration with key players in the relevant fields (retailers, signage consultants, market and user-experience research, and developers of eye-tracking hardware and software), we will be able to sketch an accurate picture of the pros and cons of the proposed method in comparison to current analytical practice.
In this paper, we present an embodiment perspective on viewpoint by exploring the role of eye gaze in face-to-face conversation, in relation to and interaction with other expressive modalities. More specifically, we look into gaze patterns, as well as gaze synchronization with speech, as instruments in the negotiation of participant roles in interaction. In order to obtain fine-grained information on the different modalities under scrutiny, we used the InSight Interaction Corpus (Brône, Geert & Bert Oben. 2015. Insight Interaction: A multimodal and multifocal dialogue corpus. Language Resources and Evaluation 49, 195–214.). This multimodal video corpus consists of two- and three-party interactions (in Dutch), with head-mounted scene cameras and eye-trackers tracking all participants’ visual behavior, providing a unique ‘speaker-internal’ perspective on the conversation. The analysis of interactional sequences from the corpus (dyads and triads) reveals specific patterns of gaze distribution related to the temporal organization of viewpoint in dialogue. Different dialogue acts typically display specific gaze events at crucial points in time, as, e.g., in the case of brief gaze aversion associated with turn-holding, and shared gaze between interlocutors at the critical point of turn-taking. In addition, the data show a strong correlation and temporal synchronization between eye gaze and speech in the realization of specific dialogue acts, as shown by means of a series of cross-recurrence analyses for specific turn-holding mechanisms (e.g., verbal fillers co-occurring with brief moments of gaze aversion).