In this paper we present the outline of a new project that aims to develop and implement effective new methods for analyzing gaze data collected with mobile eye-tracking devices. More specifically, we argue for the integration of object recognition algorithms from vision engineering, such as invariant region matching techniques, into gaze analysis software. We present a series of arguments for why an object-based approach may provide significant added value, in terms of analytical precision, flexibility, additional application areas, and cost efficiency, over existing systems that use predefined areas of analysis. In order to test the actual analytical power of object recognition algorithms for the analysis of gaze data recorded in the wild, we develop a series of test cases in different real-world situations, including shopping behavior, navigation, and the handling and usability of mobile systems. By setting up these case studies in close collaboration with key players in the relevant fields (retailers, signage consultants, market and user-experience researchers, and developers of eye-tracking hardware and software), we will be able to sketch an accurate picture of the pros and cons of the proposed method in comparison to current analytical practice.
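The core idea of the object-based approach can be illustrated with a minimal sketch: match invariant local features between a reference image of a target object and a scene-camera frame, project the object's region into the frame, and test whether recorded gaze points fall inside it. The sketch below is not the project's implementation; it uses toy descriptor tuples and a simple translation estimate where a real pipeline would use a detector such as SIFT or ORB and a full homography.

```python
# Hypothetical sketch: mapping gaze points to a known object by matching
# invariant local features between a reference image and a scene-camera frame.
# Descriptors are toy tuples; all names and data here are illustrative.

import math

def match_features(ref_feats, frame_feats, max_dist=0.5):
    """Greedy nearest-neighbour matching on descriptor distance."""
    matches = []
    for rx, ry, rdesc in ref_feats:
        best = None
        for fx, fy, fdesc in frame_feats:
            d = math.dist(rdesc, fdesc)
            if d <= max_dist and (best is None or d < best[0]):
                best = (d, (rx, ry), (fx, fy))
        if best:
            matches.append((best[1], best[2]))
    return matches

def estimate_translation(matches):
    """Average displacement of matched keypoints (a stand-in for a full
    homography estimate)."""
    dx = sum(f[0] - r[0] for r, f in matches) / len(matches)
    dy = sum(f[1] - r[1] for r, f in matches) / len(matches)
    return dx, dy

def gaze_on_object(gaze, ref_bbox, matches):
    """True if the gaze point falls inside the object's bounding box after
    projecting it into the scene frame."""
    dx, dy = estimate_translation(matches)
    x0, y0, x1, y1 = ref_bbox
    gx, gy = gaze
    return x0 + dx <= gx <= x1 + dx and y0 + dy <= gy <= y1 + dy

# Toy data: the object appears shifted by (+50, +20) in the frame.
ref = [(10, 10, (0.1, 0.2)), (30, 15, (0.8, 0.1)), (20, 40, (0.4, 0.9))]
frame = [(60, 30, (0.1, 0.2)), (80, 35, (0.8, 0.1)), (70, 60, (0.4, 0.9))]
m = match_features(ref, frame)
print(gaze_on_object((75, 45), (5, 5, 35, 45), m))  # gaze lands on the object
```

Because the object is located per frame rather than per predefined screen region, the same logic extends to moving objects and moving observers, which is the flexibility argument made above.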
In this paper, we present an embodiment perspective on viewpoint by exploring the role of eye gaze in face-to-face conversation, in relation to and interaction with other expressive modalities. More specifically, we look into gaze patterns, as well as gaze synchronization with speech, as instruments in the negotiation of participant roles in interaction. In order to obtain fine-grained information on the different modalities under scrutiny, we used the InSight Interaction Corpus (Brône, Geert & Bert Oben. 2015. Insight Interaction: A multimodal and multifocal dialogue corpus. Language Resources and Evaluation 49, 195–214.). This multimodal video corpus consists of two- and three-party interactions (in Dutch), with head-mounted scene cameras and eye-trackers tracking all participants’ visual behavior, providing a unique ‘speaker-internal’ perspective on the conversation. The analysis of interactional sequences from the corpus (dyads and triads) reveals specific patterns of gaze distribution related to the temporal organization of viewpoint in dialogue. Different dialogue acts typically display specific gaze events at crucial points in time, as, e.g., in the case of brief gaze aversion associated with turn-holding, and shared gaze between interlocutors at the critical point of turn-taking. In addition, the data show a strong correlation and temporal synchronization between eye gaze and speech in the realization of specific dialogue acts, as shown by means of a series of cross-recurrence analyses for specific turn-holding mechanisms (e.g., verbal fillers co-occurring with brief moments of gaze aversion).
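The cross-recurrence analyses mentioned above can be illustrated with a simplified sketch: given two binary event series (e.g., one bin per 100 ms marking gaze aversion and verbal fillers, respectively), count co-occurrences of the two event types at each time lag. A peak near lag zero indicates temporal synchronization. This is an illustration of the idea only, not the authors' analysis pipeline, and the toy data are invented.

```python
# Hypothetical sketch of a diagonal cross-recurrence profile for two binary
# event series. Positive lag means events in `b` follow events in `a`.

def cross_recurrence_profile(a, b, max_lag):
    """Co-occurrence counts of events in `a` and `b` at lags -max_lag..+max_lag."""
    n = len(a)
    profile = {}
    for lag in range(-max_lag, max_lag + 1):
        count = 0
        for t in range(n):
            u = t + lag
            if 0 <= u < n and a[t] == 1 and b[u] == 1:
                count += 1
        profile[lag] = count
    return profile

# Toy series: fillers (b) tend to occur one bin after gaze aversion (a).
gaze_aversion = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
fillers       = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
profile = cross_recurrence_profile(gaze_aversion, fillers, max_lag=2)
print(profile)  # the profile peaks at lag +1
```

In this toy example the peak at a positive lag would correspond to fillers consistently following the onset of gaze aversion, the kind of speech–gaze synchronization pattern the corpus analysis reports for turn-holding.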
This contribution focuses on verbal amplifiers and comical hypotheticals in a corpus of face-to-face interactions. Both phenomena qualify as markers of a mental viewpoint expressing an (inter)subjective construal of a certain experience. Whereas amplifiers offer a straightforward view onto a speaker’s evaluative stance, comical hypotheticals provide an intersubjective account of a viewpoint construal: as part of their meaning, their use reveals a speaker’s assumption that the interlocutor is willing to allow or participate in a particular type of interactional humor. Our research interest in these phenomena concerns their occurrence as well as their interactional alignment in terms of mimicry behavior. In order to capture the impact of both linguistic and psychological variables in the use of these items, we adopt a differentiated methodological approach, which allows us to correlate findings from our corpus-linguistic analysis with the values obtained for interpersonal difference variables. As our data consist of male dyads whose participants had never met before the beginning of their conversation, we expected to witness an increase in both the use and alignment of these viewpoint phenomena along with the growing familiarity among the interlocutors. Indeed, the results show a clear increase in the use of both verbal amplifiers and comical hypotheticals over the course of the interaction, independently of the overall increase in communicativeness that we also observed. With respect to the alignment of the two viewpoint phenomena, however, our study reveals a differentiated result: participants aligned their use of verbal amplifiers with that of their partners over the course of the interaction, but they did not do so for comical hypotheticals. Yet, within the broader discussion of the experiment’s design, this unexpected result remains compatible with our general hypothesis.
Beyond its immediate scope, the set-up and results of our study connect well to recent research on empathy-related behavior in social neuroscience.