How do we understand the intentions of other people? There has been a longstanding controversy over whether it is possible to understand others’ intentions by simply observing their movements. Here, we show that indeed movement kinematics can form the basis for intention detection. By combining kinematics and psychophysical methods with classification and regression tree (CART) modeling, we found that observers utilized a subset of discriminant kinematic features over the total kinematic pattern in order to detect intention from observation of simple motor acts. Intention discriminability covaried with movement kinematics on a trial-by-trial basis, and was directly related to the expression of discriminative features in the observed movements. These findings demonstrate a definable and measurable relationship between the specific features of observed movements and the ability to discriminate intention, providing quantitative evidence of the significance of movement kinematics for anticipating others’ intentional actions.
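The CART approach mentioned above can be illustrated with a minimal sketch. This is not the study's actual pipeline: the kinematic feature names, the two intention classes, and all numbers are illustrative assumptions; only the general technique (fitting a binary decision tree to per-trial kinematic features and inspecting which features it relies on) follows the abstract.

```python
# Hypothetical sketch: discriminating intention from movement kinematics
# with a CART model. Features, classes, and values are illustrative
# assumptions, not the study's actual variables.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Simulated per-trial kinematic features for reach-to-grasp movements:
# wrist peak velocity, grip aperture, wrist height (arbitrary units).
n = 200
X_cooperate = rng.normal([1.0, 80.0, 12.0], [0.1, 5.0, 1.0], size=(n, 3))
X_compete = rng.normal([1.4, 70.0, 10.0], [0.1, 5.0, 1.0], size=(n, 3))
X = np.vstack([X_cooperate, X_compete])
y = np.array([0] * n + [1] * n)  # 0 = cooperative, 1 = competitive

# CART: a binary decision tree over the kinematic features.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Feature importances indicate which subset of kinematic features the
# tree exploits to discriminate intention, mirroring the idea that a
# few discriminant features carry most of the information.
print(tree.feature_importances_)
```

Inspecting `feature_importances_` (or the tree's split structure) is what lets CART identify a discriminant subset of features rather than treating the whole kinematic pattern as an opaque input.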
In this paper, we propose an effective method for emergent leader detection in meeting environments based on nonverbal visual features. Identifying emergent leaders is an important issue for organizations. It is also a well-investigated topic in social psychology, while remaining a relatively new problem in social signal processing (SSP). The effectiveness of nonverbal features has been shown by many previous SSP studies. In general, video-based nonverbal features have been less effective than audio-based features, although their fusion generally improved overall performance. However, in the absence of audio sensors, accurate detection of social interactions remains crucial. Motivated by this, we propose novel, automatically extracted nonverbal features to identify emergent leaders. The extracted nonverbal features are based on automatically estimated visual focus of attention, which is in turn derived from head pose. The proposed method and the defined features were evaluated using a new dataset, introduced for the first time in this paper together with its design, collection, and annotation. The effectiveness of the features and the method was also compared against many state-of-the-art features and methods.
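The core of the visual-focus-of-attention (VFOA) estimation described above can be sketched as a nearest-target assignment over head-pose angles. This is a minimal illustration, not the paper's actual estimator: the target positions, the angular threshold, and the single-yaw simplification are all assumptions.

```python
# Hypothetical sketch: estimating visual focus of attention (VFOA)
# from head pose. Target angles and the threshold are illustrative
# assumptions; real systems also use pitch and probabilistic models.

def estimate_vfoa(head_yaw_deg, target_yaws_deg, threshold_deg=20.0):
    """Assign the target whose direction is closest to the head yaw.

    Returns the index of the attended target, or None if no target
    lies within the angular threshold (e.g. looking away or down).
    """
    best_idx, best_diff = None, threshold_deg
    for i, t in enumerate(target_yaws_deg):
        # Smallest signed angular difference, wrapped to [-180, 180).
        diff = abs((head_yaw_deg - t + 180.0) % 360.0 - 180.0)
        if diff < best_diff:
            best_idx, best_diff = i, diff
    return best_idx

# Three meeting participants seated at -45, 0, and 45 degrees relative
# to the observed person; a head yaw of 40 degrees maps to the third.
print(estimate_vfoa(40.0, [-45.0, 0.0, 45.0]))  # → 2
```

Aggregating such per-frame VFOA assignments over a meeting (e.g. how often a person is looked at while speaking) yields the kind of attention-based nonverbal features the abstract describes.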
Social interactions are at the core of social life. However, humans selectively choose their exchange partners and do not engage in all available opportunities for social encounters. In this review, we argue that attentional systems play an important role in guiding the selection of social interactions. Supported by both classic and emerging literature, we identify and characterize the three core processes (perception, interpretation, and evaluation) that interact with attentional systems to modulate selective responses to social environments. Perceptual processes facilitate attentional prioritization of social cues. Interpretative processes link attention with understanding of cues' social meanings and agents' mental states. Evaluative processes determine the perceived value of the source of social information. The interplay between attention and these three routes of processing places attention in a powerful role to manage the selection of the vast amount of social information that individuals encounter on a daily basis and, in turn, gate the selection of social interactions.
The ability to attend to someone else's gaze is thought to represent one of the essential building blocks of the human sociocognitive system. This behavior, termed social attention, has traditionally been assessed using laboratory procedures in which participants' response time and/or accuracy performance indexes attentional function. Recently, a parallel body of emerging research has started to examine social attention during real life social interactions using naturalistic and observational methodologies. The main goal of the present work was to begin connecting these two lines of inquiry. To do so, here we operationalized, indexed, and measured the engagement and shifting components of social attention using covert and overt measures. These measures were obtained during an unconstrained real-world social interaction and during a typical laboratory social cuing task. Our results indicated reliable and overall similar indices of social attention engagement and shifting within each task. However, these measures did not relate across the two tasks. We discuss these results as potentially reflecting the differences in social attention mechanisms, the specificity of the cuing task's measurement, as well as possible general dissimilarities with respect to context, task goals, and/or social presence.
Recent findings suggest that in dyadic contexts observers rapidly and involuntarily process the visual perspective of others and cannot easily resist interference from their viewpoint. To investigate whether spontaneous perspective taking extends beyond dyads, we employed a novel visual perspective task that required participants to select between multiple competing perspectives. Participants were asked to judge their own perspective or the visual perspective of one or two avatars who either looked at the same objects or looked at different objects. Results indicate that when a single avatar was present in the room, participants processed the irrelevant perspective even when it interfered with their explicit judgments about the relevant perspective. A similar interference effect was observed when two avatars looked at the same discs, but not when they looked at different discs. Indeed, when the two avatars looked at different discs, the interference from the irrelevant perspective was significantly reduced. This is the first evidence that the number and orientation of agents modulate spontaneous perspective taking in non-dyadic contexts: observers may efficiently compute another's perspective, but in the presence of multiple individuals holding discrepant perspectives, they may not spontaneously track multiple viewpoints. These findings are discussed in relation to the hypothesis that perspective calculation occurs in an effortless and automatic manner.
These findings support the hypothesis that the ability to understand and ascribe mental states is impaired in AUD. Future studies should focus on the relevance of the different ToM impairments as predictors of treatment outcome in alcoholism, and on the possibility that rehabilitative interventions may be diversified according to ToM assessment.
We asked whether previous observations of group interactions modulate subsequent social attention episodes. Participants first completed a learning phase with 2 conditions. In the "leader" condition, 1 of 3 identities turned her gaze first, followed by the 2 other faces. In the "follower" condition, 1 of the identities turned her gaze after the 2 other faces had first shifted their gaze. Thus, participants observed that some individuals were consistently leaders and others followers of others' attention. In the test phase, the faces of leaders and followers were presented in a gaze cueing paradigm. Remarkably, the followers did not elicit gaze cueing. Our data demonstrate that individuals who do not guide group attention in exploring the environment are ineffective social attention directors in later encounters. Thus, the role played in previous group social attention interactions modulates the relative weight assigned to others' gaze: we ignore the gaze of group followers.