Abstract. Identifying emergent leaders in organizations is a key issue in organizational behavior research, and a new problem in social computing. This paper presents an analysis of how an emergent leader is perceived in newly formed, small groups, and then tackles the task of automatically inferring emergent leaders using a variety of communicative nonverbal cues extracted from audio and video channels. The inference task uses rule-based and collective classification approaches with a combination of acoustic and visual features extracted from a new small group corpus specifically collected to analyze the emergent leadership phenomenon. Our results show that the emergent leader is perceived by his/her peers as an active and dominant person; that visual information augments acoustic information; and that adding relational information to the nonverbal cues improves the inference of each participant's leadership ranking in the group.
In this paper we present a multimodal analysis of emergent leadership in small groups using audio-visual features and discuss our experience in designing and collecting a data corpus for this purpose. The ELEA AudioVisual Synchronized corpus (ELEA AVS) was collected using a lightweight portable setup and contains recordings of small group meetings. The participants in each group performed the winter survival task and filled in questionnaires related to personality and several social concepts such as leadership and dominance. In addition, the corpus includes annotations of participants' performance in the survival task, as well as annotations of social concepts from external viewers. Based on this corpus, we demonstrate the feasibility of predicting the emergent leader in small groups using automatically extracted audio and visual features based on speaking turns and visual attention, and we focus specifically on multimodal features that make use of the "looking at participants while speaking" and "being looked at while not speaking" measures. Our findings indicate that emergent leadership is related, but not equivalent, to dominance, and that while multimodal features bring a moderate degree of effectiveness in inferring the leader, much simpler features extracted from the audio channel give better performance.
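The simple audio-channel inference mentioned above can be illustrated with a minimal sketch: rank participants by accumulated speaking time and predict the most talkative one as the emergent leader. The function name, the dictionary layout, and the toy binary speaking-status arrays are all hypothetical, chosen only to make the idea concrete; they are not the paper's actual feature extraction pipeline.

```python
import numpy as np

def predict_leader(speaking_status):
    """Predict the emergent leader as the participant with the most
    total speaking time.

    speaking_status: dict mapping participant id -> binary array
    (1 = speaking in that audio frame, 0 = silent).
    """
    totals = {pid: int(np.sum(s)) for pid, s in speaking_status.items()}
    # The participant who accumulates the most speaking frames is
    # predicted as the emergent leader.
    return max(totals, key=totals.get)

# Toy example: three participants observed over ten audio frames.
status = {
    "A": np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 0]),
    "B": np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 1]),
    "C": np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
}
print(predict_leader(status))  # → "A"
```

In practice, the binary speaking status would come from a voice activity detector applied to each participant's audio channel rather than from hand-written arrays.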
This paper addresses the task of mining typical behavioral patterns from small group face-to-face interactions and linking them to social-psychological group variables. Towards this goal, we define group speaking and looking cues by aggregating automatically extracted cues at the individual and dyadic levels. Then, we define a bag of nonverbal patterns (Bag-of-NVPs) to discretize the group cues. The topics learnt using the Latent Dirichlet Allocation (LDA) topic model are then interpreted by studying their correlations with group variables such as group composition, group interpersonal perception, and group performance. Our results show that both group behavior cues and topics have significant correlations with (and predictive information for) all the above variables. For our study, we use interactions with unacquainted members, i.e., newly formed groups.
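The Bag-of-NVPs plus LDA pipeline described above can be sketched in a few lines: treat each meeting as a "document" whose "words" are counts of discretized nonverbal patterns, and learn a low-dimensional topic mixture per meeting. The synthetic count matrix and the choice of three topics below are purely illustrative assumptions, not the corpus or settings used in the study.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Each row is one meeting; each column counts how often a discretized
# nonverbal pattern (e.g. "long overlapping speech", "mutual gaze")
# occurred in that meeting -- the Bag-of-NVPs representation.
rng = np.random.default_rng(0)
bag_of_nvps = rng.integers(0, 10, size=(20, 12))  # 20 meetings, 12 patterns

# Learn 3 latent "behavior topics" over the pattern vocabulary.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_mix = lda.fit_transform(bag_of_nvps)

# topic_mix holds per-meeting topic proportions (each row sums to 1);
# these proportions are what get correlated with group variables such
# as composition, interpersonal perception, and performance.
print(topic_mix.shape)  # → (20, 3)
```

The interpretation step in the paper then inspects which nonverbal patterns dominate each learned topic and correlates the per-meeting topic proportions with the group variables.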
Abstract. Current ubiquitous computing applications for smart homes aim to enhance people's daily living across the age span. Among the target groups, the elderly are a population eager for "choices for living arrangements" that would allow them to continue living in their homes while still providing the health care they need. Given the growing elderly population, there is a need for statistical models able to capture the recurring patterns of daily life and to reason based on this information. We present an analysis of real-life sensor data collected from 40 different households of elderly people, using motion, door, and pressure sensors. Our objective is to automatically observe and model the daily behavior of the elderly and detect anomalies that could occur in the sensor data. For this purpose, we first introduce an abstraction layer to create a common ground for home sensor configurations. Next, we build a probabilistic spatio-temporal model to summarize daily behavior. Anomalies are then defined as significant changes from the learned behavioral model and detected using a cross-entropy measure. We have compared the detected anomalies with manually collected annotations, and the results show that the presented approach is able to detect significant behavioral changes of the elderly.
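The cross-entropy anomaly criterion described above can be sketched as follows: summarize the learned model and each observed day as probability distributions over discretized (room, time-of-day) bins, and flag a day whose cross-entropy against the model exceeds a threshold. The four-bin distributions and the threshold value below are hypothetical stand-ins for the paper's actual spatio-temporal model.

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    # H(p, q) = -sum_x p(x) * log q(x); it grows large when the day's
    # behavior p puts mass where the learned model q expects little
    # activity (eps guards against log of zero).
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return -np.sum(p * np.log(q + eps))

# Learned model: probability of sensor activity per (room x time) bin,
# flattened into one vector. Values are illustrative only.
model = np.array([0.50, 0.30, 0.15, 0.05])

normal_day  = np.array([0.48, 0.32, 0.15, 0.05])  # matches the model
unusual_day = np.array([0.05, 0.05, 0.10, 0.80])  # mass on a rare bin

threshold = 2.0  # hypothetical, tuned against annotated days
for name, day in [("normal", normal_day), ("unusual", unusual_day)]:
    h = cross_entropy(day, model)
    print(name, round(h, 2), "anomaly" if h > threshold else "ok")
```

A day resembling the learned routine scores low, while a day concentrating activity in normally quiet bins (e.g. night-time movement) scores high and is flagged.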
This paper first presents an analysis of how an emergent leader is perceived in newly formed small groups, and second explores correlations between the perception of leadership and automatically extracted nonverbal communicative cues. We hypothesize that the difference in individual nonverbal features between emergent leaders and non-emergent leaders is significant and measurable using speech activity. Our results on a new interaction corpus show that such an approach is promising, identifying the emergent leader with an accuracy of up to 80%.
Conversational social video is becoming a worldwide trend. Video communication allows a more natural interaction when sharing personal news, ideas, and opinions, by transmitting both verbal content and nonverbal behavior. However, the automatic analysis of natural mood is challenging, since it is displayed in parallel via voice, face, and body. This paper presents an automatic approach to infer 11 natural mood categories in conversational social video using single and multimodal nonverbal cues extracted from video blogs (vlogs) on YouTube. The mood labels used in our work were collected via crowdsourcing. Our approach is promising for several of the studied mood categories. Our study demonstrates that although multimodal features perform better than single-channel features, not all the available channels are always needed to accurately discriminate mood in videos.
This paper addresses two problems: first, classifying remote versus collocated small-group working meetings, and second, identifying the remote participant, using nonverbal behavioral cues in both cases. Such classifiers can be used to improve the design of remote collaboration technologies, making remote interactions as effective as collocated ones. We hypothesize that the difference in dynamics between collocated and remote meetings is significant and measurable using speech-activity-based nonverbal cues. Our results on a publicly available dataset, the Augmented Multi-Party Interaction with Distance Access (AMIDA) corpus, show that such an approach is promising, although more controlled settings and more data are needed to explore the addressed problems further.
Leaders stand out for what they say and how they say it. This work describes the impact of the language style of emergent leaders in small group discussions, based on 7 hours of audio from English spoken discussions recorded with a ubiquitous platform. For the language style analysis, word categories are extracted from manual transcriptions of the discussions as well as from automatically detected keywords. The most relevant word categories are then used to predict the emergent leader in each group. Our findings reveal that non-privacy-sensitive word categories such as word count, conjunctions, and assent are good predictors of emergent leadership. The emergent leader can be correctly inferred in a fully automatic approach with up to 82% accuracy using categories derived from keywords, and up to 86% using categories derived from full manual transcriptions.
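The word-category extraction step can be sketched with a toy example: count per-speaker word totals and occurrences of category lexicon entries over a transcript. The two mini-lexicons and the transcript below are hypothetical; the study relies on a standard word-category lexicon, not on these hand-picked word sets.

```python
# Hypothetical mini-lexicons for two word categories; a real system
# would use a full standard lexicon rather than these tiny sets.
CATEGORIES = {
    "conjunctions": {"and", "but", "or", "so", "because"},
    "assent": {"yes", "yeah", "ok", "okay", "agree"},
}

def category_features(transcript):
    """Per-speaker word count and category counts from a transcript
    given as a list of (speaker, utterance) pairs."""
    feats = {}
    for speaker, utterance in transcript:
        words = utterance.lower().split()
        f = feats.setdefault(speaker,
                             {"words": 0, **{c: 0 for c in CATEGORIES}})
        f["words"] += len(words)
        for cat, lexicon in CATEGORIES.items():
            f[cat] += sum(w.strip(".,!?") in lexicon for w in words)
    return feats

transcript = [
    ("A", "Yes, I agree, and we should rank the lighter first because of weight."),
    ("B", "Okay."),
    ("A", "But the matches matter too, so keep them high."),
]
feats = category_features(transcript)
# Speaker A talks more and uses more conjunctions -- the kind of
# signal reported as predictive of emergent leadership.
print(feats["A"]["words"] > feats["B"]["words"])  # → True
```

Per-speaker category counts like these, normalized and ranked within each group, are the kind of features a leader-prediction classifier would consume.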