Chronic (persistent) pain (CP) affects 1 in 10 adults; clinical resources are insufficient, and anxiety about activity restricts lives. Technological aids monitor activity but lack the psychological support people need. This article proposes a new sonification framework, Go-with-the-Flow, informed by physiotherapists and people with CP. The framework supports the articulation of user-defined sonified exercise spaces (SESs), tailored to psychological needs and physical capabilities, that enhance body and movement awareness to rebuild confidence in physical activity. A smartphone-based wearable device and a Kinect-based device were designed on the basis of the framework to track movement and breathing and sonify them during physical activity. In controlled studies conducted to evaluate the sonification strategies, people with CP reported increased performance, motivation, awareness of movement, and relaxation with sound feedback. Home studies, a focus group, and a survey of CP patients conducted at the end of a hospital pain management session provided an in-depth understanding of how different aspects of the SESs and their calibration can facilitate self-directed rehabilitation, and of how the wearable version of the device can facilitate the transfer of gains from exercise to feared or demanding activities in real life. We conclude by discussing the implications of our findings for the design of technology for physical rehabilitation.
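To make the SES idea concrete, the following is a minimal, hypothetical sketch, not the paper's implementation, of one way a phone worn on the lower back could drive pitch feedback inside a user-calibrated exercise space. The function names, the tilt estimate, and the linear angle-to-MIDI mapping are all illustrative assumptions.

```python
import math

def tilt_angle_deg(ax: float, ay: float, az: float) -> float:
    """Estimate forward-bend angle (degrees) from one accelerometer sample,
    assuming the phone lies flat against the lower back at rest."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

def sonify(angle_deg: float, start_deg: float, target_deg: float,
           low_note: int = 60, high_note: int = 72) -> int:
    """Map progress through the calibrated exercise space [start, target]
    to a MIDI note; movement beyond the target holds the top note rather
    than signaling an error."""
    span = max(target_deg - start_deg, 1e-6)
    progress = min(max((angle_deg - start_deg) / span, 0.0), 1.0)
    return round(low_note + progress * (high_note - low_note))

# Roughly halfway through a 0-60 degree forward reach -> note 66
print(sonify(tilt_angle_deg(0.5, 0.0, 0.87), start_deg=0.0, target_deg=60.0))
```

Clamping at the user-set target keeps the feedback rewarding rather than punitive when a movement overshoots, in line with the framework's emphasis on tailoring the space to psychological needs as well as physical capability.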
This paper presents a multimodal system for real-time analysis of nonverbal affective social interaction in small groups of users. The focus is on two major aspects of affective social interaction: the synchronization of affective behavior within a small group and the emergence of functional roles, such as leadership. A small group of users is modeled as a complex system consisting of single interacting components that can self-organize and exhibit global properties. Techniques are developed for computing quantitative measures of both synchronization and leadership. Music is selected as the experimental test-bed since it is a clear example of an interactive and social activity in which affective nonverbal communication plays a fundamental role. The system has been implemented as software modules for the EyesWeb XMI platform (www.eyesweb.org). It has been used in experimental frameworks (a violin duo and a string quartet) and in real-world, user-centric applications for active music listening. Further application scenarios include entertainment, edutainment, therapy and rehabilitation, cultural heritage, and museum applications. This research was carried out in the framework of the EU-ICT FP7 Project SAME (www.sameproject.eu).
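As one illustrative way to quantify synchronization from movement data (a plausible stand-in, not the authors' EyesWeb modules), a maximum lagged cross-correlation between two players' movement signals measures how strongly and with what delay they co-vary; the sign of the best lag then gives a simple leadership cue.

```python
import numpy as np

def max_lagged_correlation(x: np.ndarray, y: np.ndarray,
                           max_lag: int) -> tuple[float, int]:
    """Strongest Pearson correlation between two movement signals over
    lags in [-max_lag, max_lag], with the lag achieving it.
    A negative best lag means x leads y by |lag| samples."""
    best_r, best_lag = 0.0, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        r = np.corrcoef(a, b)[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_lag = r, lag
    return best_r, best_lag

# Synthetic check: the follower copies the leader with a 5-sample delay,
# so the best lag should come out near -5 (leader leads).
rng = np.random.default_rng(0)
leader = rng.standard_normal(500)
follower = np.roll(leader, 5) + 0.1 * rng.standard_normal(500)
print(max_lagged_correlation(leader, follower, max_lag=10))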
The task of predicting dialog acts (DAs) from conversational dialog is a key component in the development of conversational agents. Accurately predicting DAs requires precise modeling of both the conversation and the global tag dependencies. We leverage seq2seq approaches widely adopted in Neural Machine Translation (NMT) to improve the modeling of tag sequentiality. Seq2seq models are known to learn complex global dependencies, whereas currently proposed approaches using linear-chain conditional random fields (CRFs) model only local tag dependencies. In this work, we introduce a seq2seq model tailored for DA classification that uses a hierarchical encoder, a novel guided attention mechanism, and beam search applied to both training and inference. Compared to the state of the art, our model requires no handcrafted features and is trained end-to-end. Furthermore, the proposed approach achieves an unmatched accuracy score of 85% on SwDA and a state-of-the-art accuracy score of 91.6% on MRDA.
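A minimal sketch of the hierarchical-encoder component, assuming a PyTorch implementation with illustrative layer sizes; the guided attention mechanism and the beam-search decoder described in the paper are omitted here.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Two-level encoder: a word-level GRU builds one vector per utterance,
    then a conversation-level GRU contextualizes utterances across turns.
    Sizes and single-layer GRUs are illustrative choices, not the paper's
    exact configuration."""
    def __init__(self, vocab_size: int, emb_dim: int = 128, hid_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.conv_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n_utterances, n_words)
        b, u, w = tokens.shape
        emb = self.embed(tokens.view(b * u, w))   # (b*u, w, emb_dim)
        _, h = self.word_rnn(emb)                 # h: (1, b*u, hid_dim)
        utt_vecs = h.squeeze(0).view(b, u, -1)    # one vector per utterance
        ctx, _ = self.conv_rnn(utt_vecs)          # contextualized across turns
        return ctx  # (b, u, hid_dim), to be consumed by a tag decoder

enc = HierarchicalEncoder(vocab_size=10_000)
out = enc(torch.randint(1, 10_000, (2, 5, 12)))  # 2 dialogs, 5 utterances, 12 tokens
print(out.shape)  # torch.Size([2, 5, 256])
```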
EyesWeb XMI (eXtended Multimodal Interaction) is the new version of the well-known EyesWeb platform. Its focus is on multimodality: the central design goal of this release has been to improve the platform's ability to process and correlate multiple streams of data. It has been used extensively to build a set of interactive systems for performing-arts applications at Festival della Scienza 2006, Genoa, Italy. The purpose of this paper is to describe the installations developed as well as the new EyesWeb features that supported their development.
Keywords: EyesWeb, multimodal interactive systems, performing arts.
This paper presents GAME-ON (Group Analysis of Multimodal Expression of cohesiON), a multimodal dataset specifically designed for studying group cohesion and for explicitly controlling its variation over time. Cohesion is addressed here according to Severt's multidimensional integrative theoretical framework; more specifically, GAME-ON focuses on the social and task dimensions of the instrumental function of cohesion. The dataset consists of over 11 hours of synchronized multimodal recordings (audio, video, and motion-capture data) of 17 small groups of three people playing a social game, namely an escape game. The game consists of several tasks designed to manipulate the variation of cohesion over time. GAME-ON includes annotations consisting of self-assessments of cohesion and of other constructs such as emotions, leadership, and warmth and competence. A first statistical analysis of these annotations shows that we successfully manipulated all the intended relative variations of cohesion between tasks, except for one task in which cohesion varied significantly in the direction opposite to the one expected. The dataset will be made publicly available for research purposes. The motivation for our work is to provide the scientific community with an asset for studying cohesion and other group phenomena.
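To illustrate the kind of between-task comparison such an annotation analysis involves, the sketch below runs a paired test over per-group cohesion self-reports; the values are invented for demonstration and are not GAME-ON data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-group mean cohesion self-reports (scale 1-7) after two
# consecutive tasks; one row per group, paired by group.
task_a = np.array([4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 3.9, 4.7, 5.1])
task_b = np.array([4.8, 5.5, 4.3, 5.1, 5.6, 4.9, 5.4, 4.2, 5.3, 5.4])

# Paired t-test on the within-group change in reported cohesion.
t, p = stats.ttest_rel(task_b, task_a)
print(f"mean change = {np.mean(task_b - task_a):+.2f}, t = {t:.2f}, p = {p:.4f}")
```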