How romantic partners interact with each other during a conflict influences how they feel at the end of the interaction and is predictive of whether the partners stay together in the long term. Hence, understanding the emotions of each partner is important. Yet current approaches rely on self-reports, which are burdensome and therefore limit how frequently such data can be collected. Automatic emotion prediction could address this challenge. Insights from psychology research indicate that partners' behaviors influence each other's emotions in conflict interactions; hence, the behavior of both partners could be considered to better predict each partner's emotion. However, it has not yet been investigated how doing so compares to using only each partner's own behavior in terms of emotion prediction performance. In this work, we used BERT to extract linguistic features (i.e., what partners said) and openSMILE to extract paralinguistic features (i.e., how they said it) from a dataset of 368 German-speaking Swiss couples (N = 736 individuals) who were videotaped during an 8-minute conflict interaction in the laboratory. Based on those features, we trained machine learning models to predict whether each partner felt positive or negative after the conflict interaction. Our results show that including the behavior of the other partner improves prediction performance. Furthermore, for men, how their female partner spoke was most important, whereas for women, what their male partner said was most important for better prediction performance. This work is a step towards automatically recognizing each partner's emotions based on the behavior of both, which would enable a better understanding of couples in research, therapy, and the real world.
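As a rough illustration of the pipeline described above (a sketch, not the authors' implementation), the code below mean-pools BERT embeddings of each partner's transcribed speech and compares a classifier trained on the target partner's own features with one that also includes the other partner's features. The model name, the classifier, and the variables own_texts, partner_texts, and labels are illustrative assumptions.

```python
# Minimal sketch: predicting one partner's post-conflict valence from BERT
# embeddings of BOTH partners' speech. Assumes own_texts, partner_texts
# (lists of transcripts) and binary labels are loaded elsewhere (hypothetical).
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
bert = AutoModel.from_pretrained("bert-base-german-cased")

def embed(texts):
    """Mean-pooled BERT embeddings, one 768-d vector per speaker."""
    vecs = []
    with torch.no_grad():
        for t in texts:
            enc = tokenizer(t, return_tensors="pt", truncation=True, max_length=512)
            hidden = bert(**enc).last_hidden_state  # (1, tokens, 768)
            vecs.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.vstack(vecs)

X_own = embed(own_texts)          # what the target partner said
X_partner = embed(partner_texts)  # what the other partner said
X_both = np.hstack([X_own, X_partner])

clf = LogisticRegression(max_iter=1000)
print("own only   :", cross_val_score(clf, X_own, labels, cv=5).mean())
print("own+partner:", cross_val_score(clf, X_both, labels, cv=5).mean())
```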
Extensive couples' literature shows that how couples feel after a conflict is predicted by certain emotional aspects of that conversation. Understanding the emotions of couples leads to a better understanding of partners' mental well-being and, consequently, their relationships. Hence, automatic emotion recognition among couples could potentially guide interventions to help couples improve their emotional well-being and their relationships. It has been shown that people's global emotional judgment after an experience is strongly influenced by the emotional extremes and the ending of that experience, known as the peak-end rule. In this work, we leveraged this theory and used machine learning to investigate which audio segments can be used to best predict the end-of-conversation emotions of couples. We used speech data collected from 101 Dutch-speaking couples in Belgium who engaged in 10-minute conversations in the lab. We extracted acoustic features from (1) the audio segments with the most extreme positive and negative ratings, and (2) the ending of the audio. We used transfer learning, in which we extracted these acoustic features with a pre-trained convolutional neural network (YAMNet). We then used these features to train machine learning models (support vector machines) to predict the end-of-conversation valence ratings (positive vs. negative) of each partner. The results of this work could inform how to best recognize the emotions of couples after conversation sessions and eventually lead to a better understanding of couples' relationships, either in therapy or in everyday life.
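The following sketch illustrates this transfer-learning step under stated assumptions: per-frame YAMNet embeddings are averaged over a selected audio segment (e.g., a peak or the ending) and fed to an SVM that predicts binary valence. The variables segments and labels and the SVM settings are hypothetical placeholders, not the study's actual configuration.

```python
# Minimal sketch: YAMNet segment embeddings + SVM for end-of-conversation valence.
# Assumes `segments` is a list of 16 kHz mono float32 numpy arrays and `labels`
# holds binary valence ratings (hypothetical variables).
import numpy as np
import tensorflow_hub as hub
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

yamnet = hub.load("https://tfhub.dev/google/yamnet/1")  # pre-trained on AudioSet

def segment_embedding(waveform_16k):
    """Average YAMNet's per-frame 1024-d embeddings over one audio segment."""
    _, embeddings, _ = yamnet(waveform_16k)
    return embeddings.numpy().mean(axis=0)

X = np.vstack([segment_embedding(seg) for seg in segments])
y = np.asarray(labels)  # 1 = positive, 0 = negative valence

svm = SVC(kernel="rbf", C=1.0)
print("CV accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```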
Many processes in psychology are complex, such as dyadic interactions between two interacting partners (e.g., patient-therapist, intimate relationship partners). Nevertheless, many basic questions about interactions are difficult to investigate because dyadic processes occur both within a person and between partners, are based on multimodal aspects of behavior, and unfold rapidly. Current analyses are mainly based on the behavioral coding method, whereby human coders annotate behavior based on a coding schema. But coding is labor-intensive, expensive, slow, focuses on few modalities, and produces sparse data, which has forced the field to use average behaviors across entire interactions, thereby undermining the ability to study processes on a fine-grained scale. Current approaches in psychology use LIWC for analyzing couples' interactions. However, advances in natural language processing such as BERT could enable the development of systems to potentially automate behavioral coding, which in turn could substantially improve psychological research. In this work, we train machine learning models to automatically predict positive and negative communication behavioral codes of 368 German-speaking Swiss couples during an 8-minute conflict interaction on a fine-grained scale (10-second sequences), using linguistic features and paralinguistic features derived with openSMILE. Our results show that both the simpler TF-IDF features and the more complex BERT features performed better than the LIWC-based approach.
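A minimal sketch of the simpler TF-IDF baseline mentioned above might look as follows; the vectorizer settings, the classifier, and the variables sequences (10-second transcript snippets) and codes (binary behavioral codes) are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch: TF-IDF features + logistic regression for classifying
# 10-second transcript sequences as positive vs. negative communication.
# `sequences` (list of transcript snippets) and `codes` (binary labels) are
# assumed to be loaded elsewhere (hypothetical variables).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # unigrams + bigrams
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(pipeline, sequences, codes, cv=5, scoring="balanced_accuracy")
print("balanced accuracy:", scores.mean())
```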
Mobile health (mHealth) interventions hold the promise of augmenting existing health promotion interventions. Older adults present unique challenges in advancing new models of health promotion using technology, including sensory limitations and less experience with mHealth, underscoring the need for specialized usability testing. We use an open-source mHealth device as a case example for its integration in a newly designed health services intervention. We performed a convergent, parallel mixed-methods study including semi-structured interviews, focus groups, and questionnaires, using purposive sampling of 29 older adults, 4 community leaders, and 7 clinicians in a rural setting. We transcribed the data, developed codes informed by thematic analysis using inductive and deductive methods, and assessed the quantitative data using descriptive statistics. Our results suggest that involving end-users in user-centered design of mHealth devices is important and that aesthetics are critically important. The prototype could potentially be feasibly integrated within health behavior interventions. Centralized dashboards were desired by all participants, and ecological momentary assessment could be an important part of monitoring. Concerns about mHealth, including the prototype device, centered on the device's accuracy, its intrusiveness in daily life, and privacy. Formative evaluations are critically important prior to deploying large-scale interventions.
Background: Type II diabetes mellitus (T2DM) is a common chronic disease. To manage blood glucose levels, patients need to follow medical recommendations for healthy eating, physical activity, and medication adherence in their everyday life. Illness management is mainly shared with partners and involves social support and common dyadic coping (CDC). Social support and CDC have been identified as having implications for people's health behavior and well-being. Visible support, however, may also be negatively related to people's well-being. Thus, the concept of invisible support was introduced. It is unknown which of these concepts (i.e., visible support, invisible support, and CDC) displays the most beneficial associations with health behavior and well-being when considered together in the context of illness management in couples' everyday life. Therefore, a novel ambulatory assessment application for the open-source behavioral intervention platform MobileCoach (AAMC) was developed. It uses objective sensor data in combination with self-reports in couples' everyday life. Objective: The aim of this paper is to describe the design of the Dyadic Management of Diabetes (DyMand) study, funded by the Swiss National Science Foundation (CR12I1_166348/1). The study was approved by the cantonal ethics committee of the Canton of Zurich, Switzerland (Req-2017_00430). Methods: This study follows an intensive longitudinal design with 2 phases of data collection. The first phase is a naturalistic observation phase of couples' conversations in combination with experience sampling in their daily lives, with plans to follow 180 T2DM patients and their partners using sensor data from smartwatches, mobile phones, and accelerometers for 7 consecutive days. The second phase is an observational study in the laboratory, where couples discuss topics related to their diabetes management. The second phase complements the first phase by focusing on the assessment of a full discussion about diabetes-related concerns. Participants are heterosexual couples with 1 partner having a diagnosis of T2DM. Results: The AAMC was designed and built by the end of 2018 and internally tested in March 2019. In May 2019, the enrollment of the pilot phase began. The data collection of the DyMand study will begin in September 2019, and analysis and presentation of results will be available in 2021. Conclusions: For further research and practice, it is crucial to identify the impact of social support and CDC on couples' dyadic management of T2DM and their well-being in daily life. Using AAMC will make a key contribution with regard to objective operationalizations of visible and invisible support, CDC, physical activity, and well-being. Findings will provide a sound basis for theory- and evidence-based development of dyadic interventions to change health behavior in the context of couples' dyadic illness management. Challenges to this multimodal sensor approach and its feasibility aspects are discussed. International Registered Report Identifier (IRRID): PRR1-10.2196/13685
Physical activity helps reduce the risk of cardiovascular disease, hypertension, and obesity. The ability to monitor a person's daily activity level can inform self-management of physical activity and related interventions. For older adults with obesity, regular physical activity is critical to reduce the risk of long-term disability. In this work, we present ActivityAware, an application on the Amulet wrist-worn device that measures daily activity levels (sedentary, moderate, and vigorous) of individuals, continuously and in real time. The app implements an activity-level detection model, continuously collects acceleration data on the Amulet, classifies the current activity level, updates the day's accumulated time spent at that activity level, logs the data for later analysis, and displays the results on the screen. We developed an activity-level detection model using a Support Vector Machine (SVM). We trained our classifiers using data from a user study in which subjects performed the following physical activities: sit, stand, lie down, walk, and run. With 10-fold cross-validation and leave-one-subject-out (LOSO) cross-validation, we obtained preliminary results that suggest accuracies up to 98% for n=14 subjects. Testing the ActivityAware app revealed a projected battery life of up to 4 weeks before needing to recharge. The results are promising, indicating that the app may be used for activity-level monitoring and, eventually, for the development of interventions that could improve the health of individuals.
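To make the classification step concrete, here is a hedged sketch of an SVM activity-level classifier evaluated with LOSO cross-validation; the window-level statistical features and the variables windows, labels, and subjects are illustrative assumptions rather than the on-device Amulet implementation.

```python
# Minimal sketch: SVM activity classification from 3-axis accelerometer windows,
# evaluated with leave-one-subject-out (LOSO) cross-validation.
# Assumes `windows` has shape (n_windows, n_samples, 3), `labels` holds activity
# classes, and `subjects` the subject id per window (hypothetical variables).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def window_features(w):
    """Simple per-axis mean/std plus overall signal-magnitude statistics."""
    mag = np.linalg.norm(w, axis=1)
    return np.concatenate([w.mean(axis=0), w.std(axis=0), [mag.mean(), mag.std()]])

X = np.vstack([window_features(w) for w in windows])
y = np.asarray(labels)

svm = SVC(kernel="rbf", C=10.0)
loso = LeaveOneGroupOut()
acc = cross_val_score(svm, X, y, cv=loso, groups=subjects).mean()
print("LOSO accuracy:", acc)
```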
Recognizing the emotions of the elderly is important as it could give insight into their mental health. Emotion recognition systems that work well for the elderly could be used to assess their emotions in places such as nursing homes and could inform the development of activities and interventions to improve their mental health. However, most existing emotion recognition systems are developed using data from younger adults. In this work, we train machine learning models to recognize the emotions of elderly individuals via a 3-class classification of valence and arousal as part of the INTERSPEECH 2020 Computational Paralinguistics Challenge (ComParE). We used speech data from 87 participants who gave spontaneous personal narratives. We leveraged a transfer learning approach in which we used pretrained CNN and BERT models to extract acoustic and linguistic features, respectively, and fed them into separate machine learning models. We also fused these two modalities in a multimodal approach. Our best model used a linguistic approach and outperformed the official competition baseline in unweighted average recall (UAR) by 8.8% for valence and by 3.2% for the mean of valence and arousal. We also showed that feature engineering is not necessary, as transfer learning without fine-tuning performs as well or better, and could be leveraged for the task of recognizing the emotions of elderly individuals. This work is a step towards better recognition of the emotions of the elderly, which could eventually inform the development of interventions to manage their mental health. CCS Concepts: • Applied computing → Psychology.
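One common way to realize the multimodal fusion mentioned above is late (decision-level) fusion; the sketch below averages class probabilities from separate classifiers trained on pre-extracted acoustic and linguistic features and reports unweighted average recall (UAR). The variables X_acoustic, X_linguistic, and y, and the choice of logistic regression, are assumptions for illustration and not necessarily the system submitted to the challenge.

```python
# Minimal sketch: late fusion of acoustic (CNN-derived) and linguistic
# (BERT-derived) features for 3-class emotion recognition, scored with UAR.
# Assumes feature matrices X_acoustic, X_linguistic and labels y exist
# (hypothetical variables).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

Xa_tr, Xa_te, Xl_tr, Xl_te, y_tr, y_te = train_test_split(
    X_acoustic, X_linguistic, y, test_size=0.2, stratify=y, random_state=0
)

clf_audio = LogisticRegression(max_iter=1000).fit(Xa_tr, y_tr)
clf_text = LogisticRegression(max_iter=1000).fit(Xl_tr, y_tr)

# Decision-level fusion: average the two models' class probabilities.
proba = (clf_audio.predict_proba(Xa_te) + clf_text.predict_proba(Xl_te)) / 2
y_pred = clf_audio.classes_[proba.argmax(axis=1)]

# Unweighted average recall (UAR) is macro-averaged recall over the classes.
print("UAR:", recall_score(y_te, y_pred, average="macro"))
```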
Couples' relationships affect the physical health and emotional well-being of partners. Automatically recognizing each partner's emotions could give a better understanding of their individual emotional well-being, enable interventions, and provide clinical benefits. In this paper, we summarize and synthesize works that have focused on developing and evaluating systems to automatically recognize the emotions of each partner based on couples' interaction or conversation contexts. We identified 28 articles from IEEE, ACM, Web of Science, and Google Scholar that were published between 2010 and 2021. We detail the datasets, features, algorithms, evaluation, and results of each work, and present the main themes. We also discuss current challenges and research gaps, and propose future research directions. In summary, most works have used audio data collected in the lab, with annotations done by external experts, and have applied supervised machine learning approaches for binary classification of positive and negative affect. Reported performance leaves room for improvement, and significant research gaps remain, such as the absence of recognition using data from daily life. This survey will enable new researchers to get an overview of this field and eventually enable the development of emotion recognition systems to inform interventions to improve the emotional well-being of couples. CCS Concepts: • General and reference → Surveys and overviews; • Human-centered computing → Human computer interaction (HCI); • Applied computing → Psychology.