2020
DOI: 10.1080/10503307.2020.1839141

Multimodal affect analysis of psychodynamic play therapy

Abstract: Objective: We explore state-of-the-art machine-learning-based tools for automatic facial and linguistic affect analysis to allow easier, faster, and more precise quantification and annotation of children's verbal and non-verbal affective expressions in psychodynamic child psychotherapy. Method: The sample included 53 Turkish children: 41 with internalizing, externalizing, and comorbid problems; 12 in the non-clinical range. We collected audio and video recordings of 148 sessions, which were manually transcribed…
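The abstract describes combining facial and linguistic channels for affect quantification. A minimal, hypothetical late-fusion sketch is shown below: per-modality probability distributions over affect categories are averaged with an adjustable weight. The category names follow the four affects a citing study attributes to this work (anger, anxiety, pleasure, sadness); the weighting and fusion rule are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical late-fusion sketch: combine facial and linguistic affect
# probabilities over a shared set of affect categories. The categories,
# example scores, and fusion weight are illustrative, not the paper's
# actual model.

AFFECTS = ["anger", "anxiety", "pleasure", "sadness"]

def fuse(facial, linguistic, w_face=0.5):
    """Weighted average of two per-category probability dicts, renormalized."""
    fused = {a: w_face * facial[a] + (1 - w_face) * linguistic[a]
             for a in AFFECTS}
    total = sum(fused.values())
    return {a: p / total for a, p in fused.items()}

# Illustrative per-modality outputs for one time window of a session.
facial = {"anger": 0.6, "anxiety": 0.2, "pleasure": 0.1, "sadness": 0.1}
linguistic = {"anger": 0.3, "anxiety": 0.4, "pleasure": 0.1, "sadness": 0.2}

scores = fuse(facial, linguistic)
print(max(scores, key=scores.get))  # → anger
```

Equal weighting is only one choice; a real system would typically learn the fusion weights or train a classifier on the concatenated modality features.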

Cited by 13 publications (9 citation statements)
References 79 publications (88 reference statements)
“…Halfon, Cavdar, et al (2020), Halfon, Ozsoy, et al (2020) and Halfon and Besiroglu (2020b) address the Turkish adaptations of the Health of the Nation Outcome Scales for Children and Adolescents, the Therapy Process Observational Coding System—Alliance Scale, and the Reflective Function Coding on the Parent Development Interview, respectively, using baseline scales of some of the same children in this study. Halfon, Doyran et al (2020) employ machine learning procedures to predict affect scores from 53 children and 148 sessions. The submitted manuscript differs from the other manuscripts in that it assesses psychodynamic technique and therapeutic alliance in the same model using longitudinal measures of problem behaviors, which has not been investigated previously.…”
mentioning
confidence: 99%
“…Some studies suggest that agents responding to users' feelings reduce user frustration [13], [14]. Hoque et al [15] propose a model to distinguish frustration from delight, Ishimaru et al [16] propose a learning assistant that gives feedback based on self-confidence detection, and Halfon et al [17] present a tool to analyze anger, anxiety, pleasure, and sadness in psychotherapy. There have also been studies on understanding learners' states, including confusion as a cognitive-affective state [18], and on integration with an affect-sensitive tutor [19].…”
Section: Related Work
mentioning
confidence: 99%
“…Gesture recognition and speech recognition are common research areas and have become important components of perceptual user interfaces for intelligent human-computer interaction, which makes gestures and speech essential interaction modalities in the field of human-computer interaction (Pandeya and Lee, 2021). Gesture recognition is affected not only by differing contexts, multiple interpretations, and spatial and temporal variation, but also by the complex non-rigid nature of the human hand, which leaves problems unsolved to this day; speech recognition, in turn, is highly susceptible to environmental and human factors and remains a major challenge (Halfon et al, 2021). However, gesture recognition, speech recognition, and sensor sensing can complement each other, and multi-modal interaction can reduce the user's operational burden and improve the efficiency of interaction.…”
Section: Current Status Of Research
mentioning
confidence: 99%
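The statement above argues that modalities can compensate for each other's weaknesses. One simple, hypothetical way to realize this is confidence-gated fallback: prefer one channel, but defer to another when its confidence drops (e.g., speech in a noisy room). The function name, threshold, and labels below are illustrative assumptions, not from the cited works.

```python
# Hypothetical sketch of confidence-gated modality fallback: prefer the
# speech channel, but fall back to gesture when speech confidence is low.
# Threshold, labels, and confidences are illustrative assumptions.

def pick_modality(speech_pred, speech_conf, gesture_pred, gesture_conf,
                  threshold=0.6):
    """Return (prediction, source_modality) from the more trustworthy channel."""
    if speech_conf >= threshold:
        return speech_pred, "speech"
    if gesture_conf >= threshold:
        return gesture_pred, "gesture"
    # Neither modality is confident enough: report the higher of the two.
    if speech_conf >= gesture_conf:
        return speech_pred, "speech"
    return gesture_pred, "gesture"

# Noisy audio (confidence 0.4) defers to a confident gesture reading.
print(pick_modality("select", 0.4, "select", 0.9))  # → ('select', 'gesture')
```

A fuller system would fuse the channels continuously rather than switch between them, but the gating rule illustrates why adding modalities can reduce the user's burden when any single channel degrades.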