Previous research has shown that simulated child sexual abuse (CSA) interview training using avatars, paired with feedback and modeling, improves interview quality. However, to make this approach scalable, the classification of interviewer questions needs to be automated. We tested an automated question classification system for these avatar interviews while also providing automated interventions (feedback and modeling) to improve interview quality. Forty-two professionals conducted two simulated CSA interviews online and were randomly assigned to receive no intervention, feedback, or modeling after the first interview. Feedback consisted of the outcome of the alleged case and comments on the quality of the interviewer's questions. Modeling consisted of learning points and videos illustrating good and bad questioning methods. Agreement in question coding between human operators and the automated classification was 72% for the main categories (recommended vs. not recommended) and 52% when 11 subcategories were considered. The intervention groups improved from the first to the second interview, whereas the no-intervention group did not (intervention × time: p = .007, ηp² = .28). The automated system classified the interviewers' questions well, allowing interventions to improve interview quality.