Background
To support a victim of violence and establish the correct penalty for the perpetrator, it is crucial to correctly evaluate and communicate the severity of the violence. Recent data have shown these communications to be biased. However, computational language models provide opportunities for automated evaluation of the severity to mitigate these biases.
Objective
We investigated whether these biases can be removed with computational algorithms trained to measure the severity of the violence described.
Methods
In phase 1 (P1), participants (N=71) were instructed to write a text and type 5 keywords describing an event in which they experienced physical violence, and 1 keyword describing an event in which they experienced psychological violence, in an intimate partner relationship. They were also asked to rate the severity. In phase 2 (P2), another set of participants (N=40) read the texts and rated their severity of violence on the same scale as in P1. We also quantified the text data as word embeddings. Machine learning was used to train a model to predict the severity ratings.
Results
For physical violence, the accuracy bias was greater for humans (r2=0.22) than for the computational model (r2=0.31; t38=–2.37, P=.023). For psychological violence, the accuracy bias was likewise greater for humans (r2=0.058) than for the computational model (r2=0.35; t38=–14.58, P<.001). Participants in P1 experienced psychological violence as more severe (mean 6.46, SD 1.69) than participants rating the same events in P2 (mean 5.84, SD 2.80; t86=–2.22, P=.029), whereas no calibration bias was found for the computational model (t134=1.30, P=.195). No calibration bias was found for physical violence, either for humans between P1 (mean 6.59, SD 1.81) and P2 (mean 7.54, SD 2.62; t86=1.32, P=.19) or for the computational model (t134=0.62, P=.534). There was no difference in the severity ratings between psychological and physical violence in P1. However, the bias (ie, the ratings in P2 minus the ratings in P1) was strongly negatively correlated with the severity ratings in both P1 (r2=0.29) and P2 (r2=0.37), whereas the ratings in P1 and P2 were somewhat less correlated with each other (r2=0.11) when the psychological and physical data were combined.
Conclusions
The results show that the computational model mitigates the accuracy bias and removes the calibration biases. These findings suggest that computational models can be used to debias severity evaluations of violence, with possible applications in legal contexts, in prioritizing resources in society, and in how violent events are presented in the media.
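The two bias measures above can be sketched in plain Python: accuracy as the squared Pearson correlation (r2) between writer (P1) and reader (P2) ratings, and calibration bias as the per-event difference of P2 minus P1 ratings. The ratings below are invented toy values for illustration, not the study's data.

```python
from math import sqrt

def pearson_r2(x, y):
    """Squared Pearson correlation, the accuracy measure (r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return (cov / sqrt(var_x * var_y)) ** 2

def calibration_bias(p1, p2):
    """Per-event bias: reader rating (P2) minus writer rating (P1)."""
    return [b - a for a, b in zip(p1, p2)]

# Invented toy ratings for 4 events on the study's severity scale.
p1 = [6, 7, 8, 5]   # writers' own ratings (phase 1)
p2 = [5, 7, 9, 4]   # readers' ratings of the same events (phase 2)
accuracy = pearson_r2(p1, p2)
bias = calibration_bias(p1, p2)
```

In the study's terms, a mean of `bias` significantly different from zero indicates a calibration bias, and a lower `accuracy` indicates a larger accuracy bias.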
BACKGROUND
Accurately communicating the severity of self-experienced violence to a person not present at the event is crucial for proper treatment of the victim and a reasonable penalty for the perpetrator. Recent data show that humans are biased in this communication: compared with the writers who experienced the interpersonal violence, readers of the texts underestimate the severity of psychological violence and overestimate that of physical violence. Furthermore, the severity of psychological violence is communicated less accurately than that of physical violence. Recent advances in computational language models provide opportunities for automated evaluation of the severity of narrations of violence.
OBJECTIVE
We investigated whether these biases can be removed with computational algorithms trained to measure the severity of violence.
METHODS
The data analyzed in this study are taken from Sikström et al (2021) and were collected in two phases using the Prolific Academic website for online recruiting. The aim of this study was to investigate whether a computational language model could remove the biases in the communication of severity of violence found in Sikström et al (2021). This was accomplished by first quantifying the text data as word embeddings, that is, vectors describing the meaning of a text, and then using machine learning to map the embeddings to a scale of severity of violence.
RESULTS
The results show that the computational model mitigates the accuracy bias and removes the calibration biases.
CONCLUSIONS
Our results suggest that computational models can be used for debiasing severity evaluations of violence. These findings may have applications in legal contexts, in prioritizing resources in society, and in how violent events are presented in the media.
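The METHODS pipeline (text → vector → severity score) can be illustrated with a minimal sketch. Here a toy bag-of-words featurizer stands in for the study's word embeddings, and a small ridge regression fit by gradient descent plays the role of the machine learning model; all texts, ratings, and hyperparameters below are invented for illustration and are not the study's method or data.

```python
def featurize(texts):
    # Toy stand-in for word embeddings: bag-of-words counts
    # over the corpus vocabulary.
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for t in texts:
        v = [0.0] * len(vocab)
        for w in t.lower().split():
            v[index[w]] += 1.0
        vectors.append(v)
    return vectors

def fit_ridge(X, y, lam=0.1, lr=0.05, epochs=1000):
    # Ridge regression by batch gradient descent, minimizing
    # (1/2n) * sum((w.x + b - y)^2) + (lam/2) * ||w||^2.
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        grad_w = [lam * wj for wj in w]   # regularization term
        grad_b = 0.0
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j, xj in enumerate(xi):
                if xj:
                    grad_w[j] += err * xj / n
            grad_b += err / n
        w = [wj - lr * gj for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b
    return w, b

def predict(w, b, x):
    return sum(wj * xj for wj, xj in zip(w, x)) + b

# Invented example texts and severity ratings (scale 1-10).
texts = [
    "he hit me with a closed fist",
    "he slapped me during the argument",
    "he called me worthless every day",
    "he ignored me for days",
]
ratings = [9.0, 7.0, 4.0, 2.0]
X = featurize(texts)
w, b = fit_ridge(X, ratings)
```

Once trained, `predict` assigns a severity score to any featurized text, so all texts are scored by the same model, which is why such a model cannot exhibit a writer-versus-reader calibration bias by construction.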
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.