Effective human–robot teaming (HRT) increasingly requires humans to work with intelligent, autonomous machines. However, novel features of intelligent autonomous systems such as social agency and incomprehensibility may influence the human’s trust in the machine. The human operator’s mental model of machine functioning is critical for trust. People may consider an intelligent machine partner either as an advanced tool or as a human-like teammate. This article reports a study that explored the role of individual differences in the mental model in a simulated environment. Multiple dispositional factors that may influence the dominant mental model were assessed. These included the Robot Threat Assessment (RoTA), which measures the person’s propensity to apply tool and teammate models in security contexts. Participants (N = 118) were paired with an intelligent robot tasked with making threat assessments in an urban setting. A transparency manipulation was used to influence the dominant mental model. For half of the participants, threat assessment was described as physics-based (e.g., weapons detected by sensors); the remainder received transparency information that described psychological cues (e.g., facial expression). We expected that the physics-based transparency messages would guide the participant toward treating the robot as an advanced machine (advanced tool mental model activation), whereas psychological messaging would encourage perceptions of the robot as acting like a human partner (teammate mental model). We also manipulated situational danger cues present in the simulated environment. Participants rated their trust in the robot’s decision, as well as threat and anxiety, for each of 24 urban scenes. They also completed the RoTA and additional individual-difference measures.
Findings showed that trust assessments reflected the degree of congruence between the robot’s decision and situational danger cues, consistent with participants acting as Bayesian decision makers. Several scales, including the RoTA, were more predictive of trust when the robot was making psychology-based decisions, implying that trust reflected individual differences in the mental model of the robot as a teammate. These findings suggest scope for designing training that uncovers and mitigates the individual’s biases toward intelligent machines.
Emotional processing interventions for trauma and psychological conflicts are underutilized. Lack of adequate training in emotional processing techniques and therapists’ lack of confidence in utilizing such interventions are barriers to implementation. We developed and tested an experiential training to improve trainees’ performance in a set of transtheoretical emotional processing skills: eliciting patient disclosure of difficult experiences, responding to defenses against disclosure, and eliciting adaptive emotions. Mental health trainees (N = 102) were randomized to experiential or standard training, both of which presented a 1-hr individual session administered remotely. Before and after training and at 5-week follow-up, trainees were video-recorded as they responded to videos of challenging therapy situations, and responses were coded for demonstrated skill. Trainees also completed measures of therapeutic self-efficacy, anxiety, and depression at baseline and follow-up. Repeated-measures analysis of variance indicated that all three skills increased from pre- to posttraining for both conditions, with gains maintained at follow-up. Importantly, experiential training led to greater improvements than standard training in the skills of eliciting disclosure (η2 = .05, p = .03), responding to defenses (η2 = .04, p = .05), and encouraging adaptive emotions (η2 = .23, p < .001) at posttraining, and the training benefits for eliciting disclosure were maintained at follow-up. Both conditions led to improved self-efficacy. Trainees’ anxiety decreased in the standard training condition but not in the experiential condition. One session of experiential training improved trainees’ emotional processing therapy skills more than didactic training, although more training and practice are likely needed to yield longer-lasting skills.