Due to serious challenges in the healthcare sector, high expectations are placed on the use of assistive robotics. However, only a few systems are currently commercially available. Key challenges in the automation of care activities concern the identification and robust mediation of medical and nursing standards, as well as the distribution of agency between caregivers, robots, and patients. To support the successful mediation of this relational framework, this research aims to identify (1) prerequisites for the implementation and use of robots, (2) potential areas of application and associated ethical considerations, and (3) requirements for the design of human–robot interaction (HRI) in inpatient elderly care settings. Using a qualitative research approach with semi-structured interviews, a total of 19 health professionals were interviewed across two consecutive studies. The results illustrate that robotic assistance is expected to provide relief in various application areas. At the same time, professionals expressed a strong need for measures that support them in their responsibility for the care process and that respect the professional values of care in the interpersonal relationship. To ensure high acceptance and use of robotics in care, robots' capabilities, role models, and agency must be more closely aligned with professional standards and values.
Robots are increasingly used in healthcare to support caregivers in their daily work routines. To ensure effortless interaction between caregivers and robots, robots are expected to communicate via natural language. However, robotic speech carries a large potential for technical failures, including both processing and communication failures. It is therefore necessary to investigate how caregivers perceive and respond to robots with erroneous communication. We recruited thirty caregivers, who interacted with a robot in a virtual reality setting. We investigated whether different kinds of failures are more likely to be forgiven when accompanied by technical or human-like justifications. Furthermore, we determined how tolerant caregivers are of a robot that repeatedly returns a process failure, and whether this depends on the robot's response pattern (constant vs. variable). Participants showed the same forgiveness towards both justifications. However, female participants liked the human-like justification more, while male participants preferred the technical one. Providing a justification with any reasonable content thus seems sufficient to achieve positive effects. Robots with a constant response pattern were liked more, although both patterns yielded the same tolerance threshold, at around seven failed requests. Due to the experimental setup, the tolerance for communication failures was probably inflated and should be re-evaluated in real-life situations.