Recent studies have shown that cognitive and social interventions are crucial to the overall health of older adults, including their psychological, cognitive, and physical well-being. However, due to the world's rapidly growing elderly population, the resources and personnel available to provide these interventions are lacking. Our work focuses on the use of social robotic technologies to provide person-centered cognitive interventions. In this article, we investigate the acceptance and attitudes of older adults toward the human-like expressive socially assistive robot Brian 2.1 in order to determine whether the robot's human-like assistive and social characteristics would promote its use as a cognitive and social interaction tool to aid with activities of daily living. The results of a robot acceptance questionnaire administered during a robot demonstration session with a group of 46 older adults showed that the majority of individuals had positive attitudes toward the socially assistive robot and its intended applications.
Background
Wearable powered exoskeletons are a new and emerging technology developed to provide sensory-guided, motorized lower-limb assistance, enabling intensive, task-specific locomotor training using typical lower-limb movement patterns for persons with gait impairments. To ensure that devices meet end-user needs, it is important to understand and incorporate end-users' perspectives; however, research in this area is extremely limited in the post-stroke population. The purpose of this study was to explore in depth the perspectives of end users, persons with stroke and physiotherapists, following a single-use session with an H2 exoskeleton.
Methods
We used a qualitative interpretive description approach utilizing semi-structured, face-to-face interviews with persons post-stroke and physiotherapists following a 1.5-h session with an H2 exoskeleton.
Results
Five persons post-stroke and six physiotherapists volunteered to participate in the study. Both participant groups provided insightful comments on their experience with the exoskeleton. Four themes were developed from the data from persons with stroke: (1) Adopting technology; (2) Device concerns; (3) Developing walking ability; and (4) Integrating exoskeleton use. Five themes were developed from the physiotherapist data: (1) Developer-user collaboration; (2) Device-specific concerns; (3) Device programming; (4) Patient characteristics requiring consideration; and (5) Indications for use.
Conclusions
This study provides an interpretive understanding of the perspectives of end users, persons with stroke and neurological physiotherapists, following a single-use experience with an H2 exoskeleton. The findings from the two stakeholder groups overlap, such that four over-arching concepts were identified: (i) Stakeholder participation; (ii) Augmentation vs. autonomous robot; (iii) Exoskeleton usability; and (iv) Device-specific concerns. The end users provided valuable perspectives on the use and design of the H2 exoskeleton, identifying needs specific to post-stroke gait rehabilitation and the need for a robust evidence base, whilst also highlighting that there is significant interest in this technology throughout the continuum of stroke rehabilitation.
In Human-Robot Interaction (HRI), robots should be socially intelligent: able to respond appropriately to human affective and social cues in order to engage effectively in bi-directional communication. Social intelligence would allow a robot to relate to, understand, and interact and share information with people in real-world human-centered environments. This survey paper presents an encompassing review of existing automated affect recognition and classification systems for social robots engaged in various HRI settings. Human-affect detection from facial expressions, body language, voice, and physiological signals is investigated, as well as detection from a combination of the aforementioned modes. The automated systems are described by their corresponding robotic and HRI applications, the sensors they employ, and the feature detection techniques and affect classification strategies utilized. This paper also discusses pertinent future research directions for promoting the development of socially intelligent robots capable of recognizing, classifying, and responding to human affective states during real-time HRI.
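Combining the modes mentioned above is often done with decision-level (late) fusion, in which each modality's classifier outputs class probabilities that are then averaged. The following is a minimal sketch of that idea; the modality names, affect classes, and weights are illustrative assumptions, not details of any particular system covered by the survey.

```python
# Minimal decision-level (late) fusion sketch for multimodal affect
# classification. All names and numbers below are illustrative.

AFFECT_CLASSES = ["happy", "neutral", "stressed"]

def fuse_decisions(modality_probs, weights):
    """Combine per-modality class probabilities by weighted averaging,
    then pick the most likely affect class."""
    fused = {c: 0.0 for c in AFFECT_CLASSES}
    for modality, probs in modality_probs.items():
        w = weights[modality]
        for c in AFFECT_CLASSES:
            fused[c] += w * probs[c]
    total = sum(weights.values())
    fused = {c: p / total for c, p in fused.items()}
    return max(fused, key=fused.get), fused

# Example: the facial-expression classifier leans "happy", but voice and
# physiological signals tip the fused decision toward "stressed".
probs = {
    "face":   {"happy": 0.5, "neutral": 0.3, "stressed": 0.2},
    "voice":  {"happy": 0.2, "neutral": 0.3, "stressed": 0.5},
    "physio": {"happy": 0.1, "neutral": 0.2, "stressed": 0.7},
}
weights = {"face": 1.0, "voice": 1.0, "physio": 1.0}
label, fused = fuse_decisions(probs, weights)  # label is "stressed"
```

Feature-level (early) fusion, by contrast, would concatenate the raw per-modality features before a single classifier; the late-fusion form shown here degrades more gracefully when one sensing modality drops out.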
Socially assistive robots can autonomously provide activity assistance to vulnerable populations, including those living with cognitive impairments. To provide effective assistance, these robots should be capable of displaying appropriate behaviors and personalizing them to a user's cognitive abilities. Our research focuses on the development of a novel robot learning architecture that uniquely combines learning from demonstration (LfD) and reinforcement learning (RL) algorithms to effectively teach socially assistive robots personalized behaviors. Caregivers can demonstrate a series of assistive behaviors for an activity to the robot, which it uses to learn general behaviors via LfD. This information is used to obtain initial assistive state-behavior pairings using a decision tree. Then, the robot uses an RL algorithm to obtain a policy for selecting the appropriate behavior personalized to the user's cognition level. Experiments were conducted with the socially assistive robot Casper to investigate the effectiveness of our proposed learning architecture. Results showed that Casper was able to learn personalized behaviors for the new assistive activity of tea-making, and that combining LfD and RL algorithms significantly reduces the time required for a robot to learn a new activity.

15:2 C. Moro et al.

…programs [5], and providing social therapy to autistic children [6]. The behaviors of socially assistive robots have traditionally been designed using one of three methods: (1) manually hand-crafting combinations of speech, gestures, and other communication modes necessary to display a behavior [7-10]; (2) teaching a robot multimodal behaviors through learning from demonstration (LfD) [11, 12]; or (3) autonomously learning multimodal behaviors via reinforcement learning (RL) algorithms [13, 14]. Manually preprogramming robot behaviors involves tedious annotation, without the potential for expanding the robot's skillset once the robot is deployed in an environment.
LfD and RL algorithms allow robots to learn behaviors without having to preprogram them. However, they may require large numbers of interactions with demonstrators (e.g., LfD) or intended users (e.g., RL) for training purposes, which may not always be available or feasible. With respect to the latter, it is not always safe for vulnerable users to engage with a robot that has not been fully trained. In addition to learning general assistive behaviors, socially assistive robots may also have to adapt their behaviors to their specific users, as behavior personalization can positively affect robot acceptance [5, 7] and increase its use over time [7]. Only a handful of works have focused on personalizing assistive robot behaviors to user profiles [5, 15-17]. Behaviors have been personalized to either a general user group, for example, extroverted versus introverted users [5], or to a user state during an activity, such as stress level during a memory game [15]. Personalization of assistive robot behaviors to a single user's cognitive model has yet to be investigated.
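The LfD-then-RL pipeline described above can be illustrated with a toy sketch: demonstrated state-behavior pairings (standing in for the decision-tree stage) seed the value table, and a simple single-step RL update then refines behavior selection toward a simulated user's cognition level. The states, behaviors, reward model, and cognition levels below are all hypothetical assumptions for illustration, not the architecture used with Casper.

```python
import random

STATES = ["prompt_step", "user_confused", "step_done"]
BEHAVIORS = ["verbal_prompt", "gesture_demo", "encourage"]

# Initial state-behavior pairings, as if extracted from caregiver
# demonstrations (the LfD stage). Entirely illustrative.
demonstrated = {"prompt_step": "verbal_prompt",
                "user_confused": "gesture_demo",
                "step_done": "encourage"}

def simulated_reward(state, behavior, cognition_level):
    """Toy user model: a low-cognition user responds better to gesture
    demonstrations, even where a verbal prompt was demonstrated."""
    if state == "user_confused":
        return 1.0 if behavior == "gesture_demo" else -0.5
    if state == "prompt_step":
        preferred = "gesture_demo" if cognition_level == "low" else "verbal_prompt"
        return 1.0 if behavior == preferred else -0.5
    return 1.0 if behavior == "encourage" else -0.5

def learn_policy(cognition_level, episodes=500, alpha=0.2, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    # Seed values with the demonstrated behaviors instead of starting cold.
    q = {(s, b): (0.5 if demonstrated[s] == b else 0.0)
         for s in STATES for b in BEHAVIORS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < epsilon:                      # explore
            b = rng.choice(BEHAVIORS)
        else:                                           # exploit
            b = max(BEHAVIORS, key=lambda x: q[(s, x)])
        r = simulated_reward(s, b, cognition_level)
        q[(s, b)] += alpha * (r - q[(s, b)])            # single-step update
    return {s: max(BEHAVIORS, key=lambda b: q[(s, b)]) for s in STATES}

policy_low = learn_policy("low")
policy_high = learn_policy("high")
```

Seeding the value table from demonstrations is what lets the RL stage diverge from the demonstrated behavior only where the simulated user's responses warrant it, which mirrors the motivation for combining the two learning methods: fewer live interactions are needed than with RL from scratch.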