Summary: Changes in worldwide population trends have created new demands for technologies in areas such as care and rehabilitation. Recent developments in the field of robotics for neurorehabilitation have produced a range of evidence on the usefulness of these technologies as tools to augment traditional physiotherapy. Part of their appeal is the possibility of placing a rehabilitative tool in a person's home, offering more frequent and accessible therapy and empowering individuals to take charge of their own treatment.

Objective: This manuscript introduces the Supervised Care and Rehabilitation Involving Personal Tele-robotics (SCRIPT) project. The main goal is to demonstrate the design and development steps involved in a complex intervention, while examining the feasibility of using an instrumented orthotic device for home-based rehabilitation after stroke.

Methods: The project uses a user-centred design methodology to develop a hand/wrist rehabilitation device for home-based therapy after stroke. Patients benefit from a dedicated user interface that allows them to receive feedback on their exercises and to communicate with their health-care professional. The health-care professional uses a dedicated interface to send and receive messages and to remotely manage the patient's exercise routine using the performance benchmarks provided. Patients were enrolled in a feasibility study (n = 23) and instructed to use the device and its interactive games for 180 min per week, around 30 min per day, for a period of 6 weeks, with a 2-month follow-up. At the time of this study, only 12 of these patients had finished the 6-week trial plus the 2-month follow-up evaluation.

Results: With use feasibility as the objective, our results indicate that 2 patients dropped out, owing to technical difficulty or a lack of personal interest in continuing.
Our frequency-of-use results indicate that, on average, patients used the SCRIPT device for around 14 min of self-administered therapy a day. The group average on the System Usability Scale was around 69%, supporting the system's usability.

Conclusions: Based on these preliminary results, it is evident that stroke patients were able to use the system in their homes. An average engagement of 14 min a day, mediated via three interactive games, is promising given the chronic stage of stroke. During the second year of the project, 6 additional games with more functionally relevant interaction were designed to provide a more varied context for interacting with the system, in the hope of positively influencing exercise duration. The system's usability was tested and the evidence supported this parameter. Additional improvements to the system are planned based on formative feedback gathered throughout the project and during the evaluations. These include a new orthosis that allows more active control of the amount of assistance and resistance provided, aiming to offer a more challenging ...
Currently, the changes in functional capacity and performance of stroke patients after they return home from a rehabilitation hospital are unknown to the physician, who has no objective information about the intensity and quality of the patient's daily-life activities. There is therefore a need to develop and validate an unobtrusive, modular system for objectively monitoring a stroke patient's upper- and lower-extremity motor function in daily-life activities and in home training. This is the main goal of the European FP7 project named "INTERACTION". A complete full-body sensing system has been developed which integrates Inertial Measurement Units (IMUs), Knitted Piezoresistive Fabric (KPF) strain sensors, KPF goniometers, EMG electrodes and force sensors into a modular sensor suit designed for stroke patients. In this paper, we describe the complete INTERACTION sensor system. Data from the sensors are captured wirelessly by a software application and stored in a remote secure database for later access and processing via portal technology. Data processing includes a 3D full-body reconstruction by means of the Xsens MoCap Engine, providing the position and orientation of each body segment (poses). In collaboration with clinicians and engineers, clinical assessment measures were defined and the question of how to present the data on the web portal was addressed. The complete sensing system is fully implemented and is currently being validated. Patient measurements start in June 2014.
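The capture-and-store step described above (wireless sensor samples buffered by a software application, then persisted to a remote database) can be sketched as follows. This is a minimal illustration only: the class names, sensor identifiers, and JSON payload format are invented for this example and are not part of the actual INTERACTION software.

```python
# Toy sketch of a capture-and-store pipeline: timestamped sensor readings are
# pushed into a buffer, then serialised for upload to a remote database.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SensorSample:
    sensor_id: str     # e.g. "imu_forearm_r", "kpf_elbow_l", "emg_biceps_r"
    timestamp: float   # seconds since epoch
    values: list       # raw channel values (accel/gyro, strain, EMG, force)

class CaptureBuffer:
    def __init__(self):
        self._samples = []

    def push(self, sample: SensorSample) -> None:
        self._samples.append(sample)

    def flush_json(self) -> str:
        # Serialise the buffered samples and clear the buffer; a backend
        # service would POST this payload to the secure database.
        payload = json.dumps([asdict(s) for s in self._samples])
        self._samples.clear()
        return payload

buf = CaptureBuffer()
buf.push(SensorSample("imu_forearm_r", time.time(), [0.01, 9.78, 0.12]))
payload = buf.flush_json()
print(len(json.loads(payload)))  # 1
```

In practice the buffer would flush on a timer or size threshold, so that brief connectivity drops do not lose samples.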
Abstract. Remote participants in hybrid meetings often have difficulty following what is going on in the (physical) meeting room they are connected with. This paper describes a videoconferencing system for participation in hybrid meetings. The system has been developed as a research vehicle to explore how technology based on automatic real-time recognition of conversational behavior in meetings can be used to improve engagement and floor control by remote participants. The system uses modules for online speech recognition and real-time visual focus of attention, as well as a module that signals who is being addressed by the speaker. A built-in keyword spotter allows an automatic meeting assistant to call the remote participant's attention when a topic of interest is raised, pointing at the transcription of the fragment to help them catch up.
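The keyword-spotting behavior described above, matching topics of interest against recognised speech fragments and attaching the transcription to the alert, can be sketched as below. The class names and the token-matching approach are illustrative assumptions, not the actual system's implementation, which works on real-time speech recognition output.

```python
# Minimal sketch of a transcript-based keyword spotter: each recognised
# fragment is scanned for the remote participant's topics of interest, and
# hits are stored together with the fragment so the assistant can point at it.
from dataclasses import dataclass, field

@dataclass
class TranscriptFragment:
    start: float          # seconds from meeting start
    text: str             # recognised speech for this fragment

@dataclass
class KeywordSpotter:
    keywords: set         # topics the remote participant cares about
    alerts: list = field(default_factory=list)

    def feed(self, fragment: TranscriptFragment) -> None:
        # Simple lowercase token match; a real spotter would work on ASR
        # hypotheses and handle inflections, not just exact tokens.
        tokens = {t.strip(".,!?").lower() for t in fragment.text.split()}
        hits = self.keywords & tokens
        if hits:
            self.alerts.append((sorted(hits), fragment))

spotter = KeywordSpotter(keywords={"budget", "deadline"})
spotter.feed(TranscriptFragment(12.4, "We should revisit the budget next week."))
spotter.feed(TranscriptFragment(30.1, "Lunch is at noon."))
print(len(spotter.alerts))  # 1
```

Keeping the matched fragment alongside the alert is what lets the assistant point the remote participant at the exact transcription to catch up on.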
Abstract--In this paper we present the results of a pilot study investigating the effects of agents' gender-ambiguous vs. gender-marked look on the perceived interaction quality of a multimodal question answering system. Eight test subjects interacted with three system agents, each having a feminine, masculine or gender-ambiguous look. The subjects were told each agent represented a differently configured system; in fact, they were interacting with the same system. Afterwards, the subjects filled in an evaluation questionnaire and participated in an in-depth qualitative interview. The results showed that the user evaluation seemed to be influenced by the agent's gender look: the system represented by the feminine agent achieved the highest evaluation scores on average, while the system represented by the gender-ambiguous agent was systematically rated lower. This outcome might be relevant for choosing an appropriate agent look, especially since many designers tend to develop gender-ambiguous characters for interactive interfaces to match various users' preferences. However, additional empirical evidence is needed to confirm our findings.
This paper describes the Virtual Guide, a multimodal dialogue system represented by an embodied conversational agent that can help users find their way in a virtual environment, while adapting its affective linguistic style to that of the user. We discuss the modular architecture of the system, and describe the entire loop from multimodal input analysis to multimodal output generation. We also describe how the Virtual Guide detects the level of politeness of the user's utterances in real time during the dialogue and aligns its own language to that of the user, using different politeness strategies. Finally, we report on our first user tests and discuss some potential extensions to improve the system.
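The politeness-alignment loop described above, scoring the user's utterance and mirroring its level in the reply, can be illustrated with a small sketch. The marker lists, scoring rule, and reply templates here are invented for this example; the Virtual Guide's actual detection and politeness strategies are more sophisticated.

```python
# Illustrative sketch of politeness alignment: score the user's utterance on
# a single politeness scale, then pick a reply strategy at a matching level.
POLITE_MARKERS = {"please", "could", "would", "thanks", "thank"}
BLUNT_MARKERS = {"now", "immediately", "just"}

def politeness_score(utterance: str) -> int:
    # Count polite vs. blunt lexical markers; positive means polite.
    tokens = {t.strip(".,!?").lower() for t in utterance.split()}
    return len(tokens & POLITE_MARKERS) - len(tokens & BLUNT_MARKERS)

def aligned_reply(utterance: str, direction: str) -> str:
    # Mirror the detected politeness level in the agent's own phrasing.
    score = politeness_score(utterance)
    if score > 0:
        return f"Certainly! If you would, please head {direction}."
    if score < 0:
        return f"Go {direction}."
    return f"You can go {direction}."

print(aligned_reply("Could you please show me the exit?", "left"))
print(aligned_reply("Tell me the exit now.", "left"))
```

The design choice being illustrated is alignment rather than a fixed register: the same route instruction is realised with different politeness strategies depending on how the user asked.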