We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the interface, the user begins operating the robot with whichever manual control method is available to him or her. When the interface detects objects within a set range, it automatically stops the robot and presents possible manipulation options through audible text output, based on the characteristics of the detected objects. The system then waits for a voice command. Once the command is given, the interface drives the robot autonomously until the command is completed; this interaction flow is sketched below. Empirical evaluations with human subjects from two different groups showed that semi-autonomous control can serve as an alternative control method that enables individuals with impaired motor control to operate robot arms more efficiently by assisting with fine motion control. The advantage of semi-autonomous control was not evident for simple tasks, but for relatively complex real-life tasks the 3D vision-based semi-autonomous control was significantly faster. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface can improve clinical practice by providing an alternative control method that is less demanding both physically and cognitively. It provides the user with task-specific, intelligent semi-autonomous manipulation assistance. It gives the user the feeling of being in control at every moment. It is compatible with different types of new and existing manual control methods for ARMs.
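The interaction flow described above is essentially a small control loop: manual teleoperation, detection-triggered stop, a spoken option prompt, and autonomous execution. The following Python sketch illustrates that loop under stated assumptions; the class names, method signatures, and the 0.5 m detection range are hypothetical placeholders for illustration, not the authors' actual implementation.

from dataclasses import dataclass

# Illustrative sketch only: the camera, arm, and speech objects are
# hypothetical stand-ins for the hardware interfaces; the abstract does
# not publish this API.

DETECTION_RANGE_M = 0.5  # assumed trigger distance (the abstract's "set range")

@dataclass
class DetectedObject:
    label: str       # e.g. "cup"
    grasp_type: str  # manipulation option inferred from object characteristics

class SemiAutonomousInterface:
    def __init__(self, camera, arm, speech):
        self.camera = camera  # depth-sensing camera mounted on the robot base
        self.arm = arm        # the assistive robotic manipulator
        self.speech = speech  # audible text output plus voice-command input

    def run(self):
        while True:
            # Manual phase: forward the user's own control input to the arm.
            self.arm.apply_manual_command(self.arm.read_user_input())

            # Detection phase: look for objects within the set range.
            objects = self.camera.detect_objects(max_range=DETECTION_RANGE_M)
            if not objects:
                continue

            # Prompt phase: stop the robot and speak the manipulation options.
            self.arm.stop()
            options = {f"{o.grasp_type} the {o.label}": o for o in objects}
            self.speech.say("Options: " + "; ".join(options))

            # Autonomous phase: wait for a voice command, then execute it.
            command = self.speech.wait_for_voice_command(list(options))
            if command in options:
                self.arm.execute_autonomously(options[command])

Keeping the user in the loop at the prompt phase is what preserves the feeling of control noted in the implications: the robot never acts autonomously until an explicit voice command selects one of the announced options.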
Background: Assistive robotic manipulators (ARMs) have been developed to provide enhanced assistance and independence in the performance of daily activities among people with spinal cord injury when a caregiver is not on site. However, current commercial ARM user interfaces (UIs) can be difficult to learn and control. A touchscreen mobile UI was developed to overcome these challenges. Objective: The objective of this study was to compare the performance of 2 ARM UIs, a touchscreen and the original joystick, using an ARM evaluation tool (ARMET). Methods: This is a pilot study of people with upper extremity impairments (N = 8). Participants were trained on both UIs and then chose one to use while performing 3 tasks on the ARMET: flipping a toggle switch, pushing down a door handle, and turning a knob. Task completion time, mean velocity, and open interviews were the main outcome measures. Results: Among the 8 novice participants, 7 chose the touchscreen UI and 1 chose the joystick UI. All participants completed the ARMET tasks independently. Use of the touchscreen UI resulted in better ARMET performance (higher mean moving speed and faster task completion). Conclusions: The mobile ARM UI offered an easier learning experience, less physical effort, and better ARMET performance, suggesting that the touchscreen UI may be an efficient tool for ARM users.
We have developed an intelligent single-switch scanning interface and wheelchair navigation assistance system, called ISSWN, to improve driving safety, comfort, and efficiency for individuals who rely on single-switch scanning as a control method. ISSWN combines a standard powered wheelchair with a laser rangefinder, a single-switch scanning interface, and a computer. By interpreting sensor data together with user input, it presents the user with context-sensitive, task-specific scanning options that reduce driving effort, as sketched below. Trials with 9 able-bodied participants showed that the system significantly improved driving safety and efficiency in a navigation task.
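As a rough illustration of how such context-sensitive scanning might work, the Python below derives a short option list from rangefinder data and cycles through it until the single switch is pressed. The thresholds, dwell time, and function names are assumptions made for this sketch; the abstract does not specify ISSWN's internals.

import time

SCAN_DWELL_S = 1.5  # assumed time each option stays highlighted

def announce(option):
    """Highlight/speak the currently scanned option (stubbed as print)."""
    print(option)

def build_scan_options(range_scan):
    """Derive context-sensitive options from a laser scan given as
    (angle_rad, distance_m) pairs; all thresholds are illustrative."""
    options = ["stop"]
    if all(d > 1.0 for a, d in range_scan if abs(a) < 0.3):  # forward sector clear
        options.append("drive forward")
    if any(d > 2.0 for a, d in range_scan if a < -0.5):      # opening to the left
        options.append("turn left")
    if any(d > 2.0 for a, d in range_scan if a > 0.5):       # opening to the right
        options.append("turn right")
    return options

def scan_loop(get_range_scan, switch_pressed, execute):
    """Cycle through the options; one switch press selects the current one."""
    while True:
        selected = None
        for option in build_scan_options(get_range_scan()):
            announce(option)
            deadline = time.monotonic() + SCAN_DWELL_S
            while time.monotonic() < deadline:
                if switch_pressed():
                    selected = option
                    break
                time.sleep(0.01)
            if selected:
                break
        if selected:
            execute(selected)  # navigation assistance carries out the maneuver

Because the option list is rebuilt from fresh sensor data on every scan cycle, impossible maneuvers never appear, which is one plausible way a system like this could reduce the effort of single-switch driving.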
The research and development of assistive robotic manipulators (ARMs) aims to enhance the upper-extremity daily functioning of individuals with disabilities. Resources continue to be invested, yet the field still lacks a standard framework to serve as a tool for the functional assessment and performance evaluation of ARMs. A review of the literature offers several suggestions from research in occupational therapy, rehabilitation robotics, and human-robot interaction. Occupational therapists often use performance assessments during rehabilitation interventions to evaluate a client's functional performance. Similar assessments should be developed to predict how ARM performance in a clinical setting may generalize to task execution throughout daily living. However, ergonomics and environmental differences have largely been ignored in past research. Additional insights from the literature suggest a common set of coding definitions and a framework for organizing the ad hoc performance measures observed across ARM studies.