Navigating an unknown indoor environment in an electric wheelchair is a challenging task, especially for users with severe disabilities. To reduce fatigue and increase autonomy, control architectures must be designed to assist users in wheelchair navigation. We present a framework for vision-based autonomous indoor navigation with an electric wheelchair, capable of following corridors and passing through open doorways using a single doorpost. Visual features extracted from cameras on board the wheelchair serve as inputs to image-based controllers embedded in the wheelchair. It has to be noted that no a-priori information is used, except the assumptions that the wheelchair moves in a typical indoor environment and that the system is coarsely calibrated. The designed control schemes have been implemented on a robotized wheelchair, and experimental results demonstrate the robust behaviour of the system.
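A minimal sketch of what such an image-based corridor-following controller can look like, assuming (this is an illustration, not the authors' exact law) two visual features are extracted from the camera image: the horizontal offset of the corridor vanishing point and the angle of the corridor median line. The angular velocity regulates both features to zero while the forward speed stays constant; the gains `lam1` and `lam2` are hypothetical tuning parameters.

```python
def corridor_following_omega(x_f: float, theta_m: float,
                             lam1: float = 0.3, lam2: float = 0.5) -> float:
    """Angular velocity command (rad/s) driving both visual features to zero.

    x_f     : normalized horizontal offset of the vanishing point in the image
    theta_m : angle of the corridor median line in the image (radians)
    """
    # Proportional image-based law: steer so that the vanishing point
    # returns to the image center and the median line becomes vertical.
    return -(lam1 * x_f + lam2 * theta_m)
```

When the wheelchair is centered and aligned with the corridor, both features vanish and the command is zero; any lateral drift or misalignment produces a corrective rotation of the opposite sign.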
Motor or visual impairments may prevent a user from steering a wheelchair effectively in indoor environments. In such cases, joystick jerks arising from uncontrolled motions may lead to collisions with obstacles. We propose a perceptive shared control system that progressively corrects the trajectory as the user manually drives the wheelchair, by means of a sensor-based shared control law capable of smoothly avoiding obstacles. This control law is based on a low-complexity optimization framework validated through simulations and extensive clinical trials. Since the model relies on distance information, and to keep costs low, ultrasonic sensors are used to measure the distances around the wheelchair. The resulting solution provides an efficient assistive tool that does not alter the quality of experience perceived by the user, while ensuring their safety in hazardous situations.
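The progressive correction idea can be sketched as follows, assuming (as an illustration, not the paper's actual optimization-based law) ultrasonic range readings grouped into left and right banks, and a blending weight that rises from 0 to 1 as the nearest obstacle approaches; the thresholds `d_safe` and `d_stop` are hypothetical values.

```python
def assistance_weight(d_min: float, d_safe: float = 0.8,
                      d_stop: float = 0.3) -> float:
    """0 when clear of obstacles, 1 at the stop distance, linear in between."""
    if d_min >= d_safe:
        return 0.0
    if d_min <= d_stop:
        return 1.0
    return (d_safe - d_min) / (d_safe - d_stop)

def shared_control(v_user, w_user, left_dists, right_dists):
    """Blend the user's (v, w) command with a corrective rotation.

    left_dists / right_dists: ultrasonic ranges (m) on each side.
    Positive w is a counterclockwise (left) turn.
    """
    d_left, d_right = min(left_dists), min(right_dists)
    alpha = assistance_weight(min(d_left, d_right))
    # Corrective rotation steers away from the closer side.
    w_avoid = -0.5 if d_left < d_right else 0.5
    v = (1.0 - alpha) * v_user                      # slow down near obstacles
    w = (1.0 - alpha) * w_user + alpha * w_avoid    # progressive correction
    return v, w
```

Far from obstacles the user's command passes through untouched, so the assistance stays imperceptible until it is actually needed, which is the "quality of experience" property the abstract emphasizes.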
Autonomy and social inclusion can be everyday challenges for people with mobility impairments. These people can benefit from technical aids such as power wheelchairs to regain mobility and overcome social exclusion. However, driving a power wheelchair is a demanding task that requires good visual, cognitive and visuo-spatial abilities. Moreover, a power wheelchair can cause material damage, or injury to others or oneself, if not operated safely. Training and repeated practice are therefore mandatory to acquire safe driving skills and obtain a power wheelchair prescription from therapists. However, conventional training programs may prove insufficient for some people with severe impairments. In this context, Virtual Reality offers the opportunity to design innovative learning and training programs while providing a realistic wheelchair driving experience within a virtual environment. In line with this, we propose a user-centered design of a multisensory power wheelchair simulator. This simulator addresses classical drawbacks of virtual experiences, such as cybersickness and reduced sense of presence, by combining 3D visual rendering, haptic feedback and motion cues. It relies on a modular and versatile workflow enabling easy interfacing not only with any visual display, but also with any user interface such as wheelchair controllers or feedback devices. This paper presents the design of the first implementation as well as its initial commissioning through pretests. The first setup achieves consistent and realistic behavior.
In the case of motor impairments, steering a wheelchair can become a hazardous task. Joystick jerks induced by uncontrolled motions may lead to wall collisions as the user steers the wheelchair along a corridor. This work introduces a low-cost assistive and guidance system for indoor corridor navigation in a wheelchair, which relies purely on visual information and provides automatic trajectory correction and haptic guidance to avoid wall collisions. A visual servoing approach to autonomous corridor following serves as the backbone of the system. The algorithm employs natural image features that can be robustly extracted in real time. Its output is then fused with the manual joystick input from the user, so that progressive assistance and trajectory correction are activated as soon as the user is in danger of collision. Force feedback is provided on the joystick in conjunction with the assistance, in order to guide the user away from the dangerous trajectory. This ensures intuitive guidance and minimal interference from the trajectory correction system. In addition to being low-cost, the proposed solution requires no a-priori environment model. Experiments on a robotised wheelchair equipped with a monocular camera demonstrate the system's ability to adaptively guide and assist a user navigating a corridor.
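The fusion of the autonomous correction with the joystick, and the accompanying haptic cue, can be sketched as below, assuming (hypothetically, not the paper's exact formulation) a danger level in [0, 1] derived from the visual features, and a joystick torque proportional to the gap between what the user commands and what the blended controller actually executes.

```python
def blended_command(w_user: float, w_auto: float, danger: float) -> float:
    """Angular velocity actually sent to the wheels.

    danger = 0 -> pure manual driving; danger = 1 -> pure autonomous correction.
    """
    alpha = max(0.0, min(1.0, danger))          # clamp the blending weight
    return (1.0 - alpha) * w_user + alpha * w_auto

def haptic_force(w_user: float, w_executed: float, k: float = 2.0) -> float:
    """Joystick torque nudging the user's hand toward the executed command.

    Zero when the user already steers as the controller would, so the
    guidance interferes only when a correction is being applied.
    """
    return k * (w_executed - w_user)
```

Because the force vanishes whenever the user's input matches the corrected command, the haptic channel conveys the correction intuitively rather than fighting the user.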