This paper describes an eye-control method based on electrooculography (EOG) for an assisted-mobility system. One of its most important features is its modularity, which makes it adaptable to the particular needs of each user according to the type and degree of disability involved. An eye model based on the electrooculographic signal is proposed and its validity is studied. Several EOG-based human-machine interfaces (HMIs) are discussed, with the study focusing on guiding and controlling a wheelchair for disabled people, where control is exerted through eye movements within the orbit. Different guidance techniques and strategies are then presented, with comments on the advantages and disadvantages of each. The system consists of a standard electric wheelchair with an on-board computer, sensors, and a graphical user interface run by the computer. The same eye-control method can also be applied to handling graphical interfaces, where the eye acts as a computer mouse. The results obtained show that this control technique could be useful in multiple applications, such as mobility and communication aids for disabled persons.
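The command-generation idea described above can be sketched in a few lines: a saccade produces a step change in the EOG voltage, so thresholding the signal derivative yields discrete guidance commands. This is only an illustrative sketch with a simulated signal, not the paper's implementation; the sampling rate, units, and threshold are assumptions.

```python
# Illustrative sketch (not the paper's system): deriving discrete wheelchair
# commands from a simulated horizontal EOG channel by thresholding the signal
# derivative, a common way to detect saccadic steps. Units and threshold are assumed.
import numpy as np

fs = 100                                    # sampling rate in Hz (assumed)
t = np.arange(0, 3, 1 / fs)
eog = np.zeros_like(t)                      # horizontal EOG channel, in mV
eog[t >= 1.0] += 0.3                        # rightward saccade at t = 1 s
eog[t >= 2.0] -= 0.6                        # leftward saccade at t = 2 s

deriv = np.diff(eog) * fs                   # approximate dV/dt in mV/s
thresh = 10.0                               # assumed saccade-detection threshold
commands = ["RIGHT" if d > thresh else "LEFT"
            for d in deriv if abs(d) > thresh]
print(commands)                             # one command per detected saccade
```

A real system would also need blink rejection and drift compensation, which this sketch omits.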
This work describes a color vision-based system intended to perform stable autonomous driving on unmarked roads. Accordingly, this implies the development of an accurate road surface detection system that ensures vehicle stability. Although this topic has already been documented in the technical literature by different research groups, the vast majority of existing Intelligent Transportation Systems are devoted to assisted driving of vehicles on marked extra-urban roads and highways. The complete system was tested on the BABIECA prototype vehicle, which was autonomously driven for hundreds of kilometers, accomplishing different navigation missions on a private circuit that emulates an urban quarter. During the tests, the navigation system demonstrated its robustness with regard to shadows, road texture, and weather and changing illumination conditions.
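One common way to detect an unmarked road surface from color alone, in the spirit of the system above, is to model the road's color from a seed region in front of the vehicle and label pixels whose color lies close to that model. The following is a minimal sketch on a synthetic image; the seed location, color model, and threshold are all assumptions, not the authors' algorithm.

```python
# Hedged sketch of color-based road-surface detection on an unmarked road:
# build a color model from a seed patch assumed to be road, then classify
# pixels by normalized color distance to it. Image and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(2)
img = np.zeros((60, 80, 3))
img[:] = [0.2, 0.6, 0.2]                       # grass background (RGB)
img[30:, 20:60] = [0.5, 0.5, 0.5]              # gray road region
img += rng.normal(0, 0.02, img.shape)          # sensor noise

seed = img[55:60, 35:45].reshape(-1, 3)        # patch assumed to be road
mean, std = seed.mean(axis=0), seed.std(axis=0) + 1e-6

dist = np.abs((img - mean) / std).max(axis=2)  # per-pixel normalized color distance
road_mask = dist < 4.0                         # threshold in units of std
print(f"road pixels detected: {road_mask.sum()}")
```

Robustness to shadows, as reported for the real system, would require a more sophisticated color model (e.g. working in an illumination-invariant color space) than this fixed-threshold sketch.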
The purpose of this paper is to evaluate the feasibility of diagnosing multiple sclerosis (MS) using optical coherence tomography (OCT) data and a support vector machine (SVM) as an automatic classifier. Forty-eight MS patients without symptoms of optic neuritis and forty-eight healthy control subjects were selected. Swept-source optical coherence tomography (SS-OCT) was performed using a DRI (deep-range imaging) Triton OCT device (Topcon Corp., Tokyo, Japan). Mean values (right and left eye) for macular thickness (retinal and choroidal layers) and the peripapillary area (retinal nerve fibre layer, retinal, ganglion cell layer—GCL, and choroidal layers) were compared between both groups. Based on the analysis of the area under the receiver operating characteristic curve (AUC), the 3 variables with the greatest discriminant capacity were selected to form the feature vector. An SVM was used as an automatic classifier, obtaining the confusion matrix using leave-one-out cross-validation. Classification performance was assessed with the Matthews correlation coefficient (MCC) and the classifier AUC. The most discriminant variables were found to be the total GCL++ thickness (between the inner limiting membrane and inner nuclear layer boundaries), evaluated in the peripapillary area, and macular retina thickness in the nasal quadrant of the outer and inner rings. Using the SVM classifier, we obtained the following values: MCC = 0.81, sensitivity = 0.89, specificity = 0.92, accuracy = 0.91, and classifier AUC = 0.97. Our findings suggest that it is possible to classify control subjects and MS patients without previous optic neuritis by applying machine-learning techniques to study the structural neurodegeneration in the retina.
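The evaluation pipeline described above (univariate AUC ranking of features, an SVM classifier, leave-one-out cross-validation, and MCC as the performance metric) can be sketched with scikit-learn. The data below are synthetic stand-ins for the OCT thickness measures; the kernel choice and number of candidate features are assumptions.

```python
# Hypothetical sketch of the abstract's pipeline: AUC-based feature ranking,
# then an SVM evaluated with leave-one-out cross-validation and the Matthews
# correlation coefficient. All data are synthetic, not the study's measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score, matthews_corrcoef

rng = np.random.default_rng(0)
n = 96                                   # 48 MS patients + 48 controls, as in the study
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 10))             # 10 synthetic OCT thickness measures
X[y == 1, :3] += 1.5                     # the first 3 carry class signal

# Rank features by univariate AUC and keep the 3 most discriminant,
# treating AUC below 0.5 as discriminant in the opposite direction.
aucs = [roc_auc_score(y, X[:, j]) for j in range(X.shape[1])]
top3 = np.argsort([max(a, 1 - a) for a in aucs])[-3:]

# SVM classifier with leave-one-out cross-validation
pred = cross_val_predict(SVC(kernel="linear"), X[:, top3], y, cv=LeaveOneOut())
mcc = matthews_corrcoef(y, pred)
print(f"MCC = {mcc:.2f}")
```

Leave-one-out is a natural choice at this sample size (96 subjects), since it uses all but one subject for training in each fold.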
This paper presents a new method to control and guide mobile robots. In this case, different commands are issued using electrooculography (EOG) techniques, so that control is carried out by means of the ocular position (the displacement of the eye within its orbit). A neural network is used to identify the inverse eye model, so that saccadic eye movements can be detected and the direction of the user's gaze determined. This control technique can be useful in multiple applications, but in this work it is used to guide an autonomous robot (a wheelchair) as a system to help people with severe disabilities. The system consists of a standard electric wheelchair with an on-board computer, sensors, and a graphical user interface running on the computer.
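The inverse eye model mentioned above maps the recorded EOG voltage back to a gaze angle. A minimal sketch of that idea, assuming the roughly linear EOG-amplitude/gaze-angle relation often cited for horizontal eye movements, is shown below with a small neural network regressor; the data, network size, and gain are simulated assumptions, not the authors' identified model.

```python
# Illustrative sketch (not the paper's network): fit a small neural network
# as an inverse eye model, recovering gaze angle from a simulated EOG amplitude
# that is roughly linear in the angle plus measurement noise.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
angle = rng.uniform(-40, 40, size=500)             # horizontal gaze angle, degrees
eog = 0.020 * angle + rng.normal(0, 0.040, 500)    # assumed gain ~0.02 mV/degree

model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(eog.reshape(-1, 1), angle)               # learn the inverse mapping

est = model.predict(np.array([[0.020 * 30]]))      # EOG sample for a 30-degree gaze
print(f"estimated gaze angle: {est[0]:.1f} degrees")
```

Once the gaze angle is recovered, detecting a saccade reduces to detecting a step change in the estimated angle, which can then be translated into a guidance command.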