This paper describes an eye-control method based on electrooculography (EOG) for an assisted-mobility system. One of its most important features is its modularity, which makes it adaptable to the particular needs of each user according to the type and degree of disability involved. An eye model based on the electrooculographic signal is proposed and its validity is studied. Several EOG-based human-machine interfaces (HMIs) are discussed, with the study focused on guiding and controlling a wheelchair for disabled people, where control is performed through eye movements within the socket. Different guidance techniques and strategies are then presented, with comments on the advantages and disadvantages of each. The system consists of a standard electric wheelchair with an on-board computer, sensors, and a graphical user interface run by the computer. This eye-control method can also be applied to handling graphical interfaces, where the eye acts as a computer mouse. The results obtained show that this control technique could be useful in multiple applications, such as mobility and communication aids for disabled persons.
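Saccadic eye movements in an EOG trace are commonly detected by thresholding the signal's velocity. A minimal sketch of that idea, where the sampling rate and velocity threshold are illustrative values, not figures from the paper:

```python
def detect_saccades(eog, fs=100.0, thresh=50.0):
    """Flag saccade onsets in an EOG trace by thresholding the
    first derivative of the signal (illustrative parameters)."""
    dt = 1.0 / fs
    onsets = []
    in_saccade = False
    for i in range(1, len(eog)):
        velocity = abs(eog[i] - eog[i - 1]) / dt  # amplitude units per second
        if velocity > thresh and not in_saccade:
            onsets.append(i)       # record the sample where the saccade starts
            in_saccade = True
        elif velocity <= thresh:
            in_saccade = False     # ready for the next saccade
    return onsets
```

A real system would also band-pass filter the signal and reject blinks, which produce large artifacts in the vertical EOG channel.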
This paper presents a nonintrusive prototype computer vision system for monitoring a driver's vigilance in real time. It is based on a hardware system for real-time acquisition of a driver's images using an active IR illuminator and on a software implementation for monitoring visual behaviors that characterize a driver's level of vigilance. Six parameters are calculated: percent eye closure (PERCLOS), eye-closure duration, blink frequency, nodding frequency, face position, and fixed gaze. These parameters are combined using a fuzzy classifier to infer the driver's level of inattentiveness. The use of multiple visual parameters and their fusion yields a more robust and accurate inattention characterization than any single parameter. The system has been tested with different sequences recorded in night and day driving conditions on a motorway and with different users. Some experimental results and conclusions about the performance of the system are presented. Index Terms: driver vigilance, eyelid movement, face position, fuzzy classifier, percent eye closure (PERCLOS), visual fatigue behaviors.
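The fusion step can be illustrated with a toy Mamdani-style fuzzy classifier that combines just two of the six parameters. The membership-function breakpoints and rules below are illustrative assumptions, not the ones used in the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def inattention_score(perclos, blink_freq):
    """Toy fuzzy fusion of PERCLOS (fraction, 0-1) and blink frequency
    (blinks/minute). All breakpoints are hypothetical."""
    # Fuzzify the inputs
    perclos_high = tri(perclos, 0.15, 0.4, 1.0)
    blink_high = tri(blink_freq, 10.0, 25.0, 60.0)
    perclos_low = tri(perclos, -0.2, 0.0, 0.2)
    # Rules: drowsy if PERCLOS high OR blinking high; alert if PERCLOS low
    drowsy = max(perclos_high, blink_high)
    alert = perclos_low
    # Weighted-average defuzzification to a crisp score in [0, 1]
    total = drowsy + alert
    return drowsy / total if total > 0 else 0.5
```

The appeal of the fuzzy approach is that each rule stays human-readable while the fused score degrades gracefully when one visual cue is noisy or momentarily unavailable.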
This paper presents a new real-time hierarchical (topological/metric) simultaneous localization and mapping (SLAM) system. It can be applied to the robust localization of a vehicle in large-scale outdoor urban environments, improving on current vehicle navigation systems, most of which rely solely on the Global Positioning System (GPS). It can thus be used for autonomous vehicle guidance on recurrent trajectories (bus routes, theme-park internal journeys, etc.). It is based exclusively on the information provided by a low-cost, wide-angle stereo camera and a low-cost GPS. Our approach divides the whole map into local submaps identified by so-called fingerprints (vehicle poses). At this submap level (low-level SLAM), a metric approach is carried out: a 3-D sequential map of visual natural landmarks and the vehicle location/orientation are obtained using a top-down Bayesian method to model the dynamic behavior. GPS measurements are integrated within this low level, improving vehicle positioning. A higher topological level (high-level SLAM), based on fingerprints and the MultiLevel Relaxation (MLR) algorithm, has been added to reduce the global error within the map while keeping real-time constraints. This level provides nearly consistent estimation, with only small degradation when GPS is unavailable. Experimental results for large-scale outdoor urban environments are presented, showing an almost constant processing time. Index Terms: Global Positioning System (GPS), outdoor simultaneous localization and mapping (SLAM), stereovision, vehicle navigation system.
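The core operation underlying MLR is iterative relaxation of a constraint graph over poses. A minimal single-level Gauss-Seidel sketch on 1-D poses conveys the idea; the real algorithm works hierarchically on multi-dimensional poses, and everything below (constraint format, iteration count) is an illustrative assumption:

```python
def relax_poses(poses, constraints, iters=200):
    """Gauss-Seidel relaxation on a 1-D pose graph.
    constraints: list of (i, j, d) meaning pose[j] - pose[i] ≈ d.
    Pose 0 is held fixed as the anchor."""
    poses = list(poses)
    for _ in range(iters):
        for k in range(1, len(poses)):
            # Average the estimates implied by every constraint touching k
            total, n = 0.0, 0
            for i, j, d in constraints:
                if j == k:
                    total += poses[i] + d
                    n += 1
                elif i == k:
                    total += poses[j] - d
                    n += 1
            if n:
                poses[k] = total / n
    return poses
```

MLR accelerates exactly this kind of relaxation by solving coarsened versions of the graph first, which is what lets the topological level stay within real-time constraints as the map grows.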
One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot by integrating different state-of-the-art SLAM methods based on vision, laser, and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is assumed, consisting of a monocular camera, an Inertial Measurement Unit (IMU), and an altimeter. This makes it possible to improve the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by resolving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can easily be incorporated into the SLAM system, yielding a local 2.5D map and a footprint estimate of the robot position that improves the 6D pose estimation through the EKF. We present experimental results with two different commercial platforms and validate the system by applying it to their position control.
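One way an altimeter can resolve the scale ambiguity of monocular SLAM is by fitting a scale factor between the visual and barometric altitude increments. This is a hedged standalone sketch of that idea; the paper instead folds the correction into the EKF state:

```python
def estimate_scale(slam_z, altimeter_z):
    """Least-squares scale factor s minimizing sum((s*dz_slam - dz_alt)^2)
    over consecutive altitude increments. slam_z: unscaled monocular SLAM
    altitudes; altimeter_z: metric altimeter readings at the same times."""
    num = den = 0.0
    for i in range(1, len(slam_z)):
        dz_s = slam_z[i] - slam_z[i - 1]   # visual increment (arbitrary units)
        dz_a = altimeter_z[i] - altimeter_z[i - 1]  # metric increment
        num += dz_s * dz_a
        den += dz_s * dz_s
    return num / den if den > 0 else 1.0  # fall back to unit scale
```

Multiplying the whole monocular trajectory and map by the returned factor puts them in metric units, after which they can be fused with the IMU and laser footprint estimates.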
This work presents a model based on fuzzy logic tools to predict and simulate the hot metal temperature in a blast furnace (BF). As input variables, the model uses the control variables of an operating BF, such as moisture, pulverised coal injection, oxygen addition, mineral/coke ratio, and blast volume, and it yields the hot metal temperature as output. The variables employed to develop the model were obtained from data supplied by sensors currently installed in a Spanish BF. In the model training stage, the adaptive neuro-fuzzy inference system (ANFIS) and subtractive clustering algorithms were used.
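ANFIS tunes the parameters of a Takagi-Sugeno fuzzy system; once trained, prediction amounts to a firing-strength-weighted average of the rule outputs. A minimal sketch of that inference step for zero-order rules, with all rule centers, widths, and outputs being illustrative stand-ins for what subtractive clustering and ANFIS training would produce:

```python
import math

def ts_predict(x, rules):
    """Zero-order Takagi-Sugeno inference.
    x: list of input values (e.g. moisture, coal injection, ...).
    rules: list of (centers, sigmas, output) with Gaussian memberships."""
    num = den = 0.0
    for centers, sigmas, out in rules:
        # Firing strength: product of Gaussian memberships over all inputs
        w = 1.0
        for xi, c, s in zip(x, centers, sigmas):
            w *= math.exp(-((xi - c) ** 2) / (2.0 * s ** 2))
        num += w * out
        den += w
    return num / den if den > 0 else 0.0  # weighted average of rule outputs
```

Subtractive clustering picks the rule centers from the plant data; ANFIS then refines the memberships and outputs by gradient descent and least squares, so the hypothetical hot-metal-temperature outputs below would in practice be learned, not hand-set.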
This paper presents a new method to control and guide mobile robots. In this case, commands are sent using electrooculography (EOG) techniques, so that control is achieved through ocular position (eye displacement within its orbit). A neural network is used to identify the inverse eye model, so that saccadic eye movements can be detected and the user's gaze direction determined. This control technique can be useful in multiple applications, but in this work it is used to guide an autonomous robot (a wheelchair) as an assistive system for people with severe disabilities. The system consists of a standard electric wheelchair with an on-board computer, sensors, and a graphical user interface running on the computer.
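As a simplified stand-in for the neural-network inverse eye model, a linear least-squares calibration from EOG amplitude to gaze angle illustrates the mapping being identified; the neural network exists precisely because the real relationship is nonlinear, and all calibration values here are hypothetical:

```python
def fit_inverse_eye_model(eog_mv, gaze_deg):
    """Fit gaze_deg ≈ slope * eog_mv + intercept by ordinary least squares
    from calibration pairs (user fixates known targets). Linear stand-in
    for the paper's neural-network inverse model."""
    n = len(eog_mv)
    mx = sum(eog_mv) / n
    my = sum(gaze_deg) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(eog_mv, gaze_deg))
    sxx = sum((x - mx) ** 2 for x in eog_mv)
    slope = sxy / sxx
    return slope, my - slope * mx  # angle ≈ slope * eog + intercept
```

In the wheelchair application, the recovered gaze angle is then mapped to guidance commands (e.g. look left to turn left), which is why a faithful inverse model matters more than raw signal amplitude.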