Abstract. This paper presents a driving simulator that takes into account information about the user's state of mind (attention level, fatigue state, stress state). The analysis of the user's state of mind is based on video data and biological signals. Facial movements such as eye blinking, yawning, and head rotations are detected in the video data and used to evaluate the driver's fatigue and attention level. The user's electrocardiogram and galvanic skin response are recorded and analyzed to evaluate the driver's stress level. The driver simulator software is modified so that the system can react appropriately to these critical situations of fatigue and stress: audio and visual messages are sent to the driver, wheel vibrations are generated, and the driver is expected to react to the alert messages. A multi-threaded system is proposed to support multiple messages sent through different modalities. Strategies for data fusion and fission are also provided.
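The multi-threaded, multi-modality alert delivery described above can be sketched as a fan-out of messages to per-modality worker threads. This is a minimal illustration, not the paper's implementation; the modality names and handler functions are hypothetical.

```python
import queue
import threading

def run_alert_dispatcher(alerts, modalities):
    """Sketch of a multi-threaded alert system: one worker thread per
    output modality consumes messages from its own queue, so audio,
    visual, and wheel-vibration alerts can be delivered concurrently.
    `modalities` maps a modality name to a (hypothetical) handler."""
    queues = {name: queue.Queue() for name in modalities}
    delivered = []
    lock = threading.Lock()

    def worker(name, handler):
        while True:
            msg = queues[name].get()
            if msg is None:               # sentinel: stop this worker
                break
            with lock:
                delivered.append((name, handler(msg)))

    threads = [threading.Thread(target=worker, args=(n, h))
               for n, h in modalities.items()]
    for t in threads:
        t.start()
    for name, msg in alerts:              # fan out each alert message
        queues[name].put(msg)
    for name in modalities:               # signal all workers to stop
        queues[name].put(None)
    for t in threads:
        t.join()
    return delivered
```

Each modality runs independently, so a slow handler (e.g. speech synthesis) does not block a fast one (e.g. wheel vibration).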
This paper introduces a video object segmentation algorithm developed in the context of the European project Art.live, where constraints on segmentation quality and processing rate (at least 10 images/second) are imposed. In order to obtain a fine segmentation (no blocking effect, precise boundaries, temporal stability without flickering), the segmentation process is based on Markov random field (MRF) modelling, which combines the consecutive frame difference and a reference image in a unified way. Temporal changes of the luminance are predominant when the reference image is not yet available, whereas the reference image prevails for low-textured moving objects or for objects that stop moving for a while. The increased processing rate comes from the substitution of some Markovian iterations with morphological operations, without loss of quality. Simulation results show the efficiency of the proposed method in terms of accuracy and complexity (≃6 images/second for 352×288-pixel YUV images on a low-end processor).
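The combination of the two cues can be illustrated with a per-pixel sketch: the consecutive frame difference flags moving pixels, and the reference image catches low-textured or briefly stopped objects. This is a simplification under assumed thresholds; the paper regularizes the same two cues jointly with an MRF and morphological operations.

```python
import numpy as np

def segment_frame(curr, prev, ref=None, tau_diff=15, tau_ref=25):
    """Label pixels as moving (1) or background (0) by combining the
    consecutive frame difference with a reference (background) image.
    Thresholds tau_diff and tau_ref are illustrative assumptions."""
    curr = curr.astype(np.int32)
    moving = np.abs(curr - prev.astype(np.int32)) > tau_diff
    if ref is not None:
        # The reference image dominates for low-textured or briefly
        # stopped objects, which leave no inter-frame difference.
        moving |= np.abs(curr - ref.astype(np.int32)) > tau_ref
    return moving.astype(np.uint8)
```

When no reference image is available yet, only the temporal luminance change contributes, matching the behavior described in the abstract.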
In this paper we focus on the software design of a multimodal driving simulator that is based on multimodal detection of the driver's focus of attention as well as detection and prediction of the driver's fatigue state. Capturing and interpreting the driver's focus of attention and fatigue state is based on video data (e.g., facial expression, head movement, eye tracking). While the input multimodal interface relies on passive modalities only (also called an attentive user interface), the output multimodal user interface includes several active output modalities for presenting alert messages, including graphics and text on a mini-screen and in the windshield, sounds, speech, and vibration (a vibrating wheel). Active input modalities are added in the meta-User Interface to let the user dynamically select the output modalities. The driving simulator is used as a case study for studying its software architecture, based on multimodal signal processing and multimodal interaction components, considering two software platforms, OpenInterface and ICARE.
We present an algorithm that can track multiple persons and their faces simultaneously in a video sequence, even if they are completely occluded from the camera's point of view. The algorithm is based on the detection and tracking of person masks and their faces. Face localization uses skin detection based on color information with adaptive thresholding. In order to handle occlusions, a Kalman filter is defined for each person, which allows prediction of the person's bounding box, the face bounding box, and their speeds. In case of incomplete measurements (for instance, partial occlusion), partial Kalman filtering is performed. Several results show the efficiency of this method, and the algorithm allows real-time processing.
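The partial Kalman filtering idea can be sketched as follows: when only some components of the measurement are available (e.g. the face box is occluded), the corresponding rows of the observation matrix are dropped and the remaining components are filtered normally. This is a hypothetical constant-velocity sketch; the paper's exact state vector and noise models may differ.

```python
import numpy as np

class PartialKalman:
    """Constant-velocity Kalman filter over `dim` position components
    (e.g. box coordinates) plus their velocities, supporting partial
    updates when only a subset of the positions is measured."""

    def __init__(self, dim=4, q=1e-2, r=1.0):
        self.n = 2 * dim                      # positions + velocities
        self.x = np.zeros(self.n)             # state estimate
        self.P = np.eye(self.n)               # state covariance
        self.F = np.eye(self.n)
        self.F[:dim, dim:] = np.eye(dim)      # x_{t+1} = x_t + v_t
        self.Q = q * np.eye(self.n)           # process noise (assumed)
        self.H_full = np.hstack([np.eye(dim), np.zeros((dim, dim))])
        self.r = r                            # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[: self.n // 2]          # predicted positions

    def update(self, z, observed):
        # Keep only the rows of H matching the observed components;
        # unobserved components evolve by prediction alone.
        H = self.H_full[observed]
        R = self.r * np.eye(len(observed))
        y = np.asarray(z, float) - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(self.n) - K @ H) @ self.P
```

During a full occlusion, only `predict` is called, which is what lets the tracker coast through frames with no measurements at all.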
The problem of multiple people detection in monocular video streams is addressed. The proposed method involves a human model based on skin color and foreground information. Robustness to local motion of the background and to global color changes is achieved by modeling images as fields of color distributions and robustly estimating temporal global variations of the background. The estimation of the human model parameters is done via Monte Carlo simulations to deal with the multimodal nature of the posterior distribution, introduced by the presence of multiple people and a cluttered scene. Promising results are presented for transportation-vehicle sequences.
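Why Monte Carlo rather than a single optimizer: with several people in the scene, the posterior over model parameters has one mode per person, and a gradient method would converge to only one of them. A toy one-dimensional sketch of sampling a multimodal likelihood and keeping one high-scoring candidate per mode (all names and numbers here are illustrative, not from the paper):

```python
import numpy as np

def sample_person_positions(likelihood, n_samples=500, n_people=2,
                            width=100, min_sep=10, rng=None):
    """Monte Carlo sketch: draw candidate positions uniformly, weight
    them by a (possibly multimodal) likelihood, and keep the top-scoring
    mutually distant candidates, one per person hypothesis."""
    rng = rng if rng is not None else np.random.default_rng(0)
    candidates = rng.uniform(0, width, n_samples)
    weights = likelihood(candidates)
    picks = []
    for i in np.argsort(weights)[::-1]:   # best-weighted first
        if all(abs(candidates[i] - p) > min_sep for p in picks):
            picks.append(float(candidates[i]))
        if len(picks) == n_people:
            break
    return sorted(picks)
```

The minimum-separation rule is a crude stand-in for the mode-separation that a full posterior analysis would provide; it simply prevents both hypotheses from landing on the same peak.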