Proceedings. The IEEE 5th International Conference on Intelligent Transportation Systems
DOI: 10.1109/itsc.2002.1041208
Driver blink measurement by the motion picture processing and its application to drowsiness detection

Cited by 43 publications (16 citation statements)
References 2 publications
“…Using video recordings, eyelid movement is visible in the images and can be assessed using image processing methods. Different algorithms for that purpose are based on either the motion detection derived from differencing two consecutive images (e.g., Bhaskar, Keat, Ranganath, & Venkatesh, 2003;Chau & Betke, 2005;Fogelton & Benesova, 2016;Jiang, Tien, Huang, Zheng, & Atkins, 2013), a second-order derivative method of image differentiations (Gorodnichy, 2003), a state classification (e.g., Choi, Han, & Kim, 2011;Missimer & Betke, 2010;Pan, Sun, & Wu, 2008;Pan, Sun, Wu, & Lao, 2007), an evaluation of the color contrast or amount of visible color of specific eye regions (Cohn, Xiao, Moriyama, Ambadar, & Kanade, 2003;Danisman, Bilasco, Djeraba, & Ihaddadene, 2010;Lee, Lee, & Park, 2010), the distance between landmarks or arcs representing the upper and lower eyelid (Fuhl et al, 2016;Ito, Mita, Kozuka, Nakano, & Yamamoto, 2002;Miyakawa, Takano, & Nakamura, 2004;Moriyama et al, 2002;Sukno, Pavani, Butakoff, & Frangi, 2009), the missing regions of the open eye like the iris or pupil due to their occlusion by the upper and lower eyelid (Hansen & Pece, 2005;Pedrotti, Lei, Dzaack, & Rötting, 2011), or a combination of the described methods (Sirohey, Rosenfeld, & Duric, 2002). Instead of measuring the real distance between the upper and lower eyelid, most of these algorithms use an indirect measure (motion detection, classification, color contrast, missing eye regions) to conclude whether the eye is closed.…”
Section: Blink Detection Methods
confidence: 99%
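The frame-differencing family of methods described in the quote above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the threshold value and the idea of scoring the fraction of changed pixels are assumptions for demonstration, not details taken from any of the cited papers.

```python
import numpy as np

def blink_motion_score(prev_frame: np.ndarray, curr_frame: np.ndarray,
                       threshold: int = 25) -> float:
    """Fraction of pixels in the eye region that changed between two
    consecutive grayscale frames; a spike in this score suggests eyelid
    motion, i.e. a candidate blink."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > threshold
    return float(changed.mean())

# Toy example: a dark "eyelid" band appears in the second frame.
open_eye = np.full((32, 32), 200, dtype=np.uint8)
closed_eye = open_eye.copy()
closed_eye[12:20, :] = 40          # eyelid occludes part of the eye region
print(blink_motion_score(open_eye, closed_eye))  # 8 of 32 rows changed -> 0.25
```

In a real detector the score would be computed only over a tracked eye region and thresholded over time to separate blinks from head motion, which is exactly the indirect-measure limitation the quoted passage points out.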
“…10,13,19,29,35,37 Comparing the approaches is not easy, because results are reported in different, nonstandard ways. 5,8,9,15,34 For a comprehensive survey of eye- and gaze-tracking models and approaches, see refs. 8,27. However, it is possible to say that approaches based on color analysis are limited by illumination conditions and thus cannot be employed at night.…”
Section: State Of the Art
confidence: 99%
“…The pupil's estimated position can in this case be found as $\hat{e}^s = c^s_k + \hat{d}^s_k$ (11), where $\hat{d}^s_k$ is an estimate of the pupil's location vector relative to the centroid. An estimate of the relative position vector $\hat{d}^s_k$ is found by means of a simple first-order IIR running average filter of the form…”
Section: A. Eyes Location and Head-pose Estimation
confidence: 99%
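The first-order IIR running average referenced in the quote above has the standard form $\hat{d}_k = \alpha\, d_k + (1-\alpha)\,\hat{d}_{k-1}$. A minimal sketch follows; the smoothing factor `alpha` and the initialization from the first measurement are assumptions, since the quoted excerpt cuts off before the filter coefficients.

```python
import numpy as np

def iir_running_average(measurements, alpha: float = 0.3):
    """First-order IIR filter: each output blends the newest measurement
    with the previous filtered estimate (exponential smoothing)."""
    estimate = None
    out = []
    for d in measurements:
        d = np.asarray(d, dtype=float)
        # Initialize from the first sample, then recurse.
        estimate = d if estimate is None else alpha * d + (1 - alpha) * estimate
        out.append(estimate)
    return out

# Smoothing a noisy 2-D relative-position vector d_k over three frames.
track = iir_running_average([(10.0, 0.0), (12.0, 0.0), (8.0, 0.0)])
print(track[-1][0])  # 0.3*8 + 0.7*(0.3*12 + 0.7*10) = 9.82
```

Such a filter is attractive for tracking because it needs only the previous estimate, giving constant memory and cost per frame.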
“…This has motivated some researchers to use near-infrared (IR) cameras, exploiting the retina's high reflectivity to 850 nm wavelength illumination [8], [14]. Some approaches employ neural networks to extract the head and main features [4], [23], while others rely on a variety of template matching schemes [2], [6], [5], [11], [26].…”
Section: Introduction
confidence: 99%
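Of the families mentioned in the quote above, template matching is the simplest to illustrate. The sketch below uses exhaustive sum-of-squared-differences matching over a grayscale image; the window sizes and the SSD criterion are illustrative assumptions, as the cited schemes differ in their similarity measures and search strategies.

```python
import numpy as np

def match_template_ssd(image: np.ndarray, template: np.ndarray):
    """Exhaustive template matching: slide the template over the image and
    return the (row, col) of the window with the smallest sum of squared
    differences."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw].astype(float)
            ssd = np.sum((window - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Toy example: cut the template out at (5, 7) and recover its position.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, (20, 20)).astype(float)
tmpl = img[5:11, 7:13].copy()
print(match_template_ssd(img, tmpl))  # -> (5, 7)
```

Production systems typically replace the double loop with normalized cross-correlation computed via convolution, which tolerates illumination changes better, a point related to the quote's motivation for near-infrared imaging.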