This informed consent form is to make sure that you understand the nature of your involvement in this study, and to obtain your informed consent to participate in this study.

Procedure: You will be asked to sit in the driver's seat of a parked car. Three different videos will be taken of the frontal view of your face in conditions of yawning, talking, and normal closed mouth. The entire recording session will last about 5 minutes.

Withdrawing from the study: Your participation in this study is voluntary. You may withdraw from the study at any time, by verbally informing the investigator or any of the researchers, even after signing this form. There will be no consequences following this action.

Compensation: You will not receive monetary compensation for this study.

Disclosure: You agree for the above-mentioned videos captured of you to be made available to researchers from all over the world, for research and further study, on publicly accessible websites and for non-commercial purposes.

Confidentiality: Other than the above "Disclosure", all other information about you collected during the study will be kept strictly confidential. Your name will not be associated with the collected data in any way. The data collection will be conducted by Dr. Shirmohammadi, his graduate students, research fellows, or his research assistant.

In closing: Upon your participation, you will be given a copy of this consent form. At the conclusion of the study, should you wish, you will be provided with a summary of the results. You may ask questions at any time, even after signing this consent form.

Signatures: I have read the above description of the study and understand the conditions of participation. My signature indicates that I agree to participate in the study.
In this paper, we present two video datasets of drivers with various facial characteristics, to be used for designing and testing algorithms and models for yawning detection. To collect these videos, male and female participants were asked to sit in the driver's seat of a car. The videos were taken in real and varying illumination conditions. In the first dataset, the camera is installed under the front mirror of the car. Each participant has three or four videos, and each video contains a different mouth condition: normal, talking/singing, or yawning. In the second dataset, the camera is installed on the dashboard in front of the driver, and each participant has one video containing all of the above-mentioned mouth conditions. For both datasets, the car was parked to keep the environment safe for the participants. As a benchmark, we also present the results of our own yawning detection method, and show that it achieves much higher accuracy in the scenario with the camera installed on the dashboard in front of the driver.