Abstract: According to the Centers for Disease Control and Prevention (CDC), the average human life expectancy is 78.8 years. Specifically, 3.2 million deaths are reported yearly due to heart disease, cancer, Alzheimer’s disease, diabetes, and COVID-19. Diagnosing disease is essential in the current way of living to avoid preventable deaths and maintain average life expectancy. The CMOS image sensor (CIS) has become a prominent technology in assisting monitoring and clinical diagnosis devices to treat diseases in the med…
“…The second direction utilizes information from physiological signals such as the electrocardiogram (ECG) [10], electroencephalogram (EEG) [11], photoplethysmogram (PPG) [12], and electrodermal activity (EDA) [13], captured by physical sensors attached to the human body [14]; these body-worn sensors make it difficult for the driver to focus on driving. To overcome this problem, the third direction evolved: analyzing images captured by a camera sensor inside the vehicle to track the driver’s emotions without any physical contact.…”
Machine learning and deep learning are two branches of artificial intelligence that have proven very efficient in solving advanced human problems. The automotive industry currently uses these technologies to support drivers with advanced driver assistance systems. Such systems assist various driving functions and estimate a driver’s capability for stable driving behavior and road safety. Many studies have shown that a driver’s emotions are significant factors governing driving behavior and can lead to severe vehicle collisions. Therefore, continuous monitoring of drivers’ emotions can help predict their behavior and avoid accidents. To achieve this goal, a novel hybrid network architecture combining a deep neural network and a support vector machine has been developed to predict six to seven driver emotions under different poses, occlusions, and illumination conditions. To determine the emotions, a fusion of Gabor and LBP features is extracted and classified using a support vector machine combined with a convolutional neural network. Our proposed model achieved accuracies of 84.41%, 95.05%, 98.57%, and 98.64% on the FER 2013, CK+, KDEF, and KMU-FED datasets, respectively.
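The Gabor/LBP feature-fusion step described in this abstract can be sketched roughly as follows. This is a minimal NumPy-only illustration, not the authors’ implementation: the filter frequency, the four orientations, and the 8-neighbour LBP variant are all assumptions, and the downstream SVM/CNN classification stage is omitted.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram (assumed LBP variant)."""
    c = img[1:-1, 1:-1]  # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.uint8) << bit)  # set bit where neighbour >= centre
    hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    return hist

def gabor_kernel(theta, freq=0.25, sigma=2.0, size=9):
    """Real part of a Gabor filter at orientation theta (parameters are assumptions)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def fused_features(img):
    """Concatenate mean Gabor response energies with the LBP histogram."""
    F = np.fft.fft2(img)
    energies = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        K = np.fft.fft2(gabor_kernel(theta), s=img.shape)  # zero-padded kernel
        resp = np.real(np.fft.ifft2(F * K))                # circular convolution
        energies.append(np.abs(resp).mean())
    return np.concatenate([energies, lbp_histogram(img)])

# Toy usage: one 32x32 "face" patch -> a 4 + 256 = 260-dim fused descriptor
face = np.random.default_rng(0).random((32, 32))
print(fused_features(face).shape)  # (260,)
```

In the paper’s pipeline, vectors of this kind would then be fed to the SVM classifier alongside CNN features; here only the hand-crafted fusion is shown.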
“…Ooi et al. [11] in 2016 proposed a driver emotion recognition framework based on electrodermal activity (EDA) measurements with medically diagnosable physical sensors [38], using SVMs to predict the driver’s emotions. In 2010, Nasoz et al. [39] introduced a driver emotion system using KNN.…”
Section: Related Work
“…Among all these works, some [15, 25, 26, 28, 31, 33] propose systems running in a non-car environment, whereas others [20, 29, 37, 40, 41, 42] were conducted in a real-time in-vehicle environment. Still others [14, 16, 17, 18, 24, 30, 38, 39] used a simulator environment.…”
Monitoring drivers’ emotions is a key aspect of designing advanced driver assistance systems (ADAS) in intelligent vehicles. To ensure safety and reduce the likelihood of road accidents, emotional monitoring plays a key role in assessing the driver’s mental state while driving. However, pose variations, illumination conditions, and occlusions are factors that hinder accurate detection of driver emotions. To overcome these challenges, two novel approaches using machine learning methods and deep neural networks are proposed to monitor various drivers’ expressions under different pose variations, illuminations, and occlusions. We obtained remarkable accuracies of 93.41%, 83.68%, 98.47%, and 98.18% on the CK+, FER 2013, KDEF, and KMU-FED datasets, respectively, for the first approach, and improved accuracies of 96.15%, 84.58%, 99.18%, and 99.09% on the same datasets for the second approach, compared to existing state-of-the-art methods.
“…Figure 1 depicts the causes of heart disease: cardiac arrest, coronary artery disease (CAD), vascular disease, circulatory diseases, etc. To prevent tragic deaths and preserve the average lifespan, a condition must be diagnosed [6]. The Internet of Things (IoT), computer networking, and 5G are examples of automated networks used in healthcare.…”
Heart disease (HD) has surpassed all other causes of death in recent years. Estimating one’s risk of developing heart disease is difficult, since it requires both specialized knowledge and practical experience. The collection of sensor information for the diagnosis and prognosis of cardiac disease is a recent application of Internet of Things (IoT) technology in healthcare organizations. Despite the efforts of many scientists, diagnostic results for HD remain unreliable. To solve this problem, we offer an IoT platform that uses a Modified Self-Adaptive Bayesian algorithm (MSABA) to provide more precise assessments of HD. When the patient wears the smartwatch and pulse sensor device, it records vital signs, including electrocardiogram (ECG) and blood pressure, and sends the data to a computer. The MSABA is used to determine whether the obtained sensor data are normal or abnormal. To retrieve the features, kernel discriminant analysis (KDA) is used. By contrasting the suggested MSABA with existing models, we summarize the system’s efficacy. Findings on accuracy, precision, recall, and F1 measures show that the suggested MSABA-based prediction system outperforms competing approaches. The suggested method demonstrates that the MSABA achieves the highest rate of accuracy compared to the existing classifiers for the largest possible amount of data.
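The MSABA itself is not specified in this abstract, so the normal/abnormal labelling step can only be illustrated with a plainly named stand-in: a standard Gaussian naive Bayes classifier over toy vitals. The feature choice (heart rate, systolic blood pressure), the thresholds implied by the toy data, and the class labels are all assumptions for illustration only.

```python
import numpy as np

class GaussianNB:
    """Plain Gaussian naive Bayes: a stand-in, NOT the paper's MSABA."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        # Per-class feature means, variances (with a small floor), and priors
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log-likelihood: sum over features of log N(x | mu_c, var_c)
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(np.log(self.prior) + ll, axis=1)]

# Hypothetical sensor readings: [heart rate (bpm), systolic BP (mmHg)]
X = np.array([[72, 118], [68, 121], [75, 115],      # labelled normal (0)
              [130, 165], [125, 170], [140, 160]],  # labelled abnormal (1)
             dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
clf = GaussianNB().fit(X, y)
print(clf.predict(np.array([[70.0, 120.0], [135.0, 168.0]])))  # [0 1]
```

In the paper’s system, this decision step would sit behind the IoT pipeline (smartwatch/pulse sensor to computer) and operate on KDA-extracted features rather than raw readings.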