Automobiles are among the most common modes of transportation in urban areas. An alert mind is a prerequisite for safe driving; driver stress can lead to faulty decision-making and cause severe accidents. Therefore, numerous techniques and systems have been proposed and implemented to subdue negative emotions and improve the driving experience. Studies show that conditions such as the road, the state of the vehicle, the weather, the driver's personality, and the presence of passengers can affect driver stress, and all of these factors significantly influence a driver's attention. This paper presents a detailed review of techniques proposed to reduce and recover from driving stress. These technologies can be divided into three categories: notification alerts, driver assistance systems, and environmental soothing. Notification alert systems enhance the driving experience by strengthening drivers' awareness of their physiological condition, thereby helping to avoid accidents. Driver assistance systems support drivers with guidance during difficult driving circumstances. Environmental soothing techniques help relieve driver stress caused by changes in the environment. Furthermore, driving maneuvers, driver stress detection, and driver stress and its contributing factors are discussed and reviewed to facilitate a better understanding of the topic.
Sign language enables hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which creates a communication barrier with the hearing-impaired community. Many studies of sign language recognition using computer vision (CV) have been conducted worldwide to reduce this barrier. However, the CV approach is restricted by the camera's viewing angle and is highly affected by environmental factors. In addition, CV usually involves machine learning, which requires collaboration among a team of experts and costly hardware; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion to "fuse" readings from six inertial measurement units (IMUs). The IMUs are attached to the five fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by a camera's field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution that assists hearing-impaired people in communicating with others and improves their quality of life.
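The sensor-fusion step described above can be illustrated with a minimal sketch: windows of readings from the six IMUs are stacked into a single feature matrix suitable for a recurrent or convolutional classifier. The sensor axis count, window length, and function names below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Assumed layout: six IMUs (five fingertips + back of the hand),
# each reporting a 3-axis accelerometer and 3-axis gyroscope,
# sampled over a fixed-length gesture window.
NUM_IMUS = 6
AXES_PER_IMU = 6      # accel (x, y, z) + gyro (x, y, z) -- assumption
WINDOW = 100          # timesteps per gesture window -- assumption

def fuse_imu_streams(streams):
    """Fuse per-IMU windows of shape (WINDOW, AXES_PER_IMU) into one
    (WINDOW, NUM_IMUS * AXES_PER_IMU) matrix by concatenating the
    sensor channels at each timestep."""
    assert len(streams) == NUM_IMUS
    return np.concatenate(streams, axis=1)

# Stand-in random data for one gesture window from each IMU.
rng = np.random.default_rng(0)
streams = [rng.normal(size=(WINDOW, AXES_PER_IMU)) for _ in range(NUM_IMUS)]

fused = fuse_imu_streams(streams)
print(fused.shape)  # (100, 36): one fused input for a deep-learning model
```

The fused matrix is what a deep model (e.g., an LSTM or 1-D CNN) would consume per gesture; channel-level concatenation like this is a common, simple form of early sensor fusion.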