“…Face detection using Haar feature-based cascade classifiers (Aguilar et al. [30]; Viola et al. [31]) is a well-known face detection approach due to its simplicity and robustness. Inspired by this model, we train a cascade function on ground-truth faces with their labels.…”
We present a new method for recognizing human facial emotions. Initially, we detect faces in the images using the well-known cascade classifiers. We then extract a localized regional descriptor (LRD), which represents the features of a face through regional appearance encoding. The LRD models various spatial regional patterns based on the relationships between local areas themselves, rather than relying only on raw, unprocessed intensity features of an image. To classify facial emotions, we train a multiclass support vector machine (M-SVM) classifier, which recognizes these emotions during the testing stage. Our method relies on robust features and is independent of gender and facial skin color; it is also invariant to illumination and orientation. We evaluated it on two benchmark datasets and compared it with four reference methods; it outperformed all of them on both datasets.
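The cascade detectors cited above are built from Haar-like rectangle features evaluated over an integral image. As a minimal sketch of those building blocks (the toy image and feature placement below are illustrative assumptions, not values from the paper):

```python
# Integral image + a two-rectangle Haar-like feature: the building
# blocks of Haar feature-based cascade face detectors.

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), size w x h."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def two_rect_feature(ii, x, y, w, h):
    """Haar-like edge feature: left half minus right half of a window."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy 4x4 "image": bright left half, dark right half.
img = [[9, 9, 1, 1]] * 4
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 4))  # → 64 (strong edge response)
```

Each rectangle sum costs only four lookups regardless of window size, which is what makes scanning many windows at many scales fast enough for real-time face detection.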
“…Also, those that decided to run the experiment in a portable resource-constrained execution environment rely on classical computer vision methods and do not use CNN-based object detectors (Chiu, 2014; Martinez-de Dios, 2001; Wang et al., 2016; Yong and Yeong, 2018). Typically, Haar-like (Aguilar, 2017; Rudol & Doherty, 2018) and SVM techniques (Bejiga, 2016; Zhou, Yuan, Yen, & Bastani, 2016) are utilized due to their fast performance and easy implementation. Nevertheless, the higher accuracy of CNN-based object detectors comes at the expense of higher computational resources compared to classical methods.…”
“…The most popular platform so far is OpenCV (Aguilar, 2017; Rudol & Doherty, 2018; Xu, Yu, Wu, Wang, & Ma, 2017). Accuracy is also specified in most of the cases, although some of the algorithms are evaluated on their own data sets.…”
Existing artificial intelligence solutions typically operate on powerful platforms with abundant computational resources. However, a growing number of emerging use cases, such as those based on unmanned aerial systems (UAS), require new solutions with embedded artificial intelligence on a highly mobile platform. This paper proposes an innovative UAS that explores machine learning (ML) capabilities in a smartphone‐based mobile platform for object detection and recognition applications. A new system framework tailored to this challenging use case is designed, with a customized workflow specified. Furthermore, the design of the embedded ML leverages TensorFlow, a cutting‐edge open‐source ML framework. The prototype integrates all the architectural components into a fully functional system suitable for real‐world operational environments such as search and rescue use cases. Experimental results validate the design and prototyping of the system and demonstrate an overall improved performance compared with the state of the art across a wide range of metrics.
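CNN-based detectors like the ones deployed here typically emit many overlapping candidate boxes per object, which are merged by non-maximum suppression (NMS). As a hedged sketch of that standard post-processing step (the box format, scores, and 0.5 threshold below are illustrative assumptions, not the paper's configuration):

```python
# Non-maximum suppression: keep the highest-scoring boxes and drop
# any box that overlaps an already-kept box too strongly.

def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: the near-duplicate box 1 is suppressed
```

On a resource-constrained smartphone platform, this greedy O(n²) variant is usually fast enough because the detector's score threshold keeps the candidate list short.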
“…The realization and classification of driving environments has become an important topic in many applications, such as autonomous vehicles, human-robot interaction, and human-machine systems in image processing and computer vision. In particular, there have been many studies in the field of advanced driving assistance systems (ADAS), such as pedestrian detection [1]–[3] and automatic intersection detection [4], [5].…”
To develop effective advanced driving assistance systems, it is important to accurately recognize the current driving environment and make critical decisions about driving processes. Preventing accidents through interaction between the driving assistance system and the environment, and ensuring optimum driving dynamics, are the main topics in this field. Vehicles need to recognize road type and quality with high accuracy to ensure the most suitable driving behavior, ideally using uncomplicated and cost-effective systems. In this study, a deep learning-based approach that can be used in vehicle driver assistance systems is proposed to automatically recognize road type and quality, using only driving images as input data. A new convolutional neural network model is designed for classification of the driving images. Driving images obtained from Google Street View are used to evaluate the recognition system in an actual driving environment. The proposed approach determines road types with an accuracy of 91.41%, and distinguishes pothole roads from smooth roads with an accuracy of 91.07%. The proposed method is thus an effective structure for advanced driving support systems, V2I communication systems, and similar intelligent transportation systems.
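The core operation of such a CNN classifier is the 2D convolution of an image with learned filters, followed by a nonlinearity. As a minimal sketch of that operation (the image and the Sobel-style vertical-edge kernel below are illustrative assumptions, not the paper's trained model):

```python
# Valid 2D convolution (cross-correlation) plus ReLU: the basic layer
# a CNN stacks to turn raw driving images into class-discriminative
# feature maps.

def conv2d(img, kernel):
    """Valid cross-correlation of a 2D image with a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            out[y][x] = sum(
                img[y + i][x + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)
            )
    return out

def relu(fmap):
    """Element-wise ReLU activation."""
    return [[max(0, v) for v in row] for row in fmap]

# A 3x4 image with a vertical edge, filtered by a vertical-edge kernel.
img = [[0, 0, 5, 5],
       [0, 0, 5, 5],
       [0, 0, 5, 5]]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
print(relu(conv2d(img, sobel_x)))  # → [[20, 20]]
```

In a real network the kernels are learned from labeled driving images rather than hand-designed, and many such layers are stacked before a final classification layer outputs the road type.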