Emotion recognition from images, videos, or speech has been an active research topic for several years. The introduction of deep learning techniques, e.g., convolutional neural networks (CNNs), to emotion recognition has produced promising results. Human facial expressions are critical cues for understanding a person's emotions. This paper examines emotion recognition from videos using deep learning techniques. The methodology of the recognition process is described, and several video-based datasets used in scholarly work are examined. Results obtained from different emotion recognition models are presented along with their performance parameters. An experiment on the fer2013 dataset, carried out in Google Colab for depression detection, achieved 97% accuracy on the training set and 57.4% accuracy on the testing set.
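The fer2013 experiment mentioned above works on 48×48 grayscale faces, which the dataset distributes as CSV rows of space-separated pixel strings. A minimal parsing sketch is shown below; the function name and row layout follow the common fer2013 CSV format (`emotion,pixels,usage`) and are illustrative assumptions, not code from the paper.

```python
import numpy as np

def parse_fer2013_row(row):
    """Parse one fer2013-style CSV row: 'emotion,pixels,usage'.

    The pixels field holds 48*48 space-separated grayscale values
    in [0, 255]; we reshape to 48x48 and normalize to [0, 1].
    """
    emotion, pixels, usage = row.split(",")
    img = np.array([int(p) for p in pixels.split()], dtype=np.float32)
    img = img.reshape(48, 48) / 255.0
    return int(emotion), img, usage

# Synthetic example row: label 3, a uniform gray image, training split
row = "3," + " ".join(["128"] * (48 * 48)) + ",Training"
label, img, usage = parse_fer2013_row(row)
```

With real data, rows tagged `Training` would feed the training set and the remainder the test set, matching the 97% / 57.4% split reported above.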
<span>Machine learning has been introduced into the medical field to improve the accuracy, precision, and analysis of diagnostics while reducing laborious work. Mounting evidence shows that machine learning has the capability to detect mental distress such as depression. Depression is among the most prevalent mental disorders in society today, affecting a large share of the population, so there is a pressing need for depression detection models that provide a support system and early detection. This review covers image- and video-based depression detection models built with machine learning techniques. The paper analyses data acquisition techniques along with their databases, and the indicators of depression are also reviewed. The evaluation of different studies, along with their performance parameters, is summarized. The paper concludes with remarks on the techniques used and the future scope of image- and video-based depression prediction.</span>
Affective computing is an emerging interdisciplinary research field bringing together specialists and practitioners from diverse areas, from artificial intelligence and natural language processing to the cognitive and social sciences. The idea behind affective computing is to give computers the kind of insight needed to understand human emotions. Notwithstanding its successes, the field still lacks firm theoretical foundations and systematic guidelines in many areas, particularly in emotion modeling and in developing computational models of emotion. This research applies affective computing to improve the performance of human-machine interaction. The focus of this work is to identify the emotional state of a human using a deep learning technique, i.e., a convolutional neural network (CNN) with three convolution layers, pooling layers, two fully connected layers, batch normalization, dropout ratios, and tuned learning rates. The Warsaw Set of Emotional Facial Expression
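The abstract names the CNN building blocks (convolution, pooling, fully connected layers). As a hedged illustration of how these pieces compose, not the authors' implementation, here is a single conv-pool-dense stage in plain NumPy for a 48×48 grayscale input; the 7-class softmax output and all layer sizes are assumptions for the sketch.

```python
import numpy as np

def conv2d(x, w):
    """Valid (no-padding) convolution of a single-channel image x with kernel w."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling with stride 2 (trims odd edges)."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
img = rng.random((48, 48))                    # one grayscale face, as in fer2013
kernel = rng.standard_normal((3, 3))          # one learned 3x3 filter
feat = maxpool2(relu(conv2d(img, kernel)))    # conv -> ReLU -> pool: 46x46 -> 23x23
flat = feat.ravel()
W = rng.standard_normal((7, flat.size)) * 0.01  # dense layer to 7 emotion classes
probs = softmax(W @ flat)                     # class probabilities
```

A trained model would stack three such conv stages (as the abstract describes), add batch normalization and dropout between them, and learn the kernels and dense weights by backpropagation.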
Indoor Positioning Systems (IPS) are widely used in today's industry for locating people or objects, for purposes such as inspection, navigation, and security. Much research has been done to develop such systems using wireless technologies such as Bluetooth and Wi-Fi, and techniques that give better performance in terms of accuracy have been investigated. In this paper, the ZigBee IEEE 802.15.4 wireless communication protocol is used to implement an indoor localization system. The research focuses on analyzing the behaviour of Received Signal Strength Indicator (RSSI) readings under several conditions and locations, applying the trilateration algorithm for localization. The conditions include increasing the number of transmitters and experimenting in rooms with and without active wireless connections, comparing the variation of RSSI values. Analysis of the results shows that the accuracy of the system improved as the number of transmitters was increased.
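The RSSI-plus-trilateration pipeline described above can be sketched as follows. This is an illustrative least-squares implementation, assuming a log-distance path-loss model with calibration parameters `rssi_d0` (RSSI at 1 m) and path-loss exponent `n`; the paper's actual calibration values are not given here.

```python
import numpy as np

def rssi_to_distance(rssi, rssi_d0=-40.0, n=2.0):
    """Invert the log-distance path-loss model:
    RSSI(d) = rssi_d0 - 10*n*log10(d), with d in meters.
    rssi_d0 and n are environment-dependent and must be calibrated."""
    return 10 ** ((rssi_d0 - rssi) / (10.0 * n))

def trilaterate(anchors, dists):
    """Least-squares position from >=3 transmitter positions and distances.
    Linearized by subtracting the first range equation from the others."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    x1, y1 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - x1 ** 2 - y1 ** 2
         + d[0] ** 2 - d[1:] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Synthetic check: three transmitters at known positions, receiver at (2, 3)
anchors = np.array([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)])
true_pos = np.array([2.0, 3.0])
rssi = [-40.0 - 20.0 * np.log10(np.linalg.norm(true_pos - a)) for a in anchors]
est = trilaterate(anchors, [rssi_to_distance(r) for r in rssi])
```

Adding more transmitters simply adds rows to the least-squares system, which is consistent with the paper's finding that accuracy improves as the number of transmitters increases.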