Purpose
The goal of this study is to test the real-world use of an emotion recognition system.

Design/methodology/approach
The researchers chose an existing algorithm with high accuracy and speed. Four of the six universal emotions (happiness, sadness, anger and surprise) were used, each associated with its own mood marker. The mood-matrix interface was then coded as a web application. Four guidance counselors and 10 students participated in testing the mood-matrix. The guidance counselors answered the technology acceptance model (TAM) questionnaire to assess the system's usefulness, and the students answered the general comfort questionnaire (GCQ) to assess their comfort levels.

Findings
Results from the TAM show that the mood-matrix is significantly useful to guidance counselors, and the GCQ results show that the students were comfortable during testing.

Originality/value
No study has yet tested an emotion recognition system applied to counseling or other mental health or psychological transactions.
Fruit classification is a computer vision task that aims to classify fruit classes correctly, given an image. Nearly all fruit classification studies have used RGB color images as inputs, a few have used costly hyperspectral images, and a few classical machine-learning studies have used colorized depth images. Depth images have apparent benefits such as invariance to lighting, lower storage requirements, better foreground-background separation, and more pronounced curvature details and object edge discontinuities. However, the use of depth images in CNN-based fruit classification remains unexplored. The purpose of this study is to investigate the use of colorized depth images in fruit classification with four CNN models, namely, AlexNet, GoogleNet, ResNet101 and VGG16, and to compare their performance and computational efficiency, as well as the impact of transfer learning. Depth images of apple, orange, mango, banana and rambutan (Nephelium lappaceum) were manually collected using a depth sensor with sub-millimeter accuracy and subjected to jet, uniform and inverse colorization to produce three datasets. Results show that depth images can be used to train CNN models for fruit classification, with ResNet101 achieving the best accuracy of 96% on the inverse dataset; it reached 100% accuracy after transfer learning. GoogleNet showed the most significant improvement after transfer learning on the uniform dataset, at 12.27%, and also exhibited the lowest training and inference times. The results show the potential use of depth images for fruit classification and similar computer vision tasks.