Human gender is regarded as a primary demographic attribute because of its many practical applications. Gender classification in unconstrained environments is a challenging task owing to large variations in image conditions. Because internet images are so heterogeneous, traditional machine learning methods suffer in classification accuracy. The aim of this research is to streamline the gender classification process using transfer learning. We propose a framework that performs automatic gender classification on unconstrained internet images using Pareto-frontier deep learning networks: GoogLeNet, SqueezeNet, and ResNet-50. We evaluate three such Pareto-frontier convolutional neural network (CNN) models, each pre-trained on ImageNet. Extensive experiments demonstrate that these networks perform remarkably well on both an unconstrained internet-image dataset and frontal-face images, paving the way toward an automatic gender classification system.
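The transfer-learning recipe described above — reuse an ImageNet-pretrained backbone as a frozen feature extractor and train only a new gender classifier head — can be sketched as follows. The "backbone features" here are synthetic stand-ins (an assumption for illustration); in practice they would be GoogLeNet, SqueezeNet, or ResNet-50 activations.

```python
import numpy as np

# Stand-in for frozen backbone features: two synthetic clusters of
# 8-dimensional "CNN activations", one cluster per gender class.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(+2.0, 1.0, (50, 8)),   # class 0
                      rng.normal(-2.0, 1.0, (50, 8))])  # class 1
labels = np.array([0] * 50 + [1] * 50)

# New classifier head: logistic regression trained by gradient descent.
# Only these parameters are updated; the "backbone" stays frozen.
w, b = np.zeros(8), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid output
    grad = p - labels                              # dLoss/dz for log-loss
    w -= 0.1 * features.T @ grad / len(labels)
    b -= 0.1 * grad.mean()

accuracy = ((p > 0.5) == labels).mean()
```

On these well-separated toy features the head converges to near-perfect training accuracy, which is the point of the recipe: the expensive representation is inherited, and only a small classifier is fit to the new task.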
We consider k-dominant skyline computation when the underlying dataset is partitioned across geographically distant computing cores connected to a coordinator (server). Existing k-dominant skyline solutions are unsuitable for this setting because they assume a centralized query processor, which limits scalability and introduces a single point of failure. Moreover, unlike ordinary skyline computation, k-dominance does not satisfy the transitivity property. In this paper, we develop a multicore-based spatial k-dominant skyline (MSKS) computation algorithm. MSKS is a feedback-driven mechanism in which the coordinator iteratively transmits data to each computing core, and each core can prune a large amount of local data that would otherwise need to be sent to the coordinator. Furthermore, MSKS supports a user-friendly progress indicator that allows users to modify (insert, delete, and update) the data and monitor the progress of long-running k-dominant skyline queries. An extensive performance study shows that the proposed algorithm is efficient, is robust across different data distributions, and achieves its progressive goal with minimal overhead.
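Because k-dominance is not transitive, a point can be k-dominated by a point that is itself outside the k-dominant skyline, so a correct check must compare each point against every other point. A minimal brute-force sketch (minimization assumed; the function names are illustrative, not MSKS itself):

```python
def k_dominates(p, q, k):
    """p k-dominates q if p is better-or-equal in at least k dimensions
    and strictly better in at least one of them (smaller is better)."""
    better_eq = [i for i in range(len(p)) if p[i] <= q[i]]
    return len(better_eq) >= k and any(p[i] < q[i] for i in better_eq)

def k_dominant_skyline(points, k):
    # Non-transitivity forces a full pairwise check: a point may be
    # k-dominated by another point that is itself k-dominated.
    return [p for p in points
            if not any(k_dominates(q, p, k) for q in points if q != p)]

result = k_dominant_skyline([(1, 1, 5), (2, 2, 1), (3, 3, 3)], k=2)
```

Here (1, 1, 5) 2-dominates both other points, while no point 2-dominates it, so it is the only 2-dominant skyline point; MSKS's contribution is distributing and pruning this quadratic check, not changing its semantics.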
The COVID-19 pandemic markedly changed how people shop, necessitating contactless shopping systems to efficiently curb the spread of the contagious disease. Consequently, customers prefer stores where they can avoid physical contact and shorten the shopping process through extended services such as personalized product recommendations. Automatic age and gender estimation of a customer in a smart store benefits the consumer through personalized advertisements and product recommendations; likewise, it helps the smart store proprietor promote sales and continually refine inventory for future retail. In this paper, we propose a deep learning-based enterprise solution for smart store customer relationship management (CRM) that predicts age and gender from a customer's face image taken in an unconstrained environment, facilitating the smart store's extended services as expected of a modern venture. For the age estimation problem, we mitigate the data sparsity of the large public IMDB-WIKI dataset through image enhancement with another dataset and perform data augmentation as required. We handle both classification tasks with an empirically strong pre-trained convolutional neural network (CNN), the VGG-16 network, and incorporate batch normalization. In particular, the age estimation task is posed as a deep classification problem followed by a multinomial logistic regression first-moment refinement. We validate our system on two standard benchmarks, one for each task, and demonstrate state-of-the-art performance for both real age and gender estimation.
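The first-moment refinement mentioned above can be read as taking the expected value of the predicted age distribution rather than its arg-max. A minimal sketch (the age bins and probabilities below are illustrative, not from the paper's dataset):

```python
def refine_age(class_probs, class_ages):
    """Refine a deep age classifier's output by its first moment:
    the expected age under the predicted class distribution."""
    assert abs(sum(class_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * a for p, a in zip(class_probs, class_ages))

# Arg-max over these bins would predict 30; the first moment instead
# blends neighboring bins into a finer-grained estimate.
age = refine_age([0.2, 0.7, 0.1], [20, 30, 40])
```

The benefit over arg-max is that nearby age classes, which the network rarely separates cleanly, contribute proportionally to the final estimate instead of being discarded.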
Human action recognition has become one of the most attractive and demanding research fields in computer vision and pattern recognition, as it facilitates easy, smart, and comfortable human-machine interaction. With substantial research progress in recent years, several methods have been proposed to discriminate different types of human actions using color, depth, inertial, and skeleton information. Despite this variety of modalities, classifying human actions from skeleton joint information in 3-dimensional space remains a challenging problem. In this paper, we present an effective method for action recognition using 3D skeleton data. First, we analyzed large-scale 3D skeleton joint information and applied meaningful pre-processing. Then, we designed a simple, straightforward deep convolutional neural network (DCNN) to classify the desired actions and evaluate the effectiveness and robustness of the proposed system. We also evaluated established DCNN models such as ResNet18 and MobileNetV2, which outperform existing systems that use human skeleton joint information.
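One common pre-processing step for feeding 3D skeleton sequences to a 2D CNN — offered here as a typical choice, not necessarily the paper's exact pipeline — is to map each (frame, joint, xyz) tensor to a pseudo-image whose three channels are the min-max-normalized x, y, and z coordinates:

```python
import numpy as np

def skeleton_to_image(seq):
    """Map a (frames, joints, 3) skeleton sequence to a uint8 pseudo-image
    of the same shape: each coordinate axis becomes one image channel,
    min-max normalized to [0, 255] so a 2D CNN can consume it."""
    seq = np.asarray(seq, dtype=np.float64)
    lo = seq.min(axis=(0, 1), keepdims=True)   # per-channel minimum
    hi = seq.max(axis=(0, 1), keepdims=True)   # per-channel maximum
    scaled = (seq - lo) / np.maximum(hi - lo, 1e-12) * 255.0
    return scaled.astype(np.uint8)

# e.g. 2 frames of a 25-joint skeleton (25 joints as in NTU-style data)
img = skeleton_to_image(np.arange(2 * 25 * 3).reshape(2, 25, 3))
```

The appeal of this encoding is that temporal structure (rows) and skeletal structure (columns) become spatial axes, so standard image backbones such as ResNet18 or MobileNetV2 can be applied without architectural changes.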