Modern image recognition models have millions of parameters and demand large amounts of training data and energy-hungry computing power, which makes them inefficient for everyday use. Machine learning has shifted the computing paradigm from complex calculations that require heavy computation toward efficient technologies that can serve daily needs. To obtain the best model, many studies train on very large datasets; however, the complexity of such datasets demands large devices and high computing power, and these resource requirements leave little flexibility for human-centered interaction, which prioritizes efficient and effective computer vision. This study applies Convolutional Neural Networks (CNN) with the MobileNet architecture for image recognition on resource-constrained mobile and embedded devices with ARM-based CPUs, working with a moderate amount of training data (thousands of labeled images). As a result, the MobileNet v1 architecture on the ms8pro device classifies the Caltech101 dataset with 92.4% accuracy at a 2.1 W power draw. With this level of accuracy and resource efficiency, the MobileNet architecture is expected to shift the machine-learning paradigm toward greater flexibility for human interaction that prioritizes efficient and effective computer vision.
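MobileNet's efficiency comes from replacing standard convolutions with depthwise separable convolutions. The abstract does not include code, so the following is an illustrative NumPy sketch (not the paper's implementation) of one such block, together with the parameter-count comparison that explains the savings:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    """Depthwise separable convolution (valid padding, stride 1),
    the core building block of MobileNet.
    x: (H, W, C_in); dw_kernels: (k, k, C_in); pw_kernels: (C_in, C_out)."""
    H, W, C_in = x.shape
    k = dw_kernels.shape[0]
    oh, ow = H - k + 1, W - k + 1
    dw = np.zeros((oh, ow, C_in))
    for c in range(C_in):              # depthwise: one k-by-k filter per channel
        for i in range(oh):
            for j in range(ow):
                dw[i, j, c] = np.sum(x[i:i+k, j:j+k, c] * dw_kernels[:, :, c])
    return dw @ pw_kernels             # pointwise 1x1 conv mixes channels

def param_counts(k, c_in, c_out):
    """Weights in a standard conv vs. its depthwise separable replacement."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out
    return standard, separable

x = np.random.default_rng(0).normal(size=(8, 8, 3))
y = depthwise_separable_conv(x, np.ones((3, 3, 3)), np.ones((3, 16)))
print(y.shape)                         # (6, 6, 16)
print(param_counts(3, 32, 64))         # (18432, 2336): roughly 8x fewer weights
```

The roughly 8x reduction in weights for a typical 3x3 layer is what allows inference to fit within the power budget of ARM-based embedded devices.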
Analyzing data to identify attacks and anomalies is a crucial task in anomaly detection, and network anomaly detection itself is an important issue in network security. Researchers have developed many methods and algorithms to improve anomaly detection systems, and survey papers on anomaly detection research are available. Nevertheless, this paper analyzes the field further and provides an alternative taxonomy of anomaly detection research, focusing on methods, types of anomalies, data repositories, outlier identity, and the most-used data types. In addition, it summarizes the application network categories of existing studies.
Classification of facial expressions has become an essential part of computer systems and fast human-computer interaction. It is employed in applications such as digital entertainment, customer service, driver monitoring, and emotional robots. It has also been studied from several aspects of the face itself, since facial expressions change with the point of view or perspective: facial curves such as the eyebrows, nose, lips, and mouth change automatically. Most proposed methods are limited to frontal Facial Expression Recognition (FER), and their performance decreases on non-frontal and multi-view FER cases. This study combines two methods for classifying facial expressions: Principal Component Analysis (PCA) and a Convolutional Neural Network (CNN). The results proved more accurate than those of previous studies: on the Static Facial Expressions in the Wild (SFEW) 2.0 dataset, the combined PCA and CNN method obtained 70.4% accuracy, while the CNN alone obtained only 60.9%.
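The abstract does not specify how PCA feeds the CNN, but the usual pattern is to project flattened face crops onto the top principal components before classification. A minimal sketch of that projection step, using SVD on synthetic stand-in data (the real SFEW 2.0 images are not reproduced here):

```python
import numpy as np

# Synthetic stand-in for 100 flattened 48x48 face crops; the crop size
# and component count are illustrative assumptions, not the paper's values.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 48 * 48))

# PCA via SVD of the mean-centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 50
Z = Xc @ Vt[:k].T          # (100, 50): compact inputs for a downstream classifier
explained = (S[:k] ** 2).sum() / (S ** 2).sum()  # fraction of variance kept
```

Reducing each image from 2,304 raw pixels to 50 components both denoises the input and shrinks the network that must be trained on it.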
Fake news is false information that looks true. Such news can serve as a political weapon whose veracity cannot be verified and which is spread intentionally to achieve a certain goal. Classifying news texts requires computing a value for each word in a document, so the number of data dimensions equals the number of words: the more words a document contains, the higher the dimensionality of each data point. This high dimensionality makes model training slow and also hampers measuring document similarity. The dataset used in this study comprises 20,000 records with 17 attributes. The study applies a Random Forest Classifier (RFC), a Support Vector Machine (SVM), and Logistic Regression (LR) to the high-dimensional data and compares the accuracy obtained by each method.
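The "one dimension per word" problem the abstract describes can be seen in a tiny bag-of-words example. The documents below are hypothetical, not drawn from the study's dataset:

```python
# Toy illustration: each distinct word in the corpus becomes one dimension,
# so vector length grows with vocabulary size.
docs = [
    "fake news spreads fast online",
    "verified news cites real sources",
]

vocab = sorted({w for d in docs for w in d.split()})
vectors = [[d.split().count(w) for w in vocab] for d in docs]

print(len(vocab))   # 9 distinct words -> every document becomes a 9-dim vector
```

With 20,000 real news articles the vocabulary reaches tens of thousands of words, which is why training time and document-similarity computations blow up without dimensionality reduction.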
<span>Handwriting analysis has wide scope, including recruitment, medical diagnosis, forensics, psychology, and human-computer interaction. Computerized handwriting analysis makes it easier to recognize human personality and can help graphologists understand and identify it. Features of the handwriting are used as input to classify a person's personality traits. This paper takes a pattern recognition point of view, in which the different stages are described: data collection and pre-processing, feature extraction with associated personality characteristics, and the classification model. The purpose of this paper is therefore to review the methods used in the various stages of a pattern recognition system and their achievements. </span>
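The stages the review enumerates (acquisition, pre-processing, feature extraction, classification) can be sketched as a skeleton pipeline. Every function, threshold, and feature below is an illustrative placeholder, not a method from the surveyed papers:

```python
def preprocess(image):
    # e.g. binarize a grayscale scan: ink pixels below a threshold become 1
    return [[1 if px < 128 else 0 for px in row] for row in image]

def extract_features(binary):
    # toy features: ink density and rows containing ink, stand-ins for
    # graphological cues such as pen pressure or baseline slant
    ink = sum(map(sum, binary))
    total = sum(len(row) for row in binary)
    rows_with_ink = sum(1 for row in binary if any(row))
    return {"density": ink / total, "active_rows": rows_with_ink}

def classify(features):
    # placeholder rule, not a validated personality model
    return "heavy-pressure" if features["density"] > 0.5 else "light-pressure"

sample = [[30, 200], [40, 50]]   # tiny 2x2 grayscale "scan"
feats = extract_features(preprocess(sample))
label = classify(feats)
```

In the surveyed systems, the classification stage is typically a trained model (e.g. SVM or neural network) rather than a hand-written rule, but the data flow between stages is the same.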