Modern image recognition models have millions of parameters, require large amounts of training data, and demand energy-hungry computing power, which makes them inefficient for everyday use. Machine learning has shifted the computing paradigm from complex calculations that require high computational power toward environmentally friendly technologies that can efficiently meet daily needs. To obtain the best training model, many studies use very large datasets; however, the complexity of large datasets demands large devices and high computing power, so large computational resources offer little flexibility for human interaction that prioritizes efficiency and effectiveness in computer vision. This study uses the Convolutional Neural Network (CNN) method with the MobileNet architecture for image recognition on mobile and embedded devices with limited resources and ARM-based CPUs, working with a moderate amount of training data (thousands of labeled images). As a result, the MobileNet v1 architecture on the ms8pro device classifies the Caltech101 dataset with 92.4% accuracy at a power draw of 2.1 W. With this level of accuracy and resource efficiency, the MobileNet architecture is expected to change the machine learning paradigm toward greater flexibility for human interaction that prioritizes the efficiency and effectiveness of computer vision.
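As a rough illustration of the transfer-learning setup described above, the following sketch fine-tunes a Keras MobileNet (v1) backbone on a folder of labeled images; the directory layout, image size, and training hyperparameters are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact pipeline): fine-tuning MobileNet v1
# with Keras on a folder of labeled images such as Caltech101.
# Paths, image size, and hyperparameters are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 101  # Caltech101 object categories

train_ds = tf.keras.utils.image_dataset_from_directory(
    "caltech101/train", image_size=IMG_SIZE, batch_size=32)  # assumed layout

base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the depthwise-separable backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Freezing the backbone and training only the classifier head keeps the fine-tuning step cheap enough for the resource-constrained setting the abstract targets.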
Data analysis to identify attacks and anomalies is a crucial task in anomaly detection, and network anomaly detection itself is an important issue in network security. Researchers have developed methods and algorithms to improve anomaly detection systems, and survey papers on anomaly detection research are already available. Nevertheless, this paper attempts a further analysis and provides an alternative taxonomy of anomaly detection research, focusing on methods, types of anomalies, data repositories, outlier identity, and the most commonly used data types. In addition, this paper summarizes information on the application network categories of the existing studies.
Classification of facial expressions has become an essential part of computer systems and fast human-computer interaction. It is employed in various applications such as digital entertainment, customer service, driver monitoring, and emotional robots. It has also been studied with respect to the face itself, since facial expressions change with the point of view or perspective: facial curves such as the eyebrows, nose, lips, and mouth change automatically. Most of the proposed methods are limited to frontal Facial Expression Recognition (FER), and their performance decreases when handling non-frontal and multi-view FER cases. This study combined two methods for the classification of facial expressions, namely Principal Component Analysis (PCA) and a Convolutional Neural Network (CNN). The results proved more accurate than those of previous studies: the combination of PCA and CNN on the Static Facial Expressions in The Wild (SFEW) 2.0 dataset obtained an accuracy of 70.4%, whereas the CNN method alone obtained only 60.9%.
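The abstract does not specify how PCA and the CNN are combined, so the sketch below shows one plausible arrangement: PCA projects flattened face crops onto the leading components and reconstructs a denoised image, which is then fed to a small CNN classifier. The image size, component count, and network layout are assumptions, not the paper's reported architecture.

```python
# Hedged sketch of one way to combine PCA with a CNN for expression
# classification; the paper's exact pipeline may differ. Assumes grayscale
# face crops X of shape (n, 48, 48) and integer labels y (7 expressions).
import numpy as np
from sklearn.decomposition import PCA
import tensorflow as tf

def pca_denoise(X, n_components=100):
    """Project faces onto the top principal components and reconstruct."""
    flat = X.reshape(len(X), -1)
    pca = PCA(n_components=n_components).fit(flat)
    recon = pca.inverse_transform(pca.transform(flat))
    return recon.reshape(X.shape), pca

X = np.random.rand(500, 48, 48).astype("float32")   # placeholder face crops
y = np.random.randint(0, 7, size=500)                # placeholder labels

X_pca, _ = pca_denoise(X)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_pca[..., np.newaxis], y, epochs=3, batch_size=32)
```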
Handwriting analysis has a wide scope, including recruitment, medical diagnosis, forensics, psychology, and human-computer interaction. Computerized handwriting analysis makes it easier to recognize human personality and can help graphologists understand and identify it. Features of handwriting are used as input to classify a person's personality traits. This paper discusses handwriting analysis from a pattern recognition point of view, in which the different stages are described: data collection and pre-processing techniques, feature extraction with associated personality characteristics, and the classification model. The purpose of this paper is therefore to present a review of the methods used in the various stages of a pattern recognition system and their achievements.
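For orientation, the following generic sketch mirrors the three stages the review covers (pre-processing, feature extraction, classification); the toy features and the SVM classifier are illustrative placeholders, not methods surveyed in the paper.

```python
# Generic pattern-recognition pipeline sketch: pre-processing, feature
# extraction, classification. Features and classifier are illustrative only.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(images):
    """Toy handwriting features: ink density and stroke-intensity statistics."""
    flat = images.reshape(len(images), -1)
    return np.column_stack([flat.mean(axis=1),            # ink density
                            flat.std(axis=1),             # pressure variation proxy
                            (flat > 0.5).mean(axis=1)])   # dark-pixel ratio

images = np.random.rand(200, 64, 64)          # placeholder scanned samples
traits = np.random.randint(0, 2, size=200)    # placeholder personality labels

clf = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
clf.fit(extract_features(images), traits)
print("training accuracy:", clf.score(extract_features(images), traits))
```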
Fake news is false information that looks true. Such news can also act as a political weapon, spread deliberately to achieve a certain goal even though its truth cannot be verified. Classifying news texts requires a method that computes a value for each word in a document, so the number of data dimensions equals the number of words: the more words a document contains, the higher the dimensionality of each data point. This high dimensionality makes the model-building (training) process slow and also makes it harder to assess document similarity. The dataset used in this study comprises 20,000 records with 17 attributes. This study applies a Random Forest Classifier (RFC), a Support Vector Machine (SVM), and Logistic Regression (LR) to the high-dimensional data, and its result is a comparison of the accuracy values obtained by each method.
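A minimal sketch of the comparison described above, assuming a TF-IDF vectorization (one dimension per word) feeding the paper's three classifiers; the file name and column names are hypothetical.

```python
# Hedged sketch: per-word TF-IDF features compared across RFC, SVM, and LR.
# Dataset path and column names ('text', 'label') are illustrative assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("fake_news.csv")                       # assumed input file
X = TfidfVectorizer(max_features=20000).fit_transform(df["text"])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, df["label"], test_size=0.2, random_state=42)

for name, clf in [("RFC", RandomForestClassifier()),
                  ("SVM", LinearSVC()),
                  ("LR", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```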
The high number of credit card frauds causes heavy losses for both users and credit service providers. Because credit card transactions happen very quickly, fraud must be detected as early as possible. Another challenge that is no less important is the imbalance between valid and invalid transactions in the data. One solution to this imbalance is a resampling method that improves the quantity of data so that good accuracy can be achieved. In this study, three resampling methods were implemented: SMOTE, bootstrap, and jackknife. To validate the success of the resampling methods, three machine learning methods were used: SVM, ANN, and random forest. The tests showed that the combination of SMOTE resampling and random forest produced the best performance, with accuracy, precision, recall, and F1-score of 99.95%, 81.63%, 90.91%, and 86.02%, respectively.
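The best-performing combination reported above can be sketched with the imbalanced-learn implementation of SMOTE; the synthetic data below stands in for the study's transaction records, and all hyperparameters are defaults rather than the authors' settings.

```python
# Hedged sketch: SMOTE oversampling followed by a random forest, the
# combination reported as best above. Data is synthetic, not the study's.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic imbalanced data standing in for valid vs. fraudulent transactions.
X, y = make_classification(n_samples=10000, weights=[0.99, 0.01], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Oversample only the training split so the test set keeps the real imbalance.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=42).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te), digits=4))
```

Resampling only the training split is the usual precaution here: evaluating on an artificially balanced test set would overstate recall on the rare fraud class.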
A research instrument is used to collect data or measure the object of a study. The purpose of this study was to determine which instruments are valid and reliable and to identify the variables with the highest validity and reliability values. Validity testing is carried out to determine the effectiveness of an instrument, while reliability testing shows the level of reliability of the indicators used. Validity and reliability were tested using SmartPLS software version 3.3.2 with a Likert measurement scale. Validity is assessed by examining the average variance extracted (AVE) value and comparing the latent variable correlation values, while reliability is assessed by examining the composite reliability value. The population in this study consisted of the 19,260 active students at the State Islamic University (UIN) Raden Fatah Palembang, with the sample size determined using the Slovin formula at a 5% level of significance. Data were collected by distributing online questionnaires. The questionnaire was constructed from the indicators of the models used, namely UTAUT 2 and EUCS. The UTAUT 2 model measures the level of user acceptance of the system and consists of the variables performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, perceived value, habit, and behavioral intention. The EUCS model measures the level of user satisfaction and consists of the variables content, accuracy, format, ease of use, timeliness, and user satisfaction. The results of the validity and reliability testing show that all indicators are valid and reliable, with AVE values > 0.50 in the validity test and composite reliability values > 0.70 in the reliability test. The highest validity value is found in the ease of use variable, with an AVE of 0.826, and the highest reliability value is found in the performance expectancy variable, with a composite reliability of 0.924. This research is expected to yield model variables for evaluating user acceptance of and satisfaction with academic information systems.
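For readers unfamiliar with the sampling step, the following is a worked illustration assuming the standard form of Slovin's formula with the stated population and 5% margin of error; the abstract itself does not report the resulting sample size.

```latex
% Worked illustration of the sample-size step, assuming the standard form of
% Slovin's formula with N = 19,260 and margin of error e = 5%.
\[
  n = \frac{N}{1 + N e^{2}}
    = \frac{19{,}260}{1 + 19{,}260 \times (0.05)^{2}}
    = \frac{19{,}260}{49.15}
    \approx 392 \text{ respondents}
\]
```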
This study aims to determine the effectiveness of using a STEM-based physics mobile learning app as a learning resource for students in Indonesia, measured by learning outcomes. The method used is experimental, and the experimental results are described with a statistical analysis technique, namely N-Gain. The research was conducted at SMAN 1, Air Sugihan, Ogan Komering Ilir Regency. The analysis of the data reveals that the improvement in learning outcomes in the experimental class compared to the control class provides evidence for using STEM-based high school physics learning applications as a learning resource for teachers and students. The experimental class's average post-test score was 81.1, while the control class's average post-test score was 72.22.
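The N-Gain technique mentioned above is conventionally computed as the normalized gain; its standard (Hake) form is shown below, noting that the pre-test scores needed to evaluate it are not reported in this abstract.

```latex
% Standard normalized-gain (N-Gain) formula; pre-test scores are not
% reported in the abstract, so no numerical value is computed here.
\[
  \langle g \rangle = \frac{S_{\text{post}} - S_{\text{pre}}}{S_{\text{max}} - S_{\text{pre}}}
\]
```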