Adoption of mobile phones and mobile applications based on the Android operating system is increasing rapidly. Many companies and emerging startups are carrying out digital transformation, using mobile applications to replace existing services with disruptive digital ones. This transformation has prompted attackers to create malicious software (malware) using sophisticated methods to target Android users. The purpose of this study is to identify malicious Android APK files by classifying them with an Artificial Neural Network (ANN) and with Non-Neural-Network (NNN) methods. The ANN is the Multi-Layer Perceptron Classifier (MLPC), while the NNN methods are KNN, SVM, Decision Tree, Logistic Regression, and Naïve Bayes. The results show that the NNN methods lose accuracy when trained on larger datasets: K-Nearest Neighbor achieves 91.2% accuracy on a dataset of 600 APKs but 88% on 14,170 APKs; Support Vector Machine achieves 99.1% on 600 APKs but 90.5% on 14,170; and Decision Tree achieves 99.2% on 600 APKs but 90.8% on 14,170. The Multi-Layer Perceptron Classifier, by contrast, improves with dataset size, reaching 99% on 600 APKs, 100% on 7,000 APKs, and 100% on 14,170 APKs.
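The comparison described above can be sketched with scikit-learn. This is a minimal illustration, not the paper's pipeline: the synthetic feature vectors (standing in for APK-derived features such as permission flags), the dataset size, and all hyperparameters are assumptions.

```python
# Hedged sketch: comparing the five NNN classifiers and the MLPC on a
# synthetic stand-in for APK feature vectors. Features and settings are
# illustrative assumptions, not the study's actual data or configuration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# 600 synthetic "APKs" with 50 binary-ish features each
X, y = make_classification(n_samples=600, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "MLPC": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

On real APK data, the features would come from static or dynamic analysis of the APKs rather than from `make_classification`.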
Automated identification of humans by their walking behavior is a challenge attracting much interest among machine vision researchers. However, practical systems for such identification remain to be developed. In this study, a machine learning approach to understanding human behavior from motion imagery was proposed as the basis for developing pedestrian safety information systems. At the front end, image and video processing was performed to separate foreground from background images. Shape-width was then analyzed using the 2D discrete wavelet transform and the 2D fast Fourier transform to extract human motion features. Finally, an adaptive boosting (AdaBoost) algorithm was applied to classify human gender and age based on spatiotemporal information. The results demonstrated the capability of the proposed system to classify gender and age with high accuracy.
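The feature-extraction step described above can be sketched as follows. This is an illustrative assumption-laden toy, not the paper's method: the shape-width matrices are random, the one-level Haar average stands in for the 2D discrete wavelet transform, and the labels are synthetic.

```python
# Hedged sketch: 2D FFT + a hand-written one-level Haar approximation over a
# synthetic shape-width matrix (silhouette width per row, over frames),
# followed by AdaBoost classification. All data here is a toy assumption.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def motion_features(widths):
    """widths: (frames, rows) matrix of silhouette widths -> flat features."""
    spectrum = np.abs(np.fft.fft2(widths))            # 2D FFT magnitudes
    # one-level 2D Haar DWT approximation band, written out by hand
    a = (widths[0::2, :] + widths[1::2, :]) / 2.0     # average frame pairs
    approx = (a[:, 0::2] + a[:, 1::2]) / 2.0          # average row pairs
    return np.concatenate([spectrum[:4, :4].ravel(), approx[:4, :4].ravel()])

# toy dataset: 40 "walkers", each a 32x32 width matrix, binary labels
X = np.stack([motion_features(rng.random((32, 32))) for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

With real gait data, the low-frequency FFT and wavelet coefficients capture periodic motion structure that separates the gender and age classes.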
Deep Learning has become very effective at detecting objects; one application is detecting vehicle number plates. This method can be applied in Computer Vision to process images using the DenseNet121, NASNetLarge, VGG16, and VGG19 architectures. The most basic difference between Machine Learning and Deep Learning is the inclusion of hidden layers: Deep Learning processes input through layers of neurons through to the output, and feature extraction is performed directly within the network. In terms of time, training Deep Learning models takes much longer than Machine Learning. The dataset comes from Kaggle, and training is carried out with the four Deep Learning models, each producing a trained model. There are differences in how the training process is conducted. Before the training process, a preparation step is applied to the image dataset, which is divided into two parts: a training dataset and a testing dataset. After training is complete, the testing process follows and the accuracy of each model is measured. The accuracies of the four Deep Learning models are also presented.
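The preparation and splitting step described above can be sketched as follows. This is a toy illustration under stated assumptions: the random arrays stand in for number-plate photos, and the image size and 80/20 split ratio are assumptions, not values from the paper.

```python
# Hedged sketch of dataset preparation: normalize a toy "image dataset"
# and split it into training and testing subsets before model training.
# Image shapes, labels, and the split ratio are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=100)

# scale pixel values to [0, 1], as pretrained CNNs typically expect
images = images.astype(np.float32) / 255.0

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape)
```

The resulting training split would then be fed to each of the four architectures, and the held-out test split used only for the final accuracy measurement.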
Automated human identification from walking behavior is a challenge attracting much interest among machine vision researchers. However, systems able to detect pedestrian attributes based on walking behavior remain to be developed. Here, a soft computing approach to determining walking behavior from motion imagery is studied as the basis for developing pedestrian safety information systems. Gender and age are classified based on motion patterns derived in experiments. At the front end, image and video processing was performed to separate foreground from background images. Silhouette widths were analyzed using the two-dimensional (2D) Fourier transform to extract human motion features. Feature subsets were then selected to find salient, effective classification features. Finally, Choquet integral agent networks (CHIAN) with a competitive learning algorithm were employed to classify gender and age into their respective classes. The experimental results demonstrated the capability of the proposed system to classify gender and age with high accuracy.
The food industry is undergoing a phase of strong improvement, with business actors experiencing very rapid growth and many creative ideas appearing on social media. As online business grows rapidly, many managers in the food sector market their products through online media, making it easy for customers to place orders from their phones. During the COVID-19 pandemic in particular, when bans on gatherings became a government recommendation, many food businesses moved their sales online; since then, almost all food industry players sell online. Doing business online has many advantages: food is presented as attractive images to market visitors, creating its own appeal; an order is just a click away; and delivery removes the need to queue. After an order arrives, the customer reviews the food or drink, and these customer reviews serve as customer ratings. The reviews are one input to sentiment analysis, which in this study takes the form of reviews of the images available on the marketplace display. The method used is a Convolutional Neural Network: features are extracted from the dataset and then classified. The research compares VGG19, ResNet50, and Inception-V3, which achieve accuracies of 96.86%, 97.29%, and 97.57%, respectively.
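The core operation behind all three compared networks can be sketched in plain numpy. This is an illustrative assumption, not the study's code: a single convolution-plus-ReLU pass over a toy grayscale "food image", the building block that VGG19, ResNet50, and Inception-V3 stack many times.

```python
# Hedged sketch: one 2D convolution (cross-correlation) with a ReLU, the
# basic feature-extraction step of a CNN. Image and kernel are toy values.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((8, 8))                    # toy grayscale image
edge_kernel = np.array([[1, 0, -1],
                       [1, 0, -1],
                       [1, 0, -1]])           # simple vertical-edge filter
features = np.maximum(conv2d(image, edge_kernel), 0)  # ReLU activation
print(features.shape)
```

In the real networks, such kernels are learned rather than hand-set, and hundreds of them are applied per layer to build up the features that the final classifier uses.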
Object recognition in images is one of the ongoing problems in the world of computer vision. Various approaches have been developed to address it, and end-to-end object detection is a relatively new one. End-to-end object detection uses CNN and Transformer architectures to learn object information directly from the image and can produce very good detection results. In this research, we implemented ResNet-50 in an end-to-end object detection (DETR) system to improve object detection performance in images. ResNet-50 is a CNN architecture well known for its effectiveness in image recognition tasks, while DETR uses Transformers to learn object representations directly from images. We evaluated our system on the COCO dataset and showed that ResNet-50 + DETR achieves better accuracy than DETR models that do not use ResNet-50. In addition, ResNet-50 + DETR detects objects more quickly than comparable traditional CNN models. Our results show that using ResNet-50 in the DETR system can improve object detection performance in images by about 90%. We also show that using ResNet-50 in DETR systems can improve detection speed, a great advantage in real-time applications. We hope these results contribute to the development of object detection technology in the world of computer vision.
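The Transformer step at the heart of DETR can be sketched as scaled dot-product attention over backbone feature tokens. This is a minimal illustration under stated assumptions: the token count, dimensions, and random values stand in for the feature map a backbone such as ResNet-50 would produce and for DETR's learned object queries.

```python
# Hedged sketch: scaled dot-product attention between DETR-style object
# queries and backbone feature tokens. All shapes and values are toy
# assumptions for illustration, not the actual DETR configuration.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[1])             # similarity scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.random((10, 16))    # 10 backbone feature tokens, dim 16
queries = rng.random((5, 16))    # 5 object queries (DETR-style)
out, w = scaled_dot_product_attention(queries, tokens, tokens)
print(out.shape)
```

In DETR itself, each object query attends over the full image feature map through many such layers, and each attended output is decoded into one box-and-class prediction.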