The K-nearest neighbor (KNN) classifier is one of the simplest and most common classifiers, yet its performance competes with the most complex classifiers in the literature. The core of this classifier depends mainly on measuring the distance or similarity between the tested examples and the training examples. This raises a major question: which distance measure should be used for the KNN classifier, given the large number of distance and similarity measures available? This review attempts to answer this question by evaluating the performance (measured by accuracy, precision, and recall) of KNN using a large number of distance measures, tested on a number of real-world data sets, with and without adding different levels of noise. The experimental results show that the performance of the KNN classifier depends significantly on the distance used, with large gaps between the performances of different distances. We found that a recently proposed nonconvex distance performed best on most data sets compared with the other tested distances. In addition, the performance of KNN with this top-performing distance degraded by only about 20% when the noise level reached 90%, and the same held for most of the other distances used. This means that the KNN classifier using any of the top 10 distances tolerates noise to a certain degree. Moreover, the results show that some distances are less affected by the added noise compared with other distances.
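The role of the distance measure in KNN can be illustrated with a minimal sketch: the metric is a pluggable function, and swapping it changes which training points count as "nearest". This is a generic illustration, not the review's experimental code.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, dist=None):
    """Classify x by majority vote among its k nearest training points.

    `dist` is any callable mapping two vectors to a non-negative number;
    swapping it changes the induced neighbourhood, which is exactly the
    design choice the review evaluates.
    """
    if dist is None:
        dist = lambda a, b: np.sqrt(np.sum((a - b) ** 2))  # Euclidean
    d = np.array([dist(x, xi) for xi in X_train])
    nearest = np.argsort(d)[:k]                  # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]             # majority label

# Two toy clusters: class 0 near the origin, class 1 near (5, 5).
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])

manhattan = lambda a, b: np.sum(np.abs(a - b))   # an alternative metric
print(knn_predict(X, y, np.array([0.1, 0.1]), k=3))                  # -> 0
print(knn_predict(X, y, np.array([5.0, 4.8]), k=3, dist=manhattan))  # -> 1
```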
Class imbalance occurs in classification problems in which the ''normal'' cases, or instances, significantly outnumber the ''abnormal'' instances. Training a standard classifier on imbalanced data leads to predictive biases that cause poor performance on the class(es) with lower prior probabilities. The less frequent classes are often critically important events, such as system failure or the occurrence of a rare disease. As a result, the class imbalance problem has been considered to be of great importance for many years. In this paper, we propose a novel algorithm, SMOTEFUNA, that utilizes the furthest neighbor of a candidate example to generate new synthetic samples. A key advantage of SMOTEFUNA over existing methods is that it has no parameters to tune (such as K in SMOTE). Thus, it is significantly easier to utilize in real-world applications. We evaluate the benefit of resampling with SMOTEFUNA against state-of-the-art methods, including SMOTE, ADASYN, and SWIM, using Naive Bayes and Support Vector Machine classifiers. We also provide a statistical analysis based on the Wilcoxon signed-rank test to validate the significance of the SMOTEFUNA results. The results indicate that the proposed method is an efficient alternative to the current methods. Specifically, SMOTEFUNA achieves better 5-fold cross-validated ROC and precision-recall space performance.
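The core idea, as described above, can be sketched as follows: for each new sample, pick a minority point, find its furthest minority neighbor, and interpolate between them. This is only an illustration of the furthest-neighbor principle under my own assumptions, not the authors' exact algorithm.

```python
import numpy as np

def furthest_neighbor_oversample(X_min, n_new, rng=None):
    """Illustrative furthest-neighbor oversampling (not the paper's exact code).

    For each synthetic sample: pick a random minority point, find its
    furthest minority neighbour, and draw a point uniformly on the segment
    between them. Unlike SMOTE, there is no neighbourhood size K to tune.
    """
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)  # distances to all minority points
        j = int(np.argmax(d))                         # furthest neighbour
        t = rng.uniform()                             # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + t * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# Three minority points; generate five synthetic samples between them.
X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X_new = furthest_neighbor_oversample(X_minority, n_new=5, rng=0)
print(X_new.shape)  # -> (5, 2)
```

Each synthetic point lies on a segment between two existing minority points, so the new samples stay inside the minority region.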
Predicting the compressive strength of cement-stabilized rammed earth (CSRE) using current testing machines is time-consuming and costly and may harm the environment due to the samples’ waste. This paper presents an automatic method using computer vision and deep learning to solve the problem. For this purpose, a deep convolutional neural network (DCNN) model is proposed, which was evaluated on a new in-house scanning electron microscope (SEM) image database containing 4284 images of materials with different compressive strengths. The experimental results show reasonable predictions compared to other traditional methods, achieving 84% prediction accuracy and a small (1.5) Root Mean Square Error (RMSE). This indicates that the proposed method (with some enhancements) can be used in practice for predicting the compressive strength of CSRE samples.
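For reference, the RMSE metric reported above is the square root of the mean squared prediction error. A minimal implementation, with hypothetical strength values used purely for illustration:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error: sqrt of the mean squared prediction error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical compressive-strength values (MPa), for illustration only.
print(rmse([10.0, 12.0, 14.0], [11.0, 12.0, 13.0]))  # -> ~0.8165
```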
Research on content-based image retrieval (CBIR) has been under development for decades, and numerous methods have competed to extract the most discriminative features for improved representation of image content. Recently, deep learning methods have gained attention in computer vision, including CBIR. In this paper, we present a comparative investigation of different features, including low-level and high-level features, for CBIR. We compare the performance of CBIR systems using different deep features with state-of-the-art low-level features such as SIFT, SURF, HOG, LBP, and LTP, using different dictionaries and coefficient learning techniques. Furthermore, we conduct comparisons with a set of primitive and popular features that have been used in this field, including colour histograms and Gabor features. We also investigate the discriminative power of deep features using certain similarity measures under different validation approaches. Furthermore, we investigate the effects of dimensionality reduction of deep features on the performance of CBIR systems using principal component analysis, discrete wavelet transform, and discrete cosine transform. Notably, the experimental results demonstrate high mean average precisions (95% and 93%) when using the VGG-16 FC7 deep features on the Corel-1000 and Coil-20 datasets, respectively.

Introduction

Given a set of images S and an input image i, the goal of a content-based image retrieval (CBIR) system is to search S and return the images most related/similar to i, based on their contents. This emergent field responds to an urgent need to search for an image based on its content, rather than typing text to describe the image content to be searched for. That is, CBIR systems allow users to conduct a query by image (QBI), and the system's task is to identify the images that are relevant to that image.
Prior to CBIR, the traditional means of searching for images was typing text describing the image content, known as query by text (QBT). However, QBT requires predefined image information, such as metadata, which necessitates human intervention to annotate images in order to describe their contents. This is infeasible, particularly with the emergence of big data; for example, Flickr creates approximately 3.6 TB of image data, while Google handles approximately 20,000 TB of data daily [1], which mostly comprise images and videos. Applications of CBIR are massive in terms of numbers and areas, which include, but are not limited to, medical image analysis [2], image mining [3][4][5], surveillance [6], biometrics [7], security [8][9][10], and remote sensing [11]. The key to the success of a CBIR system lies in extracting features from an image to define its content. These features are stored to describe each image, which is implemented automatically by the system, using specific algorithms developed for the extraction process. Similarly, a query process is conducted by extracting the same features from the query image to determine the most similar images from a feature dataset, ...
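The query process described above, matching a query image's features against a stored feature dataset, can be sketched generically with cosine similarity. The feature vectors here are random placeholders; in a real system they would be colour histograms, SIFT aggregates, or deep activations.

```python
import numpy as np

def retrieve(query_feat, db_feats, top=3):
    """Rank database images by cosine similarity of their feature vectors.

    `db_feats` is an (n_images, n_dims) array of features extracted offline;
    the query image is described with the same kind of features.
    """
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                    # cosine similarity to each database image
    return np.argsort(-sims)[:top]   # indices of the most similar images

rng = np.random.default_rng(0)
db = rng.random((10, 8))             # 10 placeholder image descriptors
query = db[4] + 0.01 * rng.random(8) # a near-duplicate of image 4
print(retrieve(query, db, top=3))    # image 4 ranks first
```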
ABSTRACT: When the face and all other body parts are covered, sometimes the only evidence available to identify a person is their hand geometry, and not even the whole hand, but only two fingers (the index and the middle finger) shown while making the victory sign, as seen in many terrorist videos. This paper investigates, for the first time, a new way to identify persons, particularly terrorists, from their victory sign. We have created a new database in this regard using a mobile phone camera, imaging the victory signs of 50 different persons over two sessions. Simple measurements of the fingers, in addition to the Hu moments of the finger areas, were used to extract the geometric features of the visible part of the hand after segmentation. The experimental results using the KNN classifier were encouraging for most of the recorded persons, with about 40% to 93% total identification accuracy, depending on the features, distance metric, and K used.
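Hu moments, used above as shape features for the segmented fingers, are moment invariants that are unchanged under translation (and, after normalisation, scale). A minimal sketch computing the first two invariants for a binary mask, assuming a simple nonzero-pixel definition of the region:

```python
import numpy as np

def hu_first_two(mask):
    """First two Hu moment invariants of a binary mask (illustrative sketch).

    h1 and h2 are translation- and scale-invariant shape descriptors of the
    kind used as geometric features for the segmented hand region.
    """
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                       # region area (zeroth moment)
    xc, yc = xs.mean(), ys.mean()       # centroid

    def mu(p, q):                       # central moment mu_pq
        return np.sum((xs - xc) ** p * (ys - yc) ** q)

    def eta(p, q):                      # scale-normalised central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

# A filled rectangle and a translated copy yield identical invariants.
a = np.zeros((20, 20)); a[2:8, 3:12] = 1
b = np.zeros((20, 20)); b[10:16, 5:14] = 1
print(np.allclose(hu_first_two(a), hu_first_two(b)))  # -> True
```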
Facial image retrieval is a challenging task, since faces have many similar features (areas), which makes it difficult for retrieval systems to distinguish the faces of different people. With the advent of deep learning, deep networks are often applied to extract powerful features that are used in many areas of computer vision. This paper investigates the application of different deep learning models for face image retrieval, namely Alexlayer6, Alexlayer7, VGG16layer6, VGG16layer7, VGG19layer6, and VGG19layer7, with two types of dictionary learning techniques, namely K-means and K-SVD. We also investigate some coefficient learning techniques, such as the Homotopy, Lasso, Elastic Net, and SSF, and their effect on the face retrieval system. The comparative results of the experiments conducted on three standard face image datasets show that the best performers for face image retrieval are Alexlayer7 with K-means and SSF, Alexlayer6 with K-SVD and SSF, and Alexlayer6 with K-means and SSF. The APR and ARR of these methods were further compared to some state-of-the-art methods based on local descriptors. The experimental results show that deep learning outperforms most of those methods and can therefore be recommended for practical face image retrieval.
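To give a flavour of the coefficient learning step mentioned above, here is a minimal ISTA solver for the lasso problem min_x 0.5·||Dx − y||² + λ·||x||₁, where D is the dictionary and x the sparse code. This is a generic sparse-coding sketch, not the specific Homotopy or SSF solvers the paper compares.

```python
import numpy as np

def ista(D, y, lam=0.01, n_iter=1000):
    """Minimal ISTA solver for the lasso: min_x 0.5*||Dx - y||^2 + lam*||x||_1.

    D is the dictionary (columns are atoms); x is the sparse code for y.
    Each step takes a gradient step on the smooth term, then soft-thresholds.
    """
    L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)             # gradient of 0.5*||Dx - y||^2
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Recover a 3-sparse code from a random dictionary (synthetic example).
rng = np.random.default_rng(1)
D = rng.standard_normal((30, 60))
x_true = np.zeros(60); x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = D @ x_true
x_hat = ista(D, y)
print(np.linalg.norm(D @ x_hat - y))      # residual shrinks toward zero
```

The soft-thresholding step is what produces sparse codes: small coefficients are zeroed out, so only a few dictionary atoms describe each face feature vector.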