Human recognition systems based on biometrics are in high demand due to increasing concerns over security and privacy. The human ear is unique and well suited for recognition, offering numerous advantages over popular biometric traits such as the face, iris, and fingerprint. A considerable amount of work has been devoted to ear biometrics, and existing methods have achieved remarkable success on constrained databases. In unconstrained environments, however, recognition becomes significantly more difficult because the images are subject to various challenges. In this paper, we first provide a comprehensive survey of ear biometrics using a novel taxonomy. The survey includes in-depth details of databases, performance evaluation parameters, and existing approaches. We introduce a new database, NITJEW, for the evaluation of unconstrained ear detection and recognition. Modified deep learning models, Faster-RCNN and VGG-19, are used for the ear detection and ear recognition tasks, respectively. A benchmark comparative assessment of our database against six existing popular databases is performed. Lastly, we provide insight into open research problems worth examining in the near future. We hope that our work will be a stepping stone for new researchers in ear biometrics and will aid further development of the field.
Recently, there has been an emerging research trend toward recognizing handwritten characters and numerals in many Indian languages and scripts. In this manuscript, we address the recognition of handwritten Gurmukhi numerals using three different feature sets. The first feature set comprises distance profiles, giving 128 features. The second comprises different types of projection histograms, giving 190 features. The third comprises zonal density and Background Directional Distribution (BDD), giving 144 features. An SVM classifier with an RBF (Radial Basis Function) kernel is used for classification. We obtained a 5-fold cross-validation accuracy of 99.2% using the second feature set of 190 projection histogram features; the third and first feature sets yield recognition rates of 99.13% and 98%, respectively. Pre-processing steps of noise removal and normalization before feature extraction are recommended for better results and are also applied in our approach.
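The classification stage described above (RBF-kernel SVM evaluated with 5-fold cross-validation) can be sketched with scikit-learn. The feature matrix below is synthetic stand-in data shaped like the second feature set (190 projection-histogram features, 10 numeral classes); the real features come from the pre-processed numeral images, and the feature scaling step is a standard choice assumed here, not stated in the abstract.

```python
# Hedged sketch: RBF-SVM with 5-fold cross-validation, as in the abstract.
# X and y are random stand-ins for the 190-dimensional projection-histogram
# features and Gurmukhi digit labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 500, 190, 10
X = rng.random((n_samples, n_features))      # stand-in feature vectors
y = rng.integers(0, n_classes, n_samples)    # stand-in digit labels (0-9)

# Scaling before the RBF kernel is assumed; SVC defaults otherwise.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)    # one accuracy per fold
print(len(scores))  # 5
```

On random data the fold accuracies sit near chance; on the real histogram features the paper reports a mean of 99.2%.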
The ever-growing world population needs more food production every year, and crop losses caused by weeds are a major concern for the coming years. This issue has attracted the attention of many researchers working in agriculture, and there have been many attempts to address it using image classification techniques. These techniques appeal to researchers because they can reduce the use of herbicides for controlling weed invasion and cut the time required for weed control. This article presents an image-based, deep learning approach to classifying weeds and crops: five pre-trained convolutional neural networks (CNNs), namely ResNet50, VGG16, VGG19, Xception, and MobileNetV2, are used to classify weed and crop images into their respective classes. The experiments were conducted on the V2 Plant Seedlings classification dataset. Among these five models, ResNet50 gave the best results, with 95.23% testing accuracy.