ACKNOWLEDGEMENTS

All thanks to Almighty ALLAH, the creator and owner of this universe, the most merciful, beneficent and most gracious, who provided us with the guidance, strength and ability to complete this thesis. We are especially thankful to Dr. Jia Uddin, our thesis supervisor, for his help, guidance and support in the completion of our project. We are also thankful to the faculty and staff of Computer Science & Engineering at BRAC University, who have been a light of guidance for us throughout our study period, particularly in building our educational foundation and enhancing our knowledge. Finally, we would like to express our sincere gratitude to our beloved parents, brothers and sisters for their love, care and support. We are grateful to all of our friends who helped us, directly or indirectly, to complete our thesis.

CONTENTS

DECLARATION ........................ ii
ACKNOWLEDGEMENTS ........................ iii
CONTENTS ........................ iv
LIST OF FIGURES ........................ v
LIST OF TABLES ........................ vi
ABSTRACT ........................
The emergence of biometric-based authentication using modern sensors on electronic devices has led to an escalated use of face recognition technologies. While these technologies may seem intriguing, they are accompanied by numerous implicit drawbacks. In this paper, we look into the problem of face anti-spoofing (FAS) at the frame level in an attempt to mitigate the risks of face-spoofing attacks on biometric authentication processes. We employ a bi-directional feature pyramid network (BiFPN), used for convolutional multi-scale feature extraction in the EfficientDet detection architecture, which is novel to the task of FAS. We further use these convolutional multi-scale features to perform deep pixel-wise supervision. For all of our experiments, we performed evaluations across all major datasets and attained competitive results in the majority of cases. Additionally, we show that introducing an auxiliary self-supervision branch tasked with reconstructing the inputs in the frequency domain achieves an average classification error rate (ACER) of 2.92% on Protocol IV of the OULU-NPU dataset, which is significantly better than currently published work on pixel-wise face anti-spoofing. Moreover, following the procedures of prior works, we performed inter-dataset testing, which further consolidated the generalizability of the proposed models, as they performed consistently well across various sensors without any fine-tuning.
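The auxiliary frequency-domain reconstruction target mentioned above can be sketched roughly as follows. This is a minimal illustration only: the function name `frequency_target` and the specific log-magnitude/normalisation choices are assumptions for exposition, not the exact formulation used in the work.

```python
import numpy as np

def frequency_target(image: np.ndarray) -> np.ndarray:
    """Log-magnitude Fourier spectrum of an image patch, scaled to
    [0, 1] -- one plausible reconstruction target for an auxiliary
    frequency-domain self-supervision branch."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    magnitude = np.log1p(np.abs(spectrum))
    return magnitude / magnitude.max()

# Toy usage on a random 32x32 grayscale "face crop"
rng = np.random.default_rng(0)
target = frequency_target(rng.random((32, 32)))
```

In a training loop, the branch would regress such a target alongside the main pixel-wise supervision head, encouraging the network to attend to spectral artefacts typical of spoofed presentations.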
Improving interoperability in contactless-to-contact fingerprint matching is a crucial factor for the mainstream adoption of contactless fingerphoto devices. However, matching contactless probe images against legacy contact-based gallery images is very challenging due to the heterogeneity between these domains. Moreover, unconstrained acquisition of fingerphotos produces perspective distortion. Therefore, direct matching of fingerprint features suffers severe performance degradation in cross-domain interoperability. In this study, to address this issue, the authors propose a coupled adversarial learning framework to learn a fingerprint representation in a low-dimensional subspace that is discriminative and domain-invariant in nature. Using a conditional coupled generative adversarial network, the authors project both the contactless and the contact-based fingerprint into a latent subspace to explore the hidden relationship between them using a class-specific contrastive loss and an ArcFace loss. The ArcFace loss ensures intra-class compactness and inter-class separability, whereas the contrastive loss minimises the distance between the subspaces for the same finger. Experiments on four challenging datasets demonstrate that the proposed model outperforms state-of-the-art methods and two top-performing commercial off-the-shelf SDKs, namely Verifinger v12.0 and Innovatrics. In addition, the authors introduce a multi-finger score fusion network that significantly boosts interoperability by effectively utilising the multi-finger input of the same subject in both cross-domain and cross-sensor settings.
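The class-specific contrastive loss described above can be illustrated with a minimal sketch. The function name, the default margin of 1.0 and the squared-distance form are common conventions assumed here for illustration, not details taken from the paper.

```python
import numpy as np

def contrastive_loss(z1, z2, same_finger, margin=1.0):
    """Contrastive loss over a pair of embeddings: pulls same-finger
    pairs together, pushes different-finger pairs at least `margin`
    apart in the latent subspace."""
    d = np.linalg.norm(z1 - z2)
    if same_finger:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

# Toy embeddings: an identical pair and a pair 4.0 apart
z_a = np.array([1.0, 0.0])
z_b = np.array([1.0, 0.0])
z_c = np.array([5.0, 0.0])
```

A genuine (same-finger) pair that already coincides incurs zero loss, while an impostor pair farther apart than the margin also incurs zero loss; in the full framework this term would be combined with the ArcFace loss on class logits.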
Early diagnosis of rice disease is important because such disease poses a considerable threat to agricultural productivity and to global food security. Obtaining reliable outcomes from image processing based on the percentage of RGB values is challenging for rice disease detection and classification in the agricultural field. Machine learning, especially with a Convolutional Neural Network (CNN), is a powerful tool for overcoming this problem. However, deep learning techniques often necessitate high-performance computing devices, costly GPUs and extensive machine infrastructure, which significantly raises the overall expense for users. Therefore, the demand for smaller CNN models is particularly pronounced in embedded systems, robotics and mobile applications. These domains require real-time performance and minimal computational overhead, making smaller CNN models highly desirable due to their lower computational cost. This paper introduces a novel CNN architecture which is comparatively small in size and promising in performance, predicting rice leaf disease with moderate accuracy and lower time complexity. The CNN is trained on processed images. The image processing is performed using segmentation and k-means clustering to remove the background and the green parts of affected images. The proposed technique detects rice brown spot, rice bacterial blight and leaf smut with reliable classification outcomes. The model is trained using an augmented dataset of 2700 images (60% of the data) and validated with 1200 images of disease-affected samples to identify rice disease in agricultural fields. The model is tested with 630 images (14% of the data), achieving a testing accuracy of 97.9%. The model is exported into a mobile application to demonstrate a real-life application of this work. The model accuracy is compared against other works on this type of problem.
It is found that the performance of the model and the application is satisfactory compared to related works. The overall accuracy is notable, demonstrating the reliability and dependability of this model for classifying rice leaf diseases.
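The k-means preprocessing step described in the abstract, which removes the background and healthy green tissue so that the classifier sees mainly lesion pixels, can be sketched roughly as follows. The function name `segment_lesions`, the deterministic initialisation and the "least green centre" heuristic are illustrative assumptions, not the thesis's exact pipeline.

```python
import numpy as np

def segment_lesions(image, k=3, iters=10):
    """Toy k-means over RGB pixels; keeps the cluster whose centre is
    least green, as a rough stand-in for isolating brown lesions from
    green leaf tissue and background."""
    pixels = image.reshape(-1, 3).astype(float)
    # Deterministic init: k pixels evenly spaced through the image
    centres = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then recompute centres
        labels = np.argmin(
            ((pixels[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(0)
    # "Greenness" of each centre: G channel minus the mean of R and B
    greenness = centres[:, 1] - centres[:, [0, 2]].mean(1)
    keep = int(np.argmin(greenness))
    return (labels == keep).reshape(image.shape[:2])

# Synthetic 4x4 leaf patch: left half green tissue, right half brown lesion
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = (0, 200, 0)
img[:, 2:] = (150, 75, 0)
mask = segment_lesions(img, k=2)
```

In practice a library implementation such as scikit-learn's `KMeans` would replace this hand-rolled loop, but the core idea is the same: cluster in colour space, then discard clusters corresponding to background and healthy foliage before the CNN sees the image.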