Abstract: Biometric systems use scanners to verify the identity of human beings by measuring patterns in their behavioral or physiological characteristics. Some biometric systems are contactless and do not require direct touch to perform these measurements; others, such as fingerprint verification systems, require the user to make direct physical contact with the scanner for a specified duration so that the user's biometric pattern can be properly read and measured. This may increase the possibility of contamination…
“…Both medical images and doctor-patient dialogue are tools for physicians to know their patients. For patients with a certain type of disease, their disease characteristics are statistical in nature [44], [45], [46]. For example, as shown in Fig.…”
The sudden increase in coronavirus disease 2019 (COVID-19) cases has put high pressure on healthcare services worldwide. At this stage, fast, accurate, and early clinical assessment of disease severity is vital. In general, there are two issues to overcome: (1) current deep learning-based works suffer from multimodal data adequacy issues; (2) in this scenario, multimodal (e.g., text, image) information should be taken into account together to make accurate inferences. To address these challenges, we propose a multi-modal knowledge graph attention embedding for COVID-19 diagnosis. Our method not only learns the relational embedding of nodes in a constituted knowledge graph but also has access to medical knowledge, aiming to improve the performance of the classifier through a medical knowledge attention mechanism. The experimental results show that our approach significantly improves classification performance compared to other state-of-the-art techniques and is robust to each modality of the multi-modal data. Moreover, we construct a new COVID-19 multi-modal dataset based on text mining, consisting of 1393 doctor-patient dialogues and their 3706 images (347 X-ray, 2598 CT, 761 ultrasound) for COVID-19 patients, 607 non-COVID-19 patient dialogues and their 10754 images (9658 X-ray, 494 CT, 761 ultrasound), and fine-grained labels for all of them. We hope this work can provide insights to researchers working in this area and shift attention from medical images alone to doctor-patient dialogue together with its corresponding medical images.
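The abstract above does not give implementation details of the medical knowledge attention mechanism. As an illustrative sketch only (the function name, NumPy usage, and dot-product scoring are assumptions, not the authors' code), attention pooling over knowledge-graph node embeddings can be written as a softmax-weighted sum, where the weights measure each node's relevance to a medical-knowledge query vector:

```python
import numpy as np

def knowledge_attention(node_embs: np.ndarray, knowledge_vec: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: attend over knowledge-graph node embeddings.

    node_embs: (n_nodes, dim) relational embeddings of graph nodes.
    knowledge_vec: (dim,) embedding of the relevant medical knowledge.
    Returns the (dim,) attention-pooled summary embedding.
    """
    scores = node_embs @ knowledge_vec            # (n_nodes,) relevance scores
    scores = scores - scores.max()                # stabilize the softmax
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ node_embs                    # weighted sum of embeddings
```

In a full model, such a pooled embedding would typically be concatenated with text and image features before classification; the scoring function here (a plain dot product) stands in for whatever learned compatibility function the paper actually uses.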
“…Damer et al. [7] studied the effect of wearing a face mask on face recognition systems and found a significant drop in recognition accuracy when the subject is wearing a mask. Fingerprint authentication could be a solution, but it requires touching the surface of the scanner, which may increase the possibility of contamination and the spread of infectious diseases [18]. Hence, in the critical scenario of COVID-19, this solution is not acceptable when every automated system needs to work contactlessly.…”
With the onset of the COVID-19 pandemic, wearing a face mask became essential, and the facial occlusion created by masks deteriorated the performance of face biometric systems. In this situation, the use of the periocular region (the region around the eye) as a biometric trait for authentication is gaining attention, since it is the most visible region when masks are worn. One important issue in periocular biometrics is identifying an optimal-size periocular ROI that contains enough features for authentication. State-of-the-art ROI extraction algorithms use a fixed-size rectangular ROI calculated from reference points such as the center of the iris or the center of the eye, without considering the shape of an individual's periocular region. This paper proposes a novel approach to extract optimum-size periocular ROIs of two different shapes (polygonal and rectangular) using five reference points (the inner and outer canthus points, and the two endpoints and the midpoint of the eyebrow) in order to accommodate the complete shape of an individual's periocular region. Performance analysis on the UBIPr database using CNN models validated that both proposed ROIs contain enough information to identify a person wearing a face mask.
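The abstract describes deriving two ROI shapes from five landmark points but does not give the construction. As a minimal sketch under assumptions (the function name, the tight min/max bounding rectangle, and the centroid-angle vertex ordering are illustrative choices, not the paper's exact method), both shapes can be derived from the landmarks like this:

```python
import numpy as np

def periocular_rois(landmarks: np.ndarray):
    """Hypothetical sketch: build the two ROI shapes from five periocular
    landmarks (inner canthus, outer canthus, two eyebrow endpoints, and
    the eyebrow midpoint).

    landmarks: (5, 2) array of (x, y) coordinates.
    Returns (rect, polygon): the tight bounding rectangle (x0, y0, x1, y1)
    and the landmark polygon with vertices ordered counter-clockwise
    around the centroid, ready to be filled as a mask downstream.
    """
    x0, y0 = landmarks.min(axis=0)                # rectangular ROI: tight
    x1, y1 = landmarks.max(axis=0)                # min/max bounding box
    rect = (float(x0), float(y0), float(x1), float(y1))

    centroid = landmarks.mean(axis=0)             # order vertices by angle
    angles = np.arctan2(landmarks[:, 1] - centroid[1],
                        landmarks[:, 0] - centroid[0])
    polygon = landmarks[np.argsort(angles)]       # CCW polygonal ROI
    return rect, polygon
```

In practice the returned polygon would be rasterized into a binary mask (e.g., with an image library's polygon-fill routine) and both crops fed to the CNN; the paper's actual ROI may additionally pad or scale these regions.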
“…Especially in multi-user applications, hygienic concerns lower the acceptability of contact-based fingerprint systems and hence limit their deployment. In a comprehensive study, Okereafor et al. [1] analyzed the risk of infection posed by contact-based fingerprint recognition schemes and the hygienic concerns of their users. The authors concluded that contact-based fingerprint recognition carries a high risk of infection if a previous user has contaminated the capturing device's surface, e.g., with the SARS-CoV-2 virus.…”
This work presents an automated contactless fingerprint recognition system for smartphones. We provide a comprehensive description of the entire recognition pipeline and discuss important requirements for a fully automated capturing system. In addition, our implementation is made publicly available for research purposes. For the database acquisition, a total of 1360 contactless and contact-based samples from 29 subjects were captured in two different environmental settings. Experiments on the acquired database show comparable performance between our contactless scheme and the contact-based baseline scheme under constrained environmental influences. A comparative usability study of both capturing device types indicates that the majority of subjects prefer the contactless capturing method. Based on our experimental results, we analyze the impact of the current COVID-19 pandemic on fingerprint recognition systems. Finally, implementation aspects of contactless fingerprint recognition are summarized.