Breast cancer is a leading cause of mortality in women, and early diagnosis can reduce the mortality rate. In diagnosis, the mitotic cell count is an important biomarker for predicting the aggressiveness, prognosis, and grade of breast cancer. Pathologists generally detect mitotic cells by manually examining histopathology images under high-resolution microscopes; however, because of the minute differences between mitotic and normal cells, this process is tiresome, time-consuming, and subjective. To overcome these challenges, artificial-intelligence-based (AI-based) techniques have been developed that automatically detect mitotic cells in histopathology images. Such AI techniques accelerate diagnosis and can serve as a second-opinion system for medical doctors. Early work relied on conventional image-processing techniques, which suffer from low accuracy and high computational cost. More recently, deep-learning techniques with strong performance and low computational cost have been developed; however, they still require improvement in accuracy and reliability. We therefore present a multistage mitotic-cell-detection method based on the faster region-based convolutional neural network (Faster R-CNN) and deep CNNs. Two open breast cancer histopathology datasets, International Conference on Pattern Recognition (ICPR) 2012 and ICPR 2014 (MITOS-ATYPIA-14), were used in our experiments. Our method achieved state-of-the-art results: 0.876 precision, 0.841 recall, and 0.858 F1-measure on the ICPR 2012 dataset, and 0.848 precision, 0.583 recall, and 0.691 F1-measure on the ICPR 2014 dataset, higher than those obtained by previous methods.
Moreover, we evaluated the generalization capability of our technique in a cross-dataset experiment on the tumor proliferation assessment challenge 2016 (TUPAC16) dataset, where it also performed well.
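The reported precision/recall/F1 triples are internally consistent: the F1-measure is the harmonic mean of precision and recall, which can be checked directly. This is a generic verification of the metric, not code from the paper:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported ICPR 2012 results: precision 0.876, recall 0.841
print(round(f1_score(0.876, 0.841), 3))  # 0.858
# Reported ICPR 2014 results: precision 0.848, recall 0.583
print(round(f1_score(0.848, 0.583), 3))  # 0.691
```

Both rounded values match the F1-measures quoted in the abstract.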
Conventional finger-vein recognition systems perform recognition based either on finger-vein lines extracted from the input images or on texture features extracted after image enhancement. In the former case, inaccurate detection of finger-vein lines lowers the recognition accuracy; in the latter, the developer must experimentally choose an optimal filter for feature extraction based on the characteristics of the image database. To address these problems, this research proposes a finger-vein recognition method based on a convolutional neural network (CNN) that is robust to different database types and environmental changes. In experiments using two finger-vein databases constructed in this research and the open SDUMLA-HMT finger-vein database, the proposed method outperformed conventional methods.
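To illustrate why handcrafted texture features require the hand-tuning described above (which the learned CNN features avoid), here is a minimal sketch of one classic handcrafted texture descriptor, the 8-neighbour local binary pattern. It is an illustrative example of the texture-feature family, not the paper's method:

```python
def lbp_3x3(img, y, x):
    """Basic 8-neighbour local binary pattern (LBP) code for pixel (y, x):
    each neighbour at least as bright as the centre sets one bit."""
    center = img[y][x]
    # Clockwise neighbour offsets starting from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

# Tiny toy "image": a bright vertical line (a crude vein) on a dark background.
img = [[0, 9, 0],
       [0, 9, 0],
       [0, 9, 0]]
print(lbp_3x3(img, 1, 1))  # 34: only the bits for the pixels above and below are set
```

Even this simple descriptor exposes design choices (neighbourhood size, threshold rule) that must be tuned per database, which is the limitation the CNN-based method sidesteps.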
Face-based biometric recognition systems are widely employed in places such as airports, immigration offices, and companies, and in applications such as mobile phones. However, the security of such systems can be compromised by attackers (unauthorized persons) who bypass recognition using artificial facial images. In addition, most previous studies on face presentation attack detection have utilized only spatial information. To address this problem, we propose a visible-light camera-based presentation attack detection method that uses both spatial and temporal information, combining deep features extracted by a stacked convolutional neural network (CNN)-recurrent neural network (RNN) with handcrafted features. Through experiments on two public datasets, we demonstrate that temporal information is sufficient for detecting attacks using face images. We also establish that the handcrafted image features efficiently enhance the detection performance of the deep features, and that the proposed method outperforms previous methods.
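One common way to let handcrafted features "enhance" deep features is score-level fusion. The sketch below shows a weighted-sum fusion of two liveness scores; the weight `w` and threshold are chosen purely for illustration, and the paper's actual combination scheme may differ:

```python
def fuse_scores(deep_score, handcrafted_score, w=0.5):
    """Weighted-sum (score-level) fusion of two liveness scores in [0, 1]."""
    return w * deep_score + (1 - w) * handcrafted_score

def is_live(deep_score, handcrafted_score, threshold=0.5, w=0.5):
    """Classify a sample as live if the fused score reaches the threshold."""
    return fuse_scores(deep_score, handcrafted_score, w) >= threshold

# A spoof sample scored low by the CNN-RNN branch but higher by the
# handcrafted branch: fusion weighted toward the deep score still rejects it.
print(is_live(0.2, 0.8, w=0.7))  # fused score 0.38 -> False
```

The benefit claimed in the abstract corresponds to the fused score separating live and spoof samples better than either branch alone.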
Existing iris recognition systems are heavily dependent on specific conditions, such as the image-acquisition distance and a stop-and-stare environment, which require significant user cooperation. In environments where user cooperation is not guaranteed, prevailing iris-region segmentation schemes are confronted with many problems, such as heavy eyelash occlusion, invalid off-axis rotations, motion blur, and irregular reflections in the eye area. In addition, iris recognition in the visible-light environment has been investigated to avoid the use of an additional near-infrared (NIR) camera and NIR illuminator, which increases the difficulty of segmenting the iris region accurately owing to the environmental noise of visible light. To address these issues, this study proposes a two-stage iris segmentation scheme based on a convolutional neural network (CNN), which is capable of accurate iris segmentation in the severely noisy environments of iris recognition with a visible-light camera sensor. In the experiments, the noisy iris challenge evaluation part-II (NICE-II) training database (selected from the UBIRIS.v2 database) and the mobile iris challenge evaluation (MICHE) dataset were used. Experimental results showed that our method outperformed existing segmentation methods.
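Iris segmentation quality in NICE-style evaluations is conventionally scored as the average pixel-wise disagreement between the predicted and ground-truth binary masks. A minimal sketch of that metric follows; it reflects the standard evaluation convention for such benchmarks, not code from the paper:

```python
def segmentation_error(pred, gt):
    """Fraction of pixels where two equal-sized binary masks disagree
    (an XOR-based error, in the style of NICE segmentation scoring)."""
    total = sum(len(row) for row in gt)
    wrong = sum(p != g
                for prow, grow in zip(pred, gt)
                for p, g in zip(prow, grow))
    return wrong / total

# Toy 2x3 masks: 1 = iris pixel, 0 = background; 2 of 6 pixels disagree.
gt   = [[0, 1, 1],
        [0, 1, 1]]
pred = [[0, 1, 0],
        [1, 1, 1]]
print(segmentation_error(pred, gt))  # 2/6
```

A lower value means a more accurate segmentation, with 0.0 denoting a pixel-perfect mask.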
Many recent studies on biometrics employ convolutional neural networks (CNNs), which require a large amount of labeled training data. However, biometric data are considered sensitive personal information, and it is difficult to obtain large amounts of data owing to privacy concerns. Training with a small amount of data is a major cause of overfitting and low testing accuracy. To mitigate this, previous studies have performed data augmentation based on geometric transforms and the adjustment of image brightness. Nevertheless, the data created by these methods are highly correlated with the original data and cannot adequately reflect individual diversity. To resolve this problem, this study proposes iris image augmentation based on a conditional generative adversarial network (cGAN), as well as a method for improving recognition performance using this augmentation. In our method, normalized iris images generated through arbitrary changes in the iris and pupil coordinates are used as input to the cGAN-based model to generate iris images. Owing to the limitations of the cGAN model, augmentation using the periocular region was found to fail to improve performance; therefore, only the iris region was used as input to the cGAN model. The proposed augmentation method was tested on the NICE-II training dataset (selected from the UBIRIS.v2 database), the MICHE database, and the CASIA-Iris-Distance database. The results showed improved recognition performance compared with existing studies.
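The "normalized iris images" mentioned above are conventionally produced by Daugman-style rubber-sheet normalization, which unwraps the annulus between the pupil and iris boundary circles into a fixed-size polar grid. The following is a minimal pure-Python sketch of that standard preprocessing step, stated as an assumption about the pipeline rather than the paper's own code:

```python
import math

def normalize_iris(sample, pupil, iris, radial_res=4, angular_res=8):
    """Daugman-style rubber-sheet normalization: map the annulus between
    the pupil circle and the iris circle onto a fixed polar grid.
    `sample(x, y)` returns image intensity at a (float) coordinate;
    `pupil` and `iris` are (cx, cy, radius) boundary circles."""
    px, py, pr = pupil
    ix, iy, ir = iris
    out = []
    for i in range(radial_res):
        r = i / (radial_res - 1)  # 0 at the pupil boundary, 1 at the iris boundary
        row = []
        for j in range(angular_res):
            theta = 2 * math.pi * j / angular_res
            # Boundary points at this angle, then linear interpolation between them.
            x0, y0 = px + pr * math.cos(theta), py + pr * math.sin(theta)
            x1, y1 = ix + ir * math.cos(theta), iy + ir * math.sin(theta)
            row.append(sample((1 - r) * x0 + r * x1, (1 - r) * y0 + r * y1))
        out.append(row)
    return out

# Toy "image" whose intensity equals the distance from the origin, with
# concentric pupil (radius 2) and iris (radius 5) circles centred there.
radial_gradient = lambda x, y: math.hypot(x, y)
norm = normalize_iris(radial_gradient, pupil=(0, 0, 2), iris=(0, 0, 5))
print(norm[0][0], norm[-1][0])  # 2.0 at the pupil boundary, 5.0 at the iris boundary
```

Because the grid size is fixed, perturbing the pupil and iris circle parameters, as the abstract describes, yields differently sampled normalized images from the same eye.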
Chrysanthemum zawadskii var. latilobum (CZ) has been used as a beverage or tea and as a folk medicine for treating diverse inflammatory diseases. Nevertheless, its therapeutic effect on arthritis remains unknown. In this study, we investigated the antiarthritic effect and mechanism of action of CZ both in vitro and in vivo. To assess the antiarthritic effect, mouse models of type II collagen-induced arthritis (CIA) were used, in which the clinical arthritis index and histopathological changes were evaluated. Reverse transcriptase-polymerase chain reaction (RT-PCR), western blotting, electrophoretic mobility shift assay (EMSA), and other biological methods were adopted to measure the effect of CZ on arthritis and to elucidate the underlying mechanism of action. CZ greatly suppressed CIA, the histopathological score, bone erosion, and osteoclast differentiation. Mechanistically, CZ inhibited the production of various inflammatory and arthritic mediators, including inflammatory cytokines, matrix metalloproteinases (MMPs), and chemokines. Of note, CZ significantly suppressed activation of the NF-κB pathway in vivo. Thus, CZ exerted an antiarthritic effect in CIA mice by curbing the production of crucial inflammatory and arthritic mediators. This study warrants further investigation of CZ for use in human rheumatoid arthritis (RA).
Age estimation using face images has been widely employed across various fields. Because the characteristics of face images vary greatly depending on race, camera type, lighting, and other environmental factors, previous methods are not accurate on heterogeneous face image databases that they were not trained on. Various attempts have been made to combine different heterogeneous databases for training; however, this extends the training time, and the diverse environmental variables in the databases cannot be sufficiently learned. To address these issues, this study proposes a modified cycle-consistent generative adversarial network (CycleGAN) that generates an even distribution of heterogeneous face data, and an age-estimation method effective for heterogeneous data based on a comparative CNN for age estimation (CCNNAE). In addition, we propose a method for reducing the errors caused by image transformation in the modified CycleGAN through adaptive selection between the transformed and original face images as input to CCNNAE, based on the age similarity between the transformed face image and the original face image. Experiments with two open databases, MORPH and MegaAge, showed that our method outperformed state-of-the-art methods.
Index Terms: Age estimation, heterogeneous database, modified CycleGAN, CCNNAE.
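The adaptive-selection step can be read as a simple rule: keep the CycleGAN-transformed face only when an age estimator judges it age-consistent with the original, otherwise fall back to the original image. The sketch below illustrates that rule with a toy estimator and a hypothetical threshold (`max_gap`); the paper's actual similarity measure and decision rule are not specified here:

```python
def select_training_image(original_img, transformed_img, estimate_age, max_gap=5):
    """Adaptive selection: use the CycleGAN-transformed image only if its
    estimated age stays close to the original image's estimated age
    (i.e., the transformation preserved the age cues)."""
    gap = abs(estimate_age(transformed_img) - estimate_age(original_img))
    return transformed_img if gap <= max_gap else original_img

# Toy age estimator: an "image" here is just a dict carrying a fake age cue.
estimator = lambda img: img["age_cue"]
orig = {"name": "original", "age_cue": 30}
ok_transform = {"name": "transformed", "age_cue": 32}    # age preserved
bad_transform = {"name": "transformed", "age_cue": 45}   # age distorted
print(select_training_image(orig, ok_transform, estimator)["name"])   # transformed
print(select_training_image(orig, bad_transform, estimator)["name"])  # original
```

The design intent is to retain the distribution-evening benefit of the CycleGAN transformation while discarding transformed samples whose age information was corrupted.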