Developing a breast cancer screening method is very important to facilitate early breast cancer detection and treatment. Building a screening method on a medical imaging modality that neither damages body tissue (non-invasive) nor involves physical contact is challenging. Thermography, a non-invasive and non-contact cancer screening method, can detect tumors at an early stage, even under precancerous conditions, by observing the temperature distribution in both breasts. The thermograms obtained by thermography can be interpreted using deep learning models such as convolutional neural networks (CNNs), which can automatically classify breast thermograms into categories such as normal and abnormal. Despite their demonstrated utility, CNNs have not been widely used in breast thermogram classification. In this study, we aimed to summarize current work and progress in breast cancer detection based on thermography and CNNs. We first discuss the potential of breast thermography for early breast cancer detection and provide an overview of available breast thermal datasets, including those that are publicly accessible. We also discuss the characteristics of breast thermograms and the differences between healthy and cancerous thermographic patterns. Breast thermogram classification using a CNN model is described step by step, including a simulation example illustrating feature learning. We cover most research related to the implementation of deep neural networks for breast thermogram classification and propose future research directions for improving CNN performance: developing representative datasets, feeding segmented images, assigning a good kernel, and building a lightweight CNN model.

INDEX TERMS breast cancer; convolutional neural network; deep learning; early detection; thermogram
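The feature-learning step mentioned in this abstract can be illustrated with a toy example: a convolutional layer slides a small kernel over the image and produces a feature map that responds to local patterns such as edges. The sketch below is illustrative only and is not taken from the paper; it assumes a hand-crafted edge kernel in place of a learned one and a synthetic 6×6 "thermogram" with a single hot spot.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy "thermogram": higher values represent warmer regions.
thermo = np.zeros((6, 6))
thermo[2:4, 2:4] = 1.0

# A hand-crafted vertical-edge kernel; during training, a CNN learns
# kernels like this automatically from the data.
edge = np.array([[1., 0., -1.],
                 [1., 0., -1.],
                 [1., 0., -1.]])

fmap = conv2d(thermo, edge)
print(fmap.shape)  # (4, 4): the feature map highlights the hot spot's edges
```

The extreme values of `fmap` mark the left and right boundaries of the hot spot, which is exactly the kind of localized response a CNN stacks and pools to build higher-level features.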
We propose two identification methods for JPEG-coded images. The goal is to identify, in a fast and robust manner, images that are compressed from the same original image at various compression ratios. The first approach avoids identification leakage, i.e., false negatives (FN), but can produce a few false positives (FP). The second approach avoids both FN and FP at the cost of a slightly longer processing time. By combining the two schemes, faster and more accurate identification can be achieved, in which both FN and FP are avoided.
In recent years, cross-spectral matching has been gaining attention in various biometric systems for identification and verification purposes. Cross-spectral matching allows images taken under different electromagnetic spectra to be matched against each other. One of the keys to successful cross-spectral matching is the set of features used to represent an image; the feature extraction step is therefore an essential task. Researchers have improved matching accuracy by developing robust features. This paper presents the features most commonly used in cross-spectral matching. The survey covers the basic concepts of cross-spectral matching, visual and thermal feature extraction, and state-of-the-art descriptors. Finally, the paper describes better feature selection methods for cross-spectral matching.
Early detection of plant diseases is one of the main keys to handling diseases quickly and successfully. The purpose of this study is to find a simpler CNN architecture that achieves an acceptable compromise between accuracy and simplicity for detecting diseases in tomato plants based on leaf images. Such a simpler architecture would allow a standalone, independent system to be deployed in the field to classify and identify tomato plant diseases at low cost and with limited resources. The proposed architecture was developed from a baseline CNN architecture and is intended to classify 10 classes of tomato leaves taken from the Plant Village dataset: one healthy class and nine leaf disease classes. In this study, the performance of the proposed architecture and comparative architectures is examined on the same dataset. The comparative architectures are commonly used existing CNN architectures, namely VGG Net, ShuffleNet, and SqueezeNet. The results indicate that the proposed architecture achieves accuracy competitive with the existing architectures while being much smaller and faster.
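One common way to quantify how "lightweight" a CNN is, is its trainable parameter count. The helper below is a generic sketch, not the paper's architecture: it counts the parameters of a single convolutional layer and shows how shrinking the number of filters shrinks the model, which is the kind of trade-off the abstract describes.

```python
def conv_params(kh, kw, c_in, c_out):
    """Trainable parameters of a conv layer: one kh x kw x c_in kernel
    plus one bias term per output channel."""
    return (kh * kw * c_in + 1) * c_out

# Hypothetical comparison on an RGB input (c_in = 3): a 3x3 layer with
# 64 filters versus a slimmer 16-filter variant.
print(conv_params(3, 3, 3, 64))  # 1792 parameters
print(conv_params(3, 3, 3, 16))  # 448 parameters
```

Reducing filter counts (and kernel sizes) layer by layer is the basic lever behind compact designs such as SqueezeNet, at the cost of representational capacity that must be verified against accuracy on the target dataset.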
Cross-spectral iris recognition refers to the ability of a system to identify iris images acquired in different electromagnetic spectra: an iris captured in the near-infrared (NIR) spectrum is matched against an iris obtained in the visible light (VIS) spectrum to boost recognition performance. In cross-spectral iris recognition, the illumination difference between NIR and VIS images significantly degrades recognition performance. Existing methods have therefore only achieved recognition performance with an equal error rate (EER) larger than 5%, and reaching an EER below 5% remains a challenging issue for cross-spectral matching. In this paper, we improve iris recognition performance by cascading the Gradientfaces-based normalization technique (GRF) with a standard (conventional) iris recognition method to alleviate the illumination effect. In addition, we integrate the GRF with a Gabor filter, a difference-of-Gaussians (DoG) filter, and texture descriptors, namely binarized statistical image features (BSIF) and the local binary pattern (LBP). The experimental results show that the GRF can boost cross-spectral iris recognition performance, with an EER of 1.69%. The best cross-spectral iris recognition performance is achieved when the GRF is integrated with the Gabor filter and BSIF.
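Of the texture descriptors this abstract mentions, the local binary pattern (LBP) is simple to illustrate: each pixel is encoded by thresholding its eight neighbours against its own intensity and packing the results into a byte. The sketch below is a minimal, unoptimized version of the basic operator (no radius parameter or uniform-pattern mapping, which the paper's implementation may well use).

```python
import numpy as np

def lbp(image):
    """Basic 8-neighbour local binary pattern for interior pixels."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets, starting from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = image[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if image[i + di, j + dj] >= c:  # threshold against centre
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out

# A dark centre surrounded by brighter neighbours yields the code 255.
img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=float)
print(lbp(img))  # [[255]]
```

A histogram of these codes over an image region serves as the texture feature vector; its insensitivity to monotonic intensity changes is what makes LBP attractive for cross-spectral settings.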
The designs of Islamic women's apparel change dynamically, as shown by the emergence of online shops selling clothing with fast updates of the newest models. Traditionally, buying clothes online is done by submitting keywords to a retrieval system. This approach has the drawback that keywords cannot describe clothing designs precisely. Therefore, searching based on content, known as content-based image retrieval (CBIR), is required. One of the features used in CBIR is shape. This article presents a new normalization approach to the Pyramid Histogram of Oriented Gradients (PHOG) as a means of shape feature extraction for women's Islamic clothing in a retrieval system. We refer to the proposed approach as normalized PHOG (NPHOG). The Euclidean distance measures the similarity of the clothing. The performance of the system was evaluated on 340 clothing images comprising four clothing categories with 85 images each: blouse-pants, long dress, outerwear, and tunic. The recall and precision parameters measured the retrieval performance; the Histogram of Oriented Gradients (HOG) and PHOG were the methods used for comparison. The experiments showed that NPHOG improved on the HOG and PHOG performance in three clothing categories.
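The HOG family of descriptors underlying PHOG builds orientation histograms from image gradients and then normalizes them; NPHOG's contribution is a different normalization, whose details the abstract does not give. The sketch below only illustrates the generic principle with a plain L2-normalized orientation histogram for a single cell, so the normalization shown here is an assumption, not the paper's method.

```python
import numpy as np

def hog_cell(image, nbins=8):
    """Gradient-orientation histogram of one cell, L2-normalised."""
    gy, gx = np.gradient(image.astype(float))       # row and column gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)         # unsigned orientation
    bins = np.minimum((ang / np.pi * nbins).astype(int), nbins - 1)
    hist = np.zeros(nbins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                                # magnitude-weighted voting
    return hist / (np.linalg.norm(hist) + 1e-12)    # unit L2 norm

# A vertical step edge: all gradient energy falls in the 0-radian bin.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = hog_cell(img)
print(h.argmax())  # 0
```

PHOG simply concatenates such histograms over a spatial pyramid (the whole image, then 2×2 tiles, then 4×4, and so on), so the choice of per-level normalization directly shapes the final feature vector that the Euclidean distance compares.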