Significance: Oral cancer is a common global health issue. Early diagnosis of cancerous and potentially malignant disorders in the oral cavity would significantly increase the survival rate of oral cancer. Previously reported smartphone-based image detection methods for oral cancer mainly focus on demonstrating the effectiveness of their methodology, yet there is still no systematic study of how to improve diagnostic accuracy for oral diseases using hand-held smartphone photographic images. Aim: We present an effective smartphone-based imaging diagnosis method, powered by a deep learning algorithm, to address the challenges of automatic detection of oral diseases. Approach: We conducted a retrospective study. First, a simple yet effective centered rule image-capturing approach was proposed for collecting oral cavity images. Then, based on this method, a medium-sized oral dataset with five categories of diseases was created, and a resampling method was presented to alleviate the effect of image variability from hand-held smartphone cameras. Finally, a recent deep learning network (HRNet) was introduced to evaluate the performance of our method for oral cancer detection. Results: The proposed method achieved a sensitivity of 83.0%, specificity of 96.6%, precision of 84.3%, and F1 score of 83.6% on 455 test images. The proposed "center positioning" method scored about 8% higher than a simulated "random positioning" method in terms of F1 score, the resampling method yielded an additional 6% performance improvement, and the introduced HRNet achieved slightly better performance than VGG16, ResNet50, and DenseNet169 with respect to sensitivity, specificity, precision, and F1 score. Conclusions: Capturing oral images centered on the lesion, resampling the cases in the training set, and using HRNet can effectively improve the performance of deep learning algorithms for oral cancer detection.
Smartphone-based imaging combined with deep learning has good potential for primary oral cancer diagnosis.
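The F1 score reported above can be cross-checked from the reported precision and sensitivity (recall), since F1 is their harmonic mean. A minimal sketch using the quoted figures:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported above: precision 84.3%, sensitivity (recall) 83.0%.
f1 = f1_score(0.843, 0.830)
print(round(f1, 3))  # 0.836, matching the reported F1 of 83.6%
```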
Cracks are one of the most common categories of pavement distress and may threaten road and highway safety, so a reliable and efficient pixel-level crack detection method is necessary for real-time crack measurement. However, many existing encoder-decoder architectures for crack detection are time-consuming: the decoder module typically contains many convolutional layers and feature channels, so performance relies heavily on computing resources, which is a handicap in resource-limited scenarios. In this study, we propose a simple and effective method to boost algorithmic efficiency in encoder-decoder architectures for crack detection. We develop a switch module, called SWM, that predicts whether an image is positive (contains a crack) or negative and skips the decoder module when it is negative, saving computation time. This method uses the encoder module as a fixed feature extractor and only needs a light-weight classifier head placed at the end of the encoder module to output the final class probability. We choose the classical UNet and DeepCrack as examples of encoder-decoder architectures to show how SWM is integrated into them to reduce computational complexity. Evaluations on the public CrackTree206 and AIMCrack datasets demonstrate that our method significantly boosts the efficiency of encoder-decoder architectures in all tasks without affecting performance. SWM can also be easily embedded into other encoder-decoder architectures for further improvement. The source code is available at https://github.com/hanshenchen/crack-detection. INDEX TERMS: Computer vision, deep learning, encoder-decoder architecture, crack detection.
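The switch idea can be illustrated with a control-flow sketch. The encoder, classifier head, and decoder below are hypothetical stand-ins (the actual SWM sits on a CNN encoder such as UNet's); the point is only that negative images bypass the expensive decoder:

```python
import numpy as np

def encoder(image):
    # Stand-in for the CNN encoder: a pooled per-channel feature vector.
    return image.mean(axis=(0, 1))

def switch_head(features, weights, bias, threshold=0.5):
    # Light-weight classifier head: logistic probability that a crack is present.
    prob = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    return prob >= threshold

def decoder(features, out_shape):
    # Stand-in for the heavy decoder that produces a pixel-level crack map.
    return np.ones(out_shape)

def predict(image, weights, bias):
    features = encoder(image)
    if not switch_head(features, weights, bias):
        # Negative image: skip the decoder entirely and emit an empty mask.
        return np.zeros(image.shape[:2])
    return decoder(features, image.shape[:2])

weights, bias = np.array([2.0, 2.0, 2.0]), -1.0
dark = np.zeros((4, 4, 3))    # hypothetical crack-free image
bright = np.ones((4, 4, 3))   # hypothetical image containing a crack
print(predict(dark, weights, bias).sum())    # 0.0  (decoder skipped)
print(predict(bright, weights, bias).sum())  # 16.0 (decoder ran)
```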
Type 2 diabetes mellitus (T2DM) is a metabolic disease characterized by hyperglycemia. Accumulating investigations have explored the important role of hereditary factors in T2DM occurrence. Some functional microRNA (miR) polymorphisms may affect interactions with their target mRNAs and result in aberrant expression. Thus, miR variants might serve as biomarkers of T2DM susceptibility. In this study, we recruited 502 T2DM cases and 782 healthy subjects. We selected the miR-146a rs2910164 C>G, miR-196a2 rs11614913 T>C, and miR-499 rs3746444 A>G loci and investigated whether these miR loci could influence T2DM occurrence. A Bonferroni correction was applied. After adjustment, we found that the rs2910164 single nucleotide polymorphism (SNP) was a protective factor for T2DM (GG vs. CC/CG: adjusted P=0.010), especially in the never-drinking (GG vs. CC/CG: adjusted P=0.001) and BMI ≥24 kg/m2 (GG vs. CC/CG: adjusted P=0.002) subgroups. We also identified that the rs11614913 SNP was a protective factor for T2DM in smoking subjects (CC/TC vs. TT: adjusted P=0.002). When we analyzed SNP-SNP interactions with T2DM susceptibility, the rs11614913/rs3746444, rs2910164/rs3746444, and rs11614913/rs2910164 combinations were not associated with T2DM risk. In summary, the present study highlights that the rs2910164 SNP decreases susceptibility to T2DM, especially in the BMI ≥24 kg/m2 and never-drinking subgroups. In addition, we identify that the rs11614913 C allele significantly decreases T2DM susceptibility in the smoking subgroup.
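The Bonferroni correction mentioned above simply scales each raw P value by the number of tests performed and caps the result at 1. A minimal sketch (the P values below are illustrative, not the study's data):

```python
def bonferroni(p_values):
    # Multiply each raw P value by the number of comparisons, capped at 1.0.
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Illustrative raw P values for three SNP tests (not the study's data).
print(bonferroni([0.004, 0.02, 0.5]))
```

A corrected P value below the significance threshold (e.g. 0.05) then remains significant after accounting for the three comparisons.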
Cracks are one of the most common types of surface defects on various engineering infrastructures. Vision-based crack detection is challenging due to variation in the size, shape, and appearance of cracks. Existing convolutional neural network (CNN)-based crack detection networks, typically using encoder-decoder architectures, may suffer from loss of spatial resolution in the high-to-low and low-to-high resolution processes, affecting prediction accuracy. Therefore, we propose HRNete, an enhanced version of the high-resolution network (HRNet), obtained by removing the downsampling operation in the initial stage, reducing the number of high-resolution representation layers, using dilated convolution, and introducing hierarchical feature integration. Experiments show that the proposed HRNete, with relatively few parameters, achieves more accurate and robust performance than other recent approaches.
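Dilated convolution, one of the modifications listed above, enlarges the receptive field without adding parameters or downsampling: a kernel of size k with dilation d spans d*(k-1)+1 input positions. A minimal 1-D sketch (not the paper's implementation):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    # Valid-mode 1-D convolution (correlation) with a dilated kernel.
    k = len(kernel)
    span = dilation * (k - 1) + 1  # effective receptive field
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
kernel = [1.0, 1.0, 1.0]
print(dilated_conv1d(x, kernel, dilation=1))  # sums of 3 adjacent values
print(dilated_conv1d(x, kernel, dilation=2))  # same kernel, span of 5 positions
```

With dilation 2, the three-tap kernel covers five input positions, which is how the modified network widens context while keeping resolution.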