Images captured in low-light conditions usually suffer from very low contrast, which greatly increases the difficulty of subsequent computer vision tasks. In this paper, a low-light image enhancement model based on a convolutional neural network and Retinex theory is proposed. First, we show that multi-scale Retinex is equivalent to a feedforward convolutional neural network with different Gaussian convolution kernels. Motivated by this fact, we propose a convolutional neural network (MSR-net) that directly learns an end-to-end mapping between dark and bright images. Fundamentally different from existing approaches, low-light image enhancement is here treated as a machine learning problem: most of the parameters in our model are optimized by back-propagation, whereas the parameters of traditional models depend on manual tuning. Experiments on a number of challenging images demonstrate the advantages of our method over other state-of-the-art methods from both qualitative and quantitative perspectives.
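The equivalence noted above rests on the structure of classic multi-scale Retinex: each scale subtracts the log of a Gaussian-blurred image from the log of the input, i.e. a fixed convolution followed by pointwise nonlinearities. A minimal sketch of that classic formulation (not the learned MSR-net itself; the scale values are the commonly used defaults, and the test image is synthetic):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Classic MSR: average of log(image) - log(Gaussian-blurred image)
    over several scales. Each scale is one fixed Gaussian convolution,
    which is why MSR maps onto a feedforward CNN with frozen kernels."""
    img = img.astype(np.float64) + eps
    out = np.zeros_like(img)
    for sigma in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, sigma) + eps)
    return out / len(sigmas)

# Illustrative use on a synthetic dark image.
dark = np.random.rand(32, 32) * 0.1
enhanced = multi_scale_retinex(dark)
```

MSR-net replaces these hand-chosen Gaussian kernels with convolution weights learned end-to-end by back-propagation.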
Background: Metabolic syndrome traits play an important role in the development of colorectal cancer. Adipokines, key cellular mediators of metabolic syndrome, may induce carcinogenesis when abnormal. Methodology/Principal Findings: To investigate whether polymorphisms of important adipokines, adiponectin (ADIPOQ) and its receptors, either alone or in combination with environmental factors, are implicated in colorectal cancer, a two-stage case-control study was conducted. In the first stage, we evaluated 24 tag single nucleotide polymorphisms (tag SNPs) across the ADIPOQ ligand and two ADIPOQ receptors (ADIPOR1 and ADIPOR2) among 470 cases and 458 controls. One SNP with a promising association was then analyzed in stage 2 among 314 cases and 355 controls. In our study, ADIPOQ rs1063538 was consistently associated with increased colorectal cancer risk, with an odds ratio (OR) of 1.94 (95% CI: 1.48–2.54) for the CC genotype compared with the TT genotype. In two-factor gene-environment interaction analyses, rs1063538 showed significant interactions with smoking status, family history of cancer, and alcohol use, with ORs of 4.52 (95% CI: 2.78–7.34), 3.18 (95% CI: 1.73–5.82), and 1.97 (95% CI: 1.27–3.04) for smokers, individuals with a family history of cancer, or drinkers carrying the CC genotype compared with non-smokers, individuals without a family history of cancer, or non-drinkers carrying the TT genotype, respectively. Multifactor gene-environment interaction analysis revealed significant interactions between ADIPOQ rs1063538, ADIPOR1 rs1539355, smoking status, and BMI.
Individuals carrying one, two, and at least three risk factors had 1.18-fold (95% CI: 0.89 to 1.58), 1.87-fold (95% CI: 1.38 to 2.54), and 4.39-fold (95% CI: 2.75 to 7.01) increased colorectal cancer risk compared with those carrying no risk factor, respectively (P trend < 0.0001). Conclusions/Significance: Our results suggest that variants in ADIPOQ may contribute to increased colorectal cancer risk in the Chinese population, and that this contribution may be modified by environmental factors such as smoking status, family history of cancer, and BMI.
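The odds ratios and 95% confidence intervals reported throughout this abstract follow the standard log-OR normal approximation for a 2x2 case-control table. A minimal sketch of that computation, using illustrative counts rather than the study's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 case-control table:
               exposed  unexposed
        case      a         b
        control   c         d
    CI from the normal approximation on log(OR),
    with standard error sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 20 exposed cases, 10 unexposed cases,
# 10 exposed controls, 20 unexposed controls -> OR = 4.0.
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
```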
Background: Impaired function of the masticatory muscles can lead to trismus. Routine delineation of these muscles during planning may improve dose tracking and facilitate dose reduction, resulting in decreased radiation-related trismus. This study aimed to compare a deep learning model with a commercial atlas-based model for fast auto-segmentation of the masticatory muscles on head and neck computed tomography (CT) images. Material and methods: Paired masseter (M), temporalis (T), and medial and lateral pterygoid (MP, LP) muscles were manually segmented on 56 CT images. CT images were randomly divided into training (n = 27) and validation (n = 29) cohorts. Two methods were used for automatic delineation of the masticatory muscles (MMs): deep learning auto-segmentation (DLAS) and atlas-based auto-segmentation (ABAS). The automatic algorithms were evaluated using the Dice similarity coefficient (DSC), recall, precision, Hausdorff distance (HD), HD95, and mean surface distance (MSD). A consolidated score was calculated by normalizing the metrics against interobserver variability and averaging over all patients. Differences in dose (ΔDose) to MMs between DLAS and ABAS segmentations were assessed. A paired t-test was used to compare the geometric and dosimetric differences between the DLAS and ABAS methods. Results: DLAS outperformed ABAS in delineating all MMs (p < 0.05). The DLAS mean DSC for M, T, MP, and LP ranged from 0.83 ± 0.03 to 0.89 ± 0.02, while the ABAS mean DSC ranged from 0.79 ± 0.05 to 0.85 ± 0.04. The mean values for recall, HD, HD95, and MSD also improved with DLAS. Interobserver variation revealed the highest variability in DSC and MSD for both T and MP, and the highest scores were achieved for T by both automatic algorithms. With few exceptions, the mean ΔD98%, ΔD95%, ΔD50%, and ΔD2% for all structures were below 10% for DLAS and ABAS, with no detectable statistical difference (p > 0.05).
DLAS-based contours had dose endpoints that more closely matched those of the manually segmented contours than did ABAS-based contours. Conclusions: DLAS auto-segmentation of the masticatory muscles for head and neck radiotherapy improved segmentation accuracy compared with ABAS, with no qualitative difference in dosimetric endpoints compared with manually segmented contours.
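The Dice similarity coefficient used as the primary overlap metric above has a simple definition on binary masks: twice the intersection divided by the sum of the two mask volumes. A minimal sketch on toy 2D masks (the study's contours are 3D, but the formula is identical):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1 = perfect overlap, 0 = disjoint."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 4x4 masks of 8 pixels each, overlapping on one row (4 pixels):
# DSC = 2 * 4 / (8 + 8) = 0.5.
auto = np.zeros((4, 4)); auto[:2, :] = 1
manual = np.zeros((4, 4)); manual[1:3, :] = 1
score = dice(auto, manual)
```

DSC rewards volumetric overlap but is insensitive to outlier points on the contour surface, which is why the study also reports surface-based metrics (HD, HD95, MSD).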