Image dehazing aims to recover the uncorrupted content from a hazy image. Instead of leveraging traditional low-level or handcrafted image priors as restoration constraints, e.g., dark channels and increased contrast, we propose an end-to-end gated context aggregation network to directly restore the final haze-free image. In this network, we adopt the recent smoothed dilation technique to remove the gridding artifacts caused by the widely used dilated convolution, at the cost of negligible extra parameters, and leverage a gated sub-network to fuse features from different levels. Extensive experiments demonstrate that our method surpasses previous state-of-the-art methods by a large margin, both quantitatively and qualitatively. In addition, to demonstrate the generality of the proposed method, we further apply it to the image deraining task, where it also achieves state-of-the-art performance. Code has been made available at https://github.com/cddlyf/GCANet.
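The gated fusion idea can be illustrated with a minimal PyTorch sketch. This is an assumption for illustration only: the class name `GatedFusion`, the three-level input, the channel count, and the softmax normalization are hypothetical choices, not details taken from the paper's released code.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse feature maps from three network levels with learned gates.

    A 3x3 convolution over the concatenated features predicts one gate
    map per level; the fused output is the gate-weighted sum of levels.
    """
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Conv2d(channels * 3, 3, kernel_size=3, padding=1)

    def forward(self, f_low, f_mid, f_high):
        gates = self.gate(torch.cat([f_low, f_mid, f_high], dim=1))
        g = torch.softmax(gates, dim=1)  # normalize gates across levels (a design choice of this sketch)
        return g[:, 0:1] * f_low + g[:, 1:2] * f_mid + g[:, 2:3] * f_high

# Usage: three same-shape feature maps taken from different depths.
f1 = f2 = f3 = torch.randn(1, 64, 32, 32)
out = GatedFusion(64)(f1, f2, f3)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```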
Figure 1: Colorization results of black-and-white photographs. Our method can generate multiple plausible colorizations given different references. Input images (from left to right, top to bottom): Leroy Skalstad/pixabay, Peter van der Sluijs/wikimedia.

Abstract. We propose the first deep learning approach for exemplar-based local colorization. Given a reference color image, our convolutional neural network directly maps a grayscale image to an output colorized image. Rather than using hand-crafted rules as in traditional exemplar-based methods, our end-to-end colorization network learns how to select, propagate, and predict colors from large-scale data. The approach performs robustly and generalizes well even when the reference image is unrelated to the input grayscale image. More importantly, as opposed to other learning-based colorization methods, our network allows the user to achieve customizable results by simply feeding different references. To further reduce the manual effort of selecting references, the system automatically recommends references with our proposed image retrieval algorithm, which considers both semantic and luminance information. Colorization can then be performed fully automatically by simply picking the top reference suggestion. Our approach is validated through a user study and favorable quantitative comparisons to state-of-the-art methods. Furthermore, our approach naturally extends to video colorization. Our code and models will be freely available for public use.
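The reference recommendation step could be sketched as a ranking that blends the two cues named above. This is a hypothetical helper: the feature extractor, the histogram comparison, and the 0.7/0.3 weighting are assumptions, not the paper's actual retrieval algorithm.

```python
import numpy as np

def luminance_histogram(gray, bins=32):
    """Normalized luminance histogram of a grayscale image in [0, 1]."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def rank_references(query_feat, query_gray, candidates, w_sem=0.7, w_lum=0.3):
    """Score candidate references by semantic + luminance similarity.

    candidates: list of (semantic_feature, grayscale_image) pairs, where
    semantic features could come from any pretrained CNN (an assumption).
    Returns candidate indices sorted from best to worst.
    """
    q_hist = luminance_histogram(query_gray)
    scores = []
    for feat, gray in candidates:
        sem = float(np.dot(query_feat, feat) /
                    (np.linalg.norm(query_feat) * np.linalg.norm(feat) + 1e-8))
        # Histogram intersection distance mapped to a [0, 1] similarity.
        lum = 1.0 - 0.5 * np.abs(q_hist - luminance_histogram(gray)).sum()
        scores.append(w_sem * sem + w_lum * lum)
    return np.argsort(scores)[::-1]
```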
This paper presents the first end-to-end network for exemplar-based video colorization. The main challenge is to achieve temporal consistency while remaining faithful to the reference style. To address this issue, we introduce a recurrent framework that unifies the semantic correspondence and color propagation steps. Both steps allow a provided reference image to guide the colorization of every frame, thus reducing accumulated propagation errors. Video frames are colorized in sequence based on the colorization history, and their coherence is further enforced by a temporal consistency loss. All of these components, learned end-to-end, help produce realistic videos with good temporal stability. Experiments show our results are superior to state-of-the-art methods both quantitatively and qualitatively.
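A temporal consistency loss of this kind is commonly implemented by warping the previous output with optical flow and penalizing differences in non-occluded regions. The sketch below is a generic formulation under that assumption; the flow estimator and occlusion mask are taken as given, and this is not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp a frame (B, C, H, W) by optical flow (B, 2, H, W)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().to(frame.device)  # (2, H, W)
    coords = grid.unsqueeze(0) + flow
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_norm = torch.stack([coords_x, coords_y], dim=-1)  # (B, H, W, 2)
    return F.grid_sample(frame, grid_norm, align_corners=True)

def temporal_consistency_loss(curr_out, prev_out, flow, occlusion_mask):
    """L1 distance between the current output and the flow-warped previous
    output, masked so occluded pixels are not penalized."""
    warped_prev = warp(prev_out, flow)
    return (occlusion_mask * (curr_out - warped_prev).abs()).mean()
```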
Fig. 1. Our method leverages semantically-meaningful dense correspondences between images, thus achieving a more accurate object-to-object color transfer than other methods (left; compared with Pitie et al. [2005], Luan et al. [2017], and Liao et al. [2017]). Moreover, our method can be successfully extended to multiple references (right; "street autumn" and "street sakura" references). Input images: Bill Damon (Source) and PicsWalls.com (Reference).

Abstract. We propose a new algorithm for color transfer between images that have perceptually similar semantic structures. We aim to achieve a more accurate color transfer that leverages semantically-meaningful dense correspondence between images. To accomplish this, our algorithm uses neural representations for matching. Additionally, the color transfer should be spatially variant and globally coherent; therefore, our algorithm optimizes a local linear model for color transfer satisfying both local and global constraints. Our approach jointly optimizes matching and color transfer, adopting a coarse-to-fine strategy. The proposed method can be successfully extended from "one-to-one" to "one-to-many" color transfer, with the latter further addressing the problem of mismatched elements in the input image. We validate our proposed method by testing it on a large variety of image content.
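A local linear color model of this kind typically maps each source patch's luminance to color through per-patch affine coefficients fit at matched locations, with a ridge term standing in for the global-coherence constraint. The NumPy sketch below works under those assumptions; the patch layout and regularization weight are hypothetical and not the paper's solver.

```python
import numpy as np

def fit_local_linear(src_lum, ref_color, eps=1e-3):
    """Fit a*L + b mapping patch luminance to one reference color channel.

    src_lum:   (N,) luminance values of one source patch.
    ref_color: (N,) matched reference values for one channel.
    eps:       ridge term that keeps the model smooth where the patch has
               little variance (a stand-in for the global constraint).
    """
    mu, var = src_lum.mean(), src_lum.var()
    a = ((src_lum * ref_color).mean() - mu * ref_color.mean()) / (var + eps)
    b = ref_color.mean() - a * mu
    return a, b

# Toy usage: one 5x5 patch, single color channel.
rng = np.random.default_rng(0)
lum = rng.random(25)
ref = 0.8 * lum + 0.1 + 0.01 * rng.standard_normal(25)
a, b = fit_local_linear(lum, ref)
print(round(a, 2), round(b, 2))  # close to 0.8 and 0.1
```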
Abstract. The incidence of thyroid cancer has recently increased rapidly in China, and papillary thyroid carcinoma (PTC) accounts for nearly 80% of human thyroid cancers. In the present study, differentially expressed microRNAs (miRNAs) and their target genes were identified in order to analyze the potential roles of miRNAs in papillary thyroid carcinogenesis and as biomarkers. One hundred and twenty-six PTC samples were collected from patients at the China-Japan Union Hospital, China, and the gene/miRNA expression profiles were examined with Illumina BeadChips and verified by real-time RT-PCR. Gene Ontology (GO) categories were determined, and pathway analysis was carried out using KEGG. miRNA target genes were predicted using three computational prediction programs: TargetScanS, DIANA-microT and PicTar. Two hundred and forty-eight miRNAs and 3,631 genes were found to be significantly deregulated (gene, P<0.05; miRNA, P<0.01) in PTC tissues compared with their matched normal thyroid tissues. hsa-miR-206 (target gene, MET), hsa-miR-299-3p (target gene, ITGAV), hsa-miR-101 (target gene, ITGA3), hsa-miR-103 (target gene, ITGA2), hsa-miR-222 (target genes, KIT and AXIN2), hsa-miR-15a (target genes, AXIN2 and FOXO1) and hsa-miR-221 (target gene, KIT) were identified. Together with the functions of these target genes, our results further elucidate the role of miRNAs in papillary thyroid carcinogenesis and suggest the use of miRNAs as biomarkers for early diagnosis. Our findings provide a basis for future studies in the field of miRNA-based cancer therapy.
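The target-prediction step can be sketched as intersecting the outputs of the three programs and keeping only targets that are also differentially expressed. This is a hypothetical helper: the data structures and the three-way consensus rule are assumptions, not the study's exact pipeline.

```python
def consensus_targets(targetscan, diana_microt, pictar, deg_set):
    """Return predicted targets supported by all three programs that are
    also in the differentially expressed gene (DEG) set.

    Each program argument maps a miRNA name to its set of predicted
    target genes; deg_set holds genes with P < 0.05 in the expression
    analysis.
    """
    mirnas = set(targetscan) & set(diana_microt) & set(pictar)
    return {
        mirna: targetscan[mirna] & diana_microt[mirna] & pictar[mirna] & deg_set
        for mirna in mirnas
    }

# Toy usage with one miRNA from the study (target sets are illustrative).
ts = {"hsa-miR-206": {"MET", "GENE_A"}}
dm = {"hsa-miR-206": {"MET", "GENE_B"}}
pt = {"hsa-miR-206": {"MET"}}
degs = {"MET", "ITGAV", "KIT"}
print(consensus_targets(ts, dm, pt, degs))  # {'hsa-miR-206': {'MET'}}
```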
Traditional technologies for recycling spent lithium-ion batteries (LIBs) mainly focus on reductive leaching, which often leads to total rather than selective leaching of metals. As a result, valuable metal ions, particularly Li+, are lost in subsequent extraction processes, causing low recycling efficiency of valuable metals. Inspired by the oxide-delithiation process in materials science, advanced oxidation processes (AOPs) are here introduced for the first time to selectively recover Li from spent LIBs during hydrometallurgical leaching (oxidative leaching), and a high Li recovery rate is achieved at an extremely high slurry density. In the AOPs, the sulfate radical (SO4•–) and hydroxyl radical (HO•), which have high oxidation potentials, are generated in situ by heat-activated persulfate; they prevent the leaching of Co2+ and Mn2+ while simultaneously promoting the leaching of Li. In addition, chemical leaching is coupled with the AOPs to compensate for their incomplete delithiation and further enhance the leaching of Li. Through this selective recovery, the Li extraction process is drastically shortened: a lithium-rich solution (18.2 g/L of Li+), suitable for directly preparing qualified lithium products, can be obtained in only two steps. The reaction mechanisms between the AOPs and spent LIBs are also comprehensively investigated. Ultimately, only 2.06% of the Li is lost in the purification processes, leading to a high recycling efficiency of Li, and Li2CO3 with a purity of 99.0% is obtained. Furthermore, the introduction of AOPs for selective metal extraction will show significant value not only in waste recycling but also in mineral resource utilization.
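The radical generation described above follows well-known heat-activated persulfate chemistry; the commonly cited reactions are written out below in LaTeX. The second, radical-interconversion step is standard AOP chemistry rather than a detail taken from this abstract.

```latex
\begin{align}
  \mathrm{S_2O_8^{2-}} &\xrightarrow{\ \Delta\ } 2\,\mathrm{SO_4^{\bullet -}} \\
  \mathrm{SO_4^{\bullet -}} + \mathrm{H_2O} &\longrightarrow \mathrm{HO^{\bullet}} + \mathrm{HSO_4^{-}}
\end{align}
```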