Supervised deep learning techniques have achieved great success in various fields because they are free from the limitations of handcrafted representations. However, most previous image retargeting algorithms still rely on fixed design principles, such as using gradient maps or handcrafted features to compute saliency maps, which inevitably restricts their generality. Deep learning techniques may help to address this issue, but the challenge is that training deep retargeting models requires a large-scale image retargeting dataset, and building such a dataset demands enormous human effort. In this paper, we propose a novel deep cyclic image retargeting approach, called Cycle-IR, which is the first to implement image retargeting with a single deep model, without relying on any explicit user annotations. Our idea is built on the reverse mapping from the retargeted images to the given input images. If the retargeted image has serious distortion or excessive loss of important visual information, the reverse mapping is unlikely to restore the input image well. We constrain this forward-reverse consistency by introducing a cyclic perception coherence loss. In addition, we propose a simple yet effective image retargeting network (IRNet) to implement the image retargeting process. Our IRNet contains a spatial and channel attention layer, which can effectively identify visually important regions of input images, especially in cluttered scenes. Given input images of arbitrary sizes and desired aspect ratios, our Cycle-IR can produce visually pleasing target images directly. Extensive experiments on the standard RetargetMe dataset show the superiority of our Cycle-IR. In addition, our Cycle-IR outperforms the Multiop method and obtains the best result in the user study. Code is available at https://github.com/mintanwei/Cycle-IR.
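As a rough illustration of the forward-reverse consistency idea, here is a minimal sketch of a cyclic perception coherence loss in PyTorch. The networks `retarget_net` and `reverse_net` are hypothetical stand-ins for IRNet and its reverse mapping, and the fixed VGG-16 feature extractor is an assumption rather than the paper's exact perceptual features.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Fixed perceptual feature extractor (an assumption; the paper may use
# different features). Frozen so only the retargeting networks are trained.
vgg_features = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def cyclic_coherence_loss(x, retarget_net, reverse_net):
    """Forward-reverse consistency: retarget x, map the result back to the
    input size, and compare deep features of the reconstruction with x."""
    y = retarget_net(x)      # forward mapping to the target aspect ratio
    x_rec = reverse_net(y)   # reverse mapping back to the input size
    # A feature-space distance penalizes loss of visually important content
    # more directly than a raw pixel-wise difference would.
    return F.l1_loss(vgg_features(x_rec), vgg_features(x))
```

The key design choice this captures is that the reconstruction is scored perceptually: a retargeted image that discards salient content cannot be mapped back to something perceptually close to the input, so minimizing this loss pushes the retargeting network to preserve important regions.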
Fluorescence microscopy image restoration (FMIR) has received wide attention in the life sciences and has made significant progress, benefiting from deep learning (DL) technology. However, most current DL-based FMIR methods must train a task-specific deep model from scratch on a specific dataset for each FMIR problem, such as super-resolution (SR), denoising, isotropic reconstruction, projection, and volume reconstruction. The performance and practicality of these FMIR models are limited by cumbersome training, the difficulty of obtaining high-quality training images, and poor generalization. Recently, pre-trained foundation models have achieved significant breakthroughs in computer vision (CV) and natural language processing (NLP), demonstrating the power of the pre-training and fine-tuning paradigm. Here, inspired by the success of pre-trained foundation models in artificial intelligence (AI), we provide a universal solution for different FMIR problems by presenting a unified FMIR foundation model (UniFMIR), achieving higher image precision, better generalization, and efficient, low-cost training of task-specific models. Experimental results on five FMIR tasks and nine datasets, covering a wide range of fluorescence microscopy imaging modalities and biological samples, demonstrate the strong capability of UniFMIR to handle various FMIR situations with a single model. Pre-trained on the large-scale dataset we collected, UniFMIR can effectively transfer the knowledge learned during pre-training to a specific FMIR situation through fine-tuning, obtaining significant performance improvements, uncovering clear nanoscale cell structures, and facilitating high-quality imaging of live samples. This work is the first to explore the potential of foundation models for FMIR. We hope it inspires more researchers to further explore DL-based FMIR and triggers new research directions in pre-training and developing FMIR models.
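For intuition, the following is a minimal sketch of the pre-training/fine-tuning workflow the abstract describes, in PyTorch. The `UniFMIRModel` class, the checkpoint filename, and the frozen-backbone fine-tuning strategy are illustrative assumptions, not the authors' actual architecture or API.

```python
import torch
import torch.nn as nn

class UniFMIRModel(nn.Module):
    """Hypothetical stand-in for the foundation model: a shared backbone
    learned during pre-training plus a lightweight task-specific head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, x):
        return self.head(self.backbone(x))

model = UniFMIRModel()
# Load weights from large-scale pre-training (the filename is illustrative).
model.load_state_dict(torch.load("unifmir_pretrained.pth"))

# Fine-tune for one FMIR task (e.g. denoising): freeze the pre-trained
# backbone and update only the head, which keeps training cheap.
for p in model.backbone.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Toy stand-in for a dataloader yielding (degraded, ground-truth) pairs.
dataloader = [(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))]
for noisy, clean in dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
```

The point of the sketch is the division of labor: the expensive representation learning happens once during pre-training, while adapting to a new FMIR task only requires fine-tuning a small part of the model on the task's data.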