2019
DOI: 10.1101/848077
Preprint

Unsupervised content-preserving transformation for optical microscopy

Abstract: The advent of deep learning and the open access to a substantial collection of imaging data provide a potential solution to computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current deep-learning implementations usually operate in a supervised manner, and the reliance on a laborious and error-prone data annotation procedure remains a barrier towards more general applicability. Here, we propose an unsupervised image transformation …


Cited by 4 publications (5 citation statements, all classified as mentioning), published in 2021 and 2022. References 54 publications.
“…To overcome this, amalgamation of images from different modalities, such as phase contrast or quantitative phase imaging 45, preferably on the same microscope platform, may be particularly useful 46. Additionally, it may be instructive to compare or integrate our approach with recent studies making use of deep learning to perform artificial fluorescent labelling 47.…”
Section: Discussion (mentioning)
confidence: 99%
“…The Adam optimizer was used to optimize network parameters (61). The initial learning rate was 0.0002, which decayed linearly every 50 iterations with a rate of 0.99.…”
Section: UTOM Methods (mentioning)
confidence: 99%
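The training schedule cited above (Adam, initial learning rate 0.0002, decayed every 50 iterations with a rate of 0.99) can be sketched as follows. This is a minimal illustration assuming a PyTorch setup; the placeholder network, iteration count, dummy loss, and the use of StepLR (multiplying the learning rate by 0.99 every 50 steps) are assumptions for illustration, not the authors' UTOM code.

# Minimal sketch of the cited schedule, assuming PyTorch.
# Placeholder model and loss; StepLR interprets the stated decay as
# multiplying the learning rate by 0.99 every 50 iterations (assumption).
import torch

model = torch.nn.Conv2d(1, 64, kernel_size=3, padding=1)    # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)    # initial learning rate 0.0002
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.99)

for iteration in range(500):                                 # assumed iteration count
    optimizer.zero_grad()
    dummy_batch = torch.randn(4, 1, 64, 64)                  # stand-in for a training batch
    loss = model(dummy_batch).pow(2).mean()                  # stand-in for the real training loss
    loss.backward()
    optimizer.step()
    scheduler.step()                                         # decay takes effect every 50 steps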
“…To translate the label-free UV images into virtual H&E images, we apply a recently developed unsupervised content-preserving transformation for optical microscopy (UTOM) deep neural network (61). UTOM adapts the general framework of cycle-consistent generative adversarial networks (Cycle-GAN), which can transform images from one domain into another without requiring pixel-level paired data.…”
Section: UTOM for Label-free H&E Colorization with UV Microscopy (mentioning)
confidence: 99%
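For context, the cycle-consistency constraint at the core of Cycle-GAN-style unpaired translation can be sketched as below, assuming PyTorch. The generator names G_ab and G_ba and the weight lambda_cyc are illustrative assumptions, and the content-preserving (saliency) constraint described in the preprint is omitted; this is not the authors' implementation.

# Minimal sketch of the cycle-consistency loss behind Cycle-GAN-style
# unpaired translation (assumed PyTorch; names are illustrative).
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G_ab, G_ba, real_a, real_b, lambda_cyc=10.0):
    # L1 reconstruction penalty after a round trip through both generators.
    rec_a = G_ba(G_ab(real_a))      # A -> B -> A
    rec_b = G_ab(G_ba(real_b))      # B -> A -> B
    return lambda_cyc * (F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b))

# Example usage with identity "generators" as stand-ins:
identity = torch.nn.Identity()
a = torch.randn(1, 1, 64, 64)
b = torch.randn(1, 1, 64, 64)
print(cycle_consistency_loss(identity, identity, a, b))     # tensor(0.) for identity maps

Because the round-trip penalty only needs unpaired samples from each domain, no pixel-aligned image pairs are required, which is the property the citing study relies on for label-free H&E colorization.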
“…3,5). We believe that integrating weakly-/semi-supervised data or introducing a saliency constraint 41 would help to address this problem, further improving the accuracy of Deep-CHAMP images. Virtual staining through unsupervised learning should be systematically investigated in the future to enable a faithful conversion, which, however, is beyond the scope of this study.…”
Section: Figure 6 | Distributions of Nuclear Features Extracted from Deep-CHAMP and Clinical Standard Images (a–c) (mentioning)
confidence: 99%