2020
DOI: 10.1109/tmi.2020.2968504

Global Pixel Transformers for Virtual Staining of Microscopy Images

Abstract: Visualizing the details of different cellular structures is of great importance to elucidate cellular functions. However, it is challenging to obtain high-quality images of different structures directly due to complex cellular environments. Fluorescence staining is a popular technique to label different structures but has several drawbacks. In particular, label staining is time-consuming and may affect cell morphology, and simultaneous labels are inherently limited. This raises the need for building computation…

Cited by 30 publications (25 citation statements)
References 32 publications (40 reference statements)
“…The self-attention mechanism [24], [33] has achieved great success in various domains, including natural language processing [34], [35] and computer vision [21], [22], [36]. Based upon the self-attention mechanism, we propose a novel method, known as the feature augmentor (FA), to improve cleft feature representations.…”
Section: Feature Augmentation
confidence: 99%
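For context, the self-attention mechanism referenced in the statement above computes pairwise interactions between all positions of the input, so every output feature can draw on global information. The sketch below is a minimal, generic scaled dot-product self-attention in PyTorch; it is not the cited feature augmentor (FA), and all tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Generic scaled dot-product self-attention over a set of feature vectors.

    x:             (n, c) input features (n positions, c channels)
    w_q, w_k, w_v: (c, d) learned projection matrices
    """
    q = x @ w_q                                        # queries derived from the input
    k = x @ w_k                                        # keys derived from the input
    v = x @ w_v                                        # values derived from the input
    scores = q @ k.transpose(0, 1) / (k.shape[-1] ** 0.5)
    attn = F.softmax(scores, dim=-1)                   # each position attends to all positions
    return attn @ v                                    # globally aggregated features

# usage with illustrative sizes: 64 positions, 32-channel features, 16-dim projections
x = torch.randn(64, 32)
w_q, w_k, w_v = (torch.randn(32, 16) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)                 # (64, 16)
```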
“…Next, we consider the query tensor Q in our proposed FA. Existing studies [21], [22], [36] use the same strategies to generate Q as K and V, making it input-dependent. In this work, we instead formulate Q ∈ R^(d_q × h_q × w_q × c_q) as a learnable tensor containing free parameters, where d_q, h_q, w_q, and c_q denote the depth, height, width, and number of feature maps, respectively.…”
Section: Feature Augmentation
confidence: 99%
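The statement above contrasts input-dependent queries with a query tensor held as free parameters. The PyTorch sketch below illustrates that learnable-query idea under simplifying assumptions: the spatial dimensions are flattened to a single axis, and the sizes (in_channels, d, n_queries) are hypothetical, so it is not the authors' exact feature augmentor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableQueryAttention(nn.Module):
    """Attention where K and V come from the input but Q is a free parameter.

    The query tensor is learned directly rather than projected from the input,
    making it input-independent, as described in the citation statement above.
    """
    def __init__(self, in_channels, d, n_queries):
        super().__init__()
        self.to_k = nn.Linear(in_channels, d)
        self.to_v = nn.Linear(in_channels, d)
        # Q holds free parameters; its values do not depend on the input.
        self.q = nn.Parameter(torch.randn(n_queries, d) * 0.02)

    def forward(self, x):
        # x: (n, in_channels) flattened spatial positions of a feature map
        k = self.to_k(x)                                            # (n, d)
        v = self.to_v(x)                                            # (n, d)
        scores = self.q @ k.transpose(0, 1) / (k.shape[-1] ** 0.5)  # (n_q, n)
        attn = F.softmax(scores, dim=-1)
        return attn @ v                                             # (n_q, d)

# usage with hypothetical sizes
layer = LearnableQueryAttention(in_channels=32, d=16, n_queries=8)
feats = torch.randn(64, 32)          # 64 positions, 32 channels
out = layer(feats)                   # (8, 16) augmented features
```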
“…Oktay et al. [22] incorporated prior anatomical knowledge into CNNs through a novel regularization model. Since fusion can improve the performance in many ways [23], [24], [25], Liu et al. [26] proposed a novel network layer that effectively fuses the global information from the input, and a novel multi-scale input strategy that acquires multi-scale features. Li et al. [27] proposed a novel 3D self-attention CNN for the low-dose CT denoising problem; the structure acquired more spatial information.…”
Section: Related Work
confidence: 99%
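The description of Liu et al. [26] above mentions a network layer that fuses global information from the input. The sketch below is a rough approximation of such a layer, not the published Global Pixel Transformer layer or its multi-scale input strategy: 1×1 convolutions form queries, keys, and values, and attention over the flattened spatial grid gives every output pixel a whole-image receptive field. The class name and sizes are invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalFusionLayer(nn.Module):
    """Rough sketch: every output pixel aggregates information from all pixels.

    1x1 convolutions produce queries, keys, and values; attention is computed
    over the flattened spatial grid, so the receptive field is the whole image.
    """
    def __init__(self, channels, d):
        super().__init__()
        self.to_q = nn.Conv2d(channels, d, kernel_size=1)
        self.to_k = nn.Conv2d(channels, d, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # x: (b, c, h, w) feature map
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2).transpose(1, 2)              # (b, h*w, d)
        k = self.to_k(x).flatten(2)                              # (b, d, h*w)
        v = self.to_v(x).flatten(2).transpose(1, 2)              # (b, h*w, c)
        attn = F.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)   # (b, h*w, h*w)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return out + x                                           # residual connection

# usage: fuse global context into a 32-channel feature map
layer = GlobalFusionLayer(channels=32, d=16)
fmap = torch.randn(1, 32, 16, 16)
fused = layer(fmap)                  # (1, 32, 16, 16)
```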