2022
DOI: 10.1021/acsphotonics.2c00932
Virtual Stain Transfer in Histology via Cascaded Deep Neural Networks

Abstract: Pathological diagnosis relies on the visual inspection of histologically stained thin tissue specimens, where different types of stains are applied to bring contrast to and highlight various desired histological features. However, the destructive histochemical staining procedures are usually irreversible, making it very difficult to obtain multiple stains on the same tissue section. Here, we demonstrate a virtual stain transfer framework via a cascaded deep neural network (C-DNN) to digitally transform hematox…

Cited by 18 publications (7 citation statements); references 38 publications (69 reference statements).
“…Various network structures have been reported for virtual staining, among which the generative adversarial network (GAN) [77] is one of the most commonly and widely used frameworks due to its strong representation capability [18, 20–23, 31, 32, 36–38, 48, 51, 56, 59, 78, 79]. Compared to non-GAN-based inference models, GANs can generate relatively higher-resolution and perceptually more realistic images [13, 14, 59].…”
Section: Network Architecture and Training Strategies (mentioning)
confidence: 99%
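To make the phrase "GAN framework for virtual staining" concrete, here is a minimal, hedged sketch, assuming PyTorch, of the two components such a framework pairs: a generator that maps an input image to a virtually stained image, and an input-conditioned patch discriminator that judges input/output pairs. The architectures, channel counts, and class names below are illustrative assumptions, not the networks used in the paper or the citing works.

```python
# Minimal illustrative stand-ins for the encoder-decoder generators and
# patch discriminators commonly used for virtual staining. Architectures,
# channel counts, and class names are assumptions for illustration only.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Maps an input image (e.g., autofluorescence or H&E) to a virtually stained RGB image."""
    def __init__(self, in_ch: int = 3, out_ch: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh(),  # outputs scaled to [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class TinyPatchDiscriminator(nn.Module):
    """Scores (input, staining) pairs patch-wise as real (histochemical) or fake (virtual)."""
    def __init__(self, in_ch: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),  # map of patch-wise realism logits
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Condition on the input image by channel-wise concatenation.
        return self.net(torch.cat([x, y], dim=1))
```

In practice the generator is usually a much deeper encoder-decoder (e.g., U-Net-style) network; the input-conditioned discriminator shown here is what makes the setup a conditional GAN rather than a plain image generator.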
“…However, in the standard GAN framework, where the Generator is optimized solely by an adversarial loss, the resulting Generator only mimics the colors and patterns of the target images without learning the underlying correspondence between the input and the target images, resulting in severe hallucinations at the micro-scale [19]. To overcome this hallucination problem, various pixel-wise loss functions, such as mean absolute error (MAE) [18, 21, 22, 32, 36, 37, 51, 56, 59], mean square error (MSE) [18, 79], SSIM [31, 82], Huber loss [31], reversed Huber loss [23], and color distance metrics [56], are incorporated into the Generator loss terms (in addition to the Discriminator loss) to regularize the GAN training; these additional loss terms are calculated using the virtually generated images and their corresponding ground truth (histochemically stained) images. Moreover, image regularization terms such as total variation [83] were also exploited in some works to eliminate or suppress different types of image artifacts created by the Generator [18, 20–22, 31].…”
Section: Network Architecture and Training Strategies (mentioning)
confidence: 99%
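The regularization strategy described in the statement above, an adversarial term plus pixel-wise and image-regularization terms computed against the histochemically stained ground truth, can be sketched as a composite generator objective. This is a hedged illustration, assuming PyTorch and a least-squares adversarial formulation; the weights lambda_mae and lambda_tv are placeholders, not values taken from the cited works.

```python
# Illustrative composite generator loss: adversarial + MAE + total variation.
# The loss weights and the LSGAN formulation are assumptions for this sketch.
import torch
import torch.nn.functional as F

def total_variation(img: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation, used to suppress generator image artifacts."""
    tv_h = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    tv_w = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return tv_h + tv_w

def generator_loss(discriminator, x, fake, target,
                   lambda_mae: float = 100.0, lambda_tv: float = 0.02) -> torch.Tensor:
    # Adversarial term: push the discriminator to label generated patches as real.
    pred_fake = discriminator(x, fake)
    adv = F.mse_loss(pred_fake, torch.ones_like(pred_fake))

    # Pixel-wise term (MAE) against the histochemically stained ground truth;
    # this anchors the output to the input/target correspondence and counters
    # the micro-scale hallucinations described above.
    mae = F.l1_loss(fake, target)

    # Total-variation regularization to suppress high-frequency artifacts.
    tv = total_variation(fake)

    return adv + lambda_mae * mae + lambda_tv * tv
```

Other pixel-wise terms mentioned in the statement (MSE, SSIM, Huber, reversed Huber, color-distance metrics) would slot into the same place as the MAE term; the discriminator is trained separately with its own real/fake loss.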
“…Meanwhile, specificity assignment (automatic identification of various organelles) is also necessary. Among machine learning approaches, using deep learning to give specificity to label-free imaging is an attractive option, since deep learning has achieved wide-ranging success in microscopy, including enabling super-resolution, digital staining of tissues, and image restoration, among other applications. However, the majority of label-free image segmentation is limited to the whole-cell level for the purposes of counting, classification, dry mass measurement, etc. Thus, individual organelles are not resolved.…”
Section: Introduction (mentioning)
confidence: 99%