2022
DOI: 10.1109/tmi.2022.3194984

A Unified Deep Learning Framework for ssTEM Image Restoration

Cited by 3 publications (3 citation statements)
References 43 publications
“…The authors also employ a multi-scale training strategy, dividing the training into two stages with progressively larger input images, first $256\times 256\times 32$, and then $320\times 320\times 36$. For pre-processing, the authors apply denoising using their own image restoration network [45], which is trained on patches of size $256\times 256\times 3$. During testing, coarse noisy regions in the test sets are manually selected and restored using the trained interpolation network before performing segmentation.…”
Section: Summary of Segmentation Methods
Mentioning, confidence: 99%
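
The two-stage schedule quoted above can be illustrated with a short sketch. Only the patch sizes ($256\times 256\times 32$, then $320\times 320\times 36$) come from the citation; the PyTorch framing, the model, data loader, optimizer, and epoch counts below are placeholder assumptions, not the authors' code.

```python
# Minimal sketch of a two-stage training schedule with progressively larger
# input patches. Patch sizes follow the citation; everything else is assumed.
import torch
import torch.nn.functional as F

PATCH_SCHEDULE = [
    ((32, 256, 256), 10),   # stage 1: (depth, height, width), epochs (assumed)
    ((36, 320, 320), 5),    # stage 2: larger patches for fine-tuning
]

def train_multiscale(model, make_loader, optimizer, device="cpu"):
    """make_loader(patch_size) is assumed to yield (volume, target) batches."""
    model.to(device).train()
    for patch_size, epochs in PATCH_SCHEDULE:
        loader = make_loader(patch_size)
        for _ in range(epochs):
            for volume, target in loader:
                volume, target = volume.to(device), target.to(device)
                optimizer.zero_grad()
                loss = F.binary_cross_entropy_with_logits(model(volume), target)
                loss.backward()
                optimizer.step()
```
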
“…• VIDAR (USTC) 6 For pre-processing, the authors apply denoising using their own image restoration network [45], which is trained on patches of size 256 × 256 × 3. During testing, coarse noisy regions in the test sets are manually selected and restored using the trained interpolation network before performing segmentation.…”
Section: B. Participants' Methods
Mentioning, confidence: 99%
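
As an illustration of how such patch-wise restoration could be applied to a manually selected noisy region before segmentation, here is a minimal sketch. The tiling scheme and the assumption that the network maps three consecutive 256 × 256 sections to a restored middle section are inferred from the 256 × 256 × 3 patch size mentioned above; the network itself and all names are hypothetical, not the authors' pipeline.

```python
# Minimal sketch: tile a selected noisy region into 256x256 windows of three
# consecutive sections, run a placeholder restoration network on each tile,
# and write the restored middle section back. Illustrative only.
import numpy as np
import torch

def restore_region(region: np.ndarray, net: torch.nn.Module, patch: int = 256) -> np.ndarray:
    """region: (Z, H, W) grayscale stack; returns a copy with interior sections
    restored tile by tile (borders smaller than `patch` are skipped)."""
    z, h, w = region.shape
    out = region.astype(np.float32).copy()
    net.eval()
    with torch.no_grad():
        for zi in range(1, z - 1):                      # middle of each 3-section window
            for y in range(0, h - patch + 1, patch):
                for x in range(0, w - patch + 1, patch):
                    tile = region[zi - 1:zi + 2, y:y + patch, x:x + patch]
                    inp = torch.from_numpy(tile).float().unsqueeze(0)  # (1, 3, 256, 256)
                    pred = net(inp)                     # assumed output: (1, 1, 256, 256)
                    out[zi, y:y + patch, x:x + patch] = pred[0, 0].cpu().numpy()
    return out
```
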
“…Moreover, they used a weighted binary cross-entropy loss function to compensate for the class imbalance and deployed a multi-scale training strategy to train the network in two stages with progressively larger input images. For pre-processing, denoising was performed with their own image restoration network [41]. Finally, the semantic masks and instance boundaries are used to create a seed map to perform hierarchical agglomeration [42] and extract individual instances.…”
Section: A. Participants' Methods
Mentioning, confidence: 99%
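
The weighted binary cross-entropy loss mentioned here is a standard remedy for foreground/background imbalance. A minimal sketch follows; the inverse-frequency weighting is an assumption, since the citation only states that a weighted BCE loss was used.

```python
# Minimal sketch of a weighted binary cross-entropy loss for class imbalance.
# The inverse-frequency weighting of the positive class is an assumed choice.
import torch
import torch.nn.functional as F

def weighted_bce_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    pos_frac = targets.mean().clamp(min=1e-6, max=1 - 1e-6)   # fraction of positive voxels
    pos_weight = (1 - pos_frac) / pos_frac                    # up-weight the rarer class
    return F.binary_cross_entropy_with_logits(logits, targets, pos_weight=pos_weight)

# Example on a random 3-D patch with sparse positive (boundary) labels
logits = torch.randn(2, 1, 32, 256, 256)
targets = (torch.rand_like(logits) > 0.9).float()
print(weighted_bce_loss(logits, targets).item())
```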