2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00864
Recurrent Saliency Transformation Network: Incorporating Multi-stage Visual Cues for Small Organ Segmentation

Abstract: We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach [46], which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with two stages individually, which lack…
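The coarse-to-fine scheme the abstract describes amounts to using the coarse-stage mask to define a cropped input region for the fine stage. Below is a minimal NumPy sketch of that cropping step; the function name, the margin parameter, and the empty-mask fallback are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def crop_from_coarse_mask(volume, coarse_mask, margin=20):
    """Crop the fine-stage input region from a coarse-stage prediction.

    volume      -- 3D CT array of shape (D, H, W)
    coarse_mask -- binary coarse-stage prediction, same shape
    margin      -- padding (in voxels) around the predicted bounding box
    """
    coords = np.argwhere(coarse_mask > 0)
    if coords.size == 0:                  # coarse stage predicted nothing:
        return volume                     # fall back to the full volume
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```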

Cited by 196 publications (201 citation statements)
References 36 publications
“…Comparisons with State-of-the-Art Segmentation Algorithms: Comparisons against state-of-the-art volumetric segmentation algorithms are reported in Table 1. According to output type, we classify them into three categories: 3D models, which predict 3D probability maps directly (such as UNet-Patch [8] and UNet-Full [9]); 2D models, which produce 2D segmentation results over slices in the axial view (such as FCN8s [5]); and Pseudo-3D (P3D) models, which fuse 2D segmentation results from the axial, sagittal and coronal views (such as RSTN [11]). Our globally guided progressive fusion network (GGPFN) can be easily integrated into the 2D and P3D segmentation frameworks.…”
Section: Results (mentioning)
Confidence: 99%
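The Pseudo-3D fusion this excerpt refers to can be sketched as below. Averaging the three view-wise probability volumes is one plausible rule (majority voting over binarized maps is a common alternative); the names are illustrative, not taken from RSTN or GGPFN.

```python
import numpy as np

def fuse_pseudo_3d(prob_axial, prob_sagittal, prob_coronal, threshold=0.5):
    """Fuse three view-wise probability volumes into one 3D mask.

    Each input is a (D, H, W) array built by running a 2D network
    slice-by-slice along one anatomical axis and stacking the outputs.
    """
    fused = (prob_axial + prob_sagittal + prob_coronal) / 3.0
    return fused > threshold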
“…Precision-recall curves and F-scores of 3D UNet-Patch [8], 3D UNet-Full [9], P3D FCN8s [12], P3D RSTN [11] and our final model P3D GGPFN are presented in Fig. 4.…”
Section: Precision-recall Curves (mentioning)
Confidence: 99%
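For reference, the voxel-wise precision, recall and F-score behind such curves can be computed as below; this is a generic sketch, not the cited paper's evaluation code. A precision-recall curve is then traced by sweeping the binarization threshold on the probability map.

```python
import numpy as np

def precision_recall_f(pred, gt):
    """Voxel-wise precision, recall and F-score for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)   # guard against empty prediction
    recall = tp / max(gt.sum(), 1)        # guard against empty ground truth
    f = 2 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f
```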
“…Since the focus of this paper is how to combine g(X) and X in f(·), and the two stages are executed separately, the form of g(·) is beyond the scope of this study and will be investigated in the future. In this paper we choose a recent state-of-the-art segmentation framework [11] for g. Since g(·) is a 2D-based method, we need to concatenate the outputs for different slices to reconstruct the 3D volume, as in [10]. We train the segmentation algorithm on X1 and test it on X2.…”
Section: The Segmentation Stage (mentioning)
Confidence: 99%
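The slice-wise reconstruction and two-stage combination this excerpt describes can be sketched as follows. Treating f(·)'s input as a channel-wise concatenation of X and g(X) is our assumption for illustration only, since the excerpt deliberately leaves the form of f(·) open.

```python
import numpy as np

def reconstruct_volume(slices_2d):
    """Stack per-slice 2D outputs back into a 3D volume of shape (D, H, W)."""
    return np.stack(slices_2d, axis=0)

def fine_stage_input(X, g):
    """Build the second-stage input from the image X and g(X).

    g is assumed to map a 2D slice (H, W) to a 2D probability map;
    channel-wise concatenation is one plausible way f(.) could consume both.
    """
    gX = reconstruct_volume([g(X[z]) for z in range(X.shape[0])])
    return np.stack([X, gX], axis=0)      # shape (2, D, H, W)
```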