2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00913
Collaborative Global-Local Networks for Memory-Efficient Segmentation of Ultra-High Resolution Images

Abstract (truncated in source): [Figure 1: Inference memory and mean intersection over union (mIoU) accuracy on the DeepGlobe dataset [1]. Panels share the x-axis "mobile memory capacity". (a) Comparison of best achievable mIoU vs. memory for different segmentation methods. (b) mIoU/memory with different global image sizes (downsampling rate shown in scale annotations). (c) mIoU/memory with different local patch sizes.] …

Cited by 131 publications (116 citation statements) | References 35 publications
“…Most of these deep learning methods have only been tested on images with low to medium resolutions of up to a few megapixels. Chen et al. [15] regarded images with resolutions of up to 30 million pixels as ultra-high resolution. They proposed a method that integrates global downsampled images with local patches, although they only tested it on images of up to 30 million pixels.…”
Section: Related Work (mentioning; confidence: 99%)
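The global-local integration described in this excerpt amounts to feeding a model two streams: a downsampled view of the whole image and full-resolution local crops. A minimal sketch of preparing those inputs, where the function name, sizes, and nearest-neighbour resizing are illustrative assumptions rather than the paper's exact pipeline:

```python
import numpy as np

def global_local_inputs(image, patch_xy, patch_size=512, global_size=512):
    """Prepare the two streams used by global-local segmentation schemes:
    a downsampled copy of the whole image and a full-resolution local patch.
    Names and sizes are illustrative, not the paper's exact settings."""
    h, w = image.shape[:2]
    # Global stream: nearest-neighbour downsample of the entire image.
    ys = np.linspace(0, h - 1, global_size).astype(int)
    xs = np.linspace(0, w - 1, global_size).astype(int)
    global_view = image[np.ix_(ys, xs)]
    # Local stream: crop a full-resolution patch at the requested corner.
    y, x = patch_xy
    local_patch = image[y:y + patch_size, x:x + patch_size]
    return global_view, local_patch
```

Both streams fit in a fixed memory budget regardless of the source resolution, which is the point of the scheme: the global view supplies context, the patch supplies detail.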
“…However, the average resolution of our WSIs exceeds 2 gigapixels. The downsampling used in [15] cannot be directly transferred to our case because we have only 30 WSIs with proper annotations, indicating a shortage of training data, whereas the datasets used in their experiments contain more than 2,000 images. Besides, we need to preserve minute details of pathologies (such as plaques) and cells in our WSIs, while downsampling would unavoidably lose these important features.…”
Section: Related Work (mentioning; confidence: 99%)
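When downsampling would destroy fine detail, the usual alternative this excerpt alludes to is covering the slide with overlapping full-resolution tiles. A minimal sketch of generating such a tiling, assuming the slide is at least one tile in each dimension; the tile and overlap sizes are illustrative assumptions:

```python
def tile_coordinates(h, w, tile=1024, overlap=128):
    """Return (y, x) top-left corners of overlapping full-resolution tiles
    covering an (h, w) image, so that no pixel is lost to downsampling.
    Assumes h >= tile and w >= tile; sizes are illustrative."""
    step = tile - overlap
    ys = list(range(0, h - tile + 1, step))
    xs = list(range(0, w - tile + 1, step))
    # Add a final tile flush with each border so coverage is complete.
    if ys[-1] + tile < h:
        ys.append(h - tile)
    if xs[-1] + tile < w:
        xs.append(w - tile)
    return [(y, x) for y in ys for x in xs]
```

The overlap lets per-tile predictions be blended at tile borders, at the cost of processing more pixels than the image contains.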
See 1 more Smart Citation
“…A considerable amount of work has attempted to alleviate the subjectivity of tuning this trade-off by learning to merge multi-scale information, in both medical imaging [3] and computer vision [4,5]. These works optimise model performance by exploiting information from multi-scale sources.…”
Section: Introduction (mentioning; confidence: 99%)
“…Another is [6], which is designed to work specifically with histopathology images. In general computer vision, the authors of [5] boost the efficiency of multi-scale learning by having the global and local streams interactively exchange information with each other. On the other hand, Chen et al. [4] perform multi-scale feature aggregation via an attention mechanism that weights the prediction scores from multi-scale parallel networks.…”
Section: Introduction (mentioning; confidence: 99%)