2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00071
Deep Extreme Cut: From Extreme Points to Object Segmentation

Figure 1. Example results of DEXTR: the user provides the extreme clicks for an object, and the CNN produces the segmented masks.

Abstract: This paper explores the use of extreme points in an object (left-most, right-most, top, bottom pixels) as input to obtain precise object segmentation for images and videos. We do so by adding an extra channel to the image in the input of a convolutional neural network (CNN), which contains a Gaussian centered in each of the extreme points. The CNN learns to transform this information into a segmentation of an object that matches those extreme points.
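The extra input channel the abstract describes is easy to sketch. Below is a minimal NumPy illustration of building a heatmap with a Gaussian bump centered at each extreme click, which is then stacked with the RGB image as a fourth channel. The function name and the sigma value are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def extreme_point_heatmap(height, width, points, sigma=10.0):
    """Build a DEXTR-style extra input channel: a 2D map with a
    Gaussian centered at each extreme point.

    `points` is a list of (x, y) pixel coordinates, e.g. the
    left-most, right-most, top and bottom clicks. `sigma` is an
    illustrative choice; the paper's exact value may differ.
    """
    ys = np.arange(height)[:, None]  # column vector of row indices
    xs = np.arange(width)[None, :]   # row vector of column indices
    heatmap = np.zeros((height, width), dtype=np.float32)
    for (px, py) in points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # keep the strongest response per pixel
    return heatmap

# Usage: stack the heatmap as a fourth channel next to the RGB image.
# image: (H, W, 3) float array; clicks: four (x, y) extreme points.
# x = np.concatenate([image, extreme_point_heatmap(H, W, clicks)[..., None]], axis=-1)
```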

Cited by 376 publications (427 citation statements)
References 40 publications
“…This is especially tricky to do for three-dimensional objects, where the user typically has to navigate three multi-planar reformatted views (axial, coronal, sagittal) in order to achieve the task. Recent studies have also shown the time savings that extreme point selection brings for 2D object selection compared to traditional bounding box selection [16,17]. At the same time, extreme points provide additional information to the segmentation model (which can be observed in our experimental section, Table 1).…”
Section: Extreme Point Selection
confidence: 61%
“…This extra channel includes 3D Gaussians G centered on each point location clicked by the user. This approach is similar to [16], but here we extend it to 3D medical imaging problems. Figure 1 illustrates our approach.…”
Section: Extreme Point Selection
confidence: 99%
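The 3D extension this citation describes follows the same recipe with volumetric coordinates. A minimal sketch, assuming an isotropic Gaussian over a (D, H, W) volume; the helper name and sigma value are hypothetical.

```python
import numpy as np

def extreme_point_heatmap_3d(shape, points, sigma=5.0):
    """3D analogue of the extra input channel: place an isotropic
    Gaussian at each clicked voxel of a (D, H, W) volume.
    `points` holds (z, y, x) voxel coordinates; sigma is illustrative."""
    zs, ys, xs = np.ogrid[:shape[0], :shape[1], :shape[2]]
    heatmap = np.zeros(shape, dtype=np.float32)
    for (pz, py, px) in points:
        g = np.exp(-((zs - pz) ** 2 + (ys - py) ** 2 + (xs - px) ** 2)
                   / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # strongest response per voxel
    return heatmap
```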
“…Interactive Segmentation. Recent advances in interactive segmentation (e.g., [1,38,2]) utilize neural networks to convert sparse human inputs into high-quality segments. For novel domains without large-scale training data, block-annotated images can act as cost-efficient seed data to train these models.…”
Section: Compatibility With Existing Annotation Methods
confidence: 99%
“…Matting and object selection [50,33,34,6,58,57,10,30,59] generate tight boundaries from loosely annotated boundaries or a few inside/outside clicks and scribbles. [44,38] introduced a predictive method which automatically infers a foreground mask from 4 boundary clicks, and was extended to full-image segmentation in [2]. The number of boundary clicks was further reduced to as few as one by [1].…”
Section: Related Work
confidence: 99%
“…Contrary to the two-stream approach, deep extreme cut [138] takes a single pipeline to create segmentation maps from RGB images. This method expects 4 points from the user denoting the four extreme regions on the boundary of the object (leftmost, rightmost, topmost, bottommost).…”
Section: Deep Extreme Cut
confidence: 99%
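During training, the four extreme clicks are typically simulated from ground-truth masks rather than collected from users. A minimal sketch of that simulation step, assuming a binary NumPy mask; the helper name is hypothetical, and the paper also reports perturbing the simulated points to mimic user noise, which this sketch omits.

```python
import numpy as np

def extreme_points_from_mask(mask):
    """Simulate the four user clicks from a binary mask: the
    left-most, right-most, top-most and bottom-most foreground
    pixels, returned as (x, y) coordinates."""
    ys, xs = np.nonzero(mask)          # row (y) and column (x) indices of foreground
    left   = (xs.min(), ys[xs.argmin()])
    right  = (xs.max(), ys[xs.argmax()])
    top    = (xs[ys.argmin()], ys.min())
    bottom = (xs[ys.argmax()], ys.max())
    return [left, right, top, bottom]
```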