2020
DOI: 10.1007/978-3-030-58548-8_40
Pixel-Pair Occlusion Relationship Map (P2ORM): Formulation, Inference and Application

Abstract: We formalize concepts around geometric occlusion in 2D images (i.e., ignoring semantics), and propose a novel unified formulation of both occlusion boundaries and occlusion orientations via a pixel-pair occlusion relation. The former provides a way to generate large-scale accurate occlusion datasets while, based on the latter, we propose a novel method for task-independent pixel-level occlusion relationship estimation from single images. Experiments on a variety of datasets demonstrate that our method outperfo…

Cited by 5 publications (5 citation statements)
References 56 publications
“…III-A, the surface-pair occlusion relationship in 3D can be represented as the pixel-pair occlusion relationship in 2D. For each valid image pixel pair (q_i, q_j), the neural network P2ORNet [15] classifies the occlusion relationship into one of three possible statuses: q_i occludes q_j, q_j occludes q_i, or no occlusion between q_i and q_j. The estimated image occlusion edge lies in the image region where P2ORNet predicts that occlusion exists between pixel pairs.…”
Section: B. Occlusion Edge Extraction In Images (mentioning)
confidence: 99%
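The three-way pixel-pair classification described in this citation statement can be sketched as follows. This is a minimal illustration, not the P2ORNet implementation: the function names (`classify_pair`, `occlusion_edge_mask`) and the class ordering are hypothetical; only the three-status formulation comes from the cited work.

```python
import numpy as np

# Hypothetical label ordering for the three possible statuses of a
# pixel pair (q_i, q_j); the assignment of indices to statuses is an
# assumption for this sketch.
LABELS = ("q_i occludes q_j", "q_j occludes q_i", "no occlusion")

def classify_pair(scores):
    """Pick the occlusion status with the highest score (argmax over 3 classes)."""
    scores = np.asarray(scores, dtype=float)
    assert scores.shape == (3,), "one score per possible status"
    return LABELS[int(np.argmax(scores))]

def occlusion_edge_mask(score_map):
    """Mark pixels where an occlusion class beats 'no occlusion'.

    score_map: (H, W, 3) array of per-pair scores; classes 0 and 1 are
    the two occlusion directions, class 2 is 'no occlusion'. The edge
    lies where some occlusion relationship is predicted.
    """
    labels = np.argmax(score_map, axis=-1)
    return labels != 2
```

For example, `classify_pair([0.1, 0.7, 0.2])` selects the second status, and `occlusion_edge_mask` yields the binary edge region mentioned in the statement.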
“…3) Network Training: The P2ORNet is trained with a class-balanced cross-entropy loss [15], taking into account the low class frequency of pixels on image occlusion edges (cf. Sec.…”
Section: B. Occlusion Edge Extraction In Images (mentioning)
confidence: 99%
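A class-balanced cross-entropy loss of the kind mentioned here can be sketched with weights inversely proportional to per-class frequency in the batch, so that rare classes (occlusion-edge pixels) are not drowned out. This is a generic sketch of the weighting idea, not the loss from [15]; the function name and the normalization scheme are assumptions.

```python
import numpy as np

def class_balanced_ce(probs, labels, n_classes=3, eps=1e-12):
    """Cross-entropy with per-class weights inversely proportional to
    the class frequency observed in the batch.

    probs:  (N, C) predicted class probabilities per sample.
    labels: (N,)   integer class ids in [0, n_classes).
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    # weight_c ∝ 1 / freq_c; classes absent from the batch get weight 0
    weights = np.where(counts > 0, 1.0 / np.maximum(freq, eps), 0.0)
    weights /= weights.sum()  # normalize so losses are comparable across batches
    # standard negative log-likelihood of the true class, then reweight
    per_sample = -np.log(np.maximum(probs[np.arange(len(labels)), labels], eps))
    return float(np.mean(weights[labels] * per_sample))
```

With mostly "no occlusion" labels in a batch, the few occlusion-pixel samples receive the larger weight, which is the imbalance the citation statement refers to.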
“…Structures have been successfully used in some learning-based methods [35,12,27,29,26]. Wang et al [35] propose to learn a Manhattan Label Map from the input RGB image and its corresponding Manhattan line map for normal estimation.…”
Section: Structure Guided Learning (mentioning)
confidence: 99%