2018
DOI: 10.1007/978-3-030-01231-1_22

PARN: Pyramidal Affine Regression Networks for Dense Semantic Correspondence

Abstract: This paper presents a deep architecture for dense semantic correspondence, called pyramidal affine regression networks (PARN), that estimates locally-varying affine transformation fields across images. To deal with intra-class appearance and shape variations that commonly exist among different instances within the same object category, we leverage a pyramidal model where affine transformation fields are progressively estimated in a coarse-to-fine manner so that the smoothness constraint is naturally imposed wi…
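The abstract describes the core mechanism: a dense, locally-varying affine transformation field estimated progressively from coarse to fine. As a rough illustration only, the NumPy sketch below shows how such a field can be represented as one 2×3 affine matrix per pixel, how it maps source pixels to target coordinates, and how a coarse estimate can be upsampled and refined by a finer residual. The array shapes, the nearest-neighbour upsampling rule, and the toy numbers are assumptions for illustration and are not taken from the PARN paper.

```python
import numpy as np

def warp_coords(affine_field, H, W):
    """Map every source pixel (x, y) through its own 2x3 affine matrix.

    affine_field: (H, W, 2, 3) array; affine_field[y, x] is the local affine
    transform predicted for pixel (x, y).
    Returns an (H, W, 2) array of corresponding (x', y') target coordinates.
    """
    ys, xs = np.mgrid[0:H, 0:W]
    homog = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)  # (H, W, 3)
    # Per-pixel matrix-vector product: (2, 3) @ (3,) at every location.
    return np.einsum('hwij,hwj->hwi', affine_field, homog)

def upsample_field(coarse_field, factor):
    """Nearest-neighbour upsampling of a coarse affine field (a crude stand-in
    for the learned coarse-to-fine propagation in a real pyramidal network)."""
    fine = np.repeat(np.repeat(coarse_field, factor, axis=0), factor, axis=1)
    # Coordinates scale by `factor` between pyramid levels, so the translation
    # column (in pixel units) must be rescaled; the linear 2x2 part is unchanged.
    fine[..., :, 2] *= factor
    return fine

# Toy two-level refinement on an 8x8 image with a 4x4 coarse level.
H, W = 8, 8
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
coarse = np.tile(identity, (H // 2, W // 2, 1, 1))   # coarse-level estimate
coarse[..., 0, 2] += 2.0                             # coarse guess: shift x by 2
residual = np.zeros((H, W, 2, 3))
residual[..., 1, 2] = 0.5                            # fine-level correction in y
field = upsample_field(coarse, 2) + residual         # progressive refinement
matches = warp_coords(field, H, W)
print(matches[0, 0], matches[7, 7])                  # e.g. [4.  0.5] [11.   7.5]
```

In PARN itself the field at each level is predicted by a network from feature correlations rather than hand-set as above; the sketch only illustrates how a per-pixel affine parameterisation and coarse-to-fine composition fit together.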

Cited by 47 publications (53 citation statements)
References 44 publications
“…Motivated by the procedure of non-rigid image registration between an image pair, the RTNs [40] use recurrent networks to progressively compute dense correspondences between two images. In addition to estimating geometric transformations between an image pair [18], [38], [40], another line of research focuses on establishing dense per-pixel correspondences without using any geometric models [39].…”
[Figure caption from the citing paper: Addressing semantic matching (left) or object co-segmentation (right) in isolation often suffers from background clutter (for semantic matching) or only focuses on segmenting the discriminative parts (for object co-segmentation).]
Section: Related Work
confidence: 99%
“…Although our method performs slightly worse than the OHG [30] on the PASCAL category, the OHG method uses additional images from the PASCAL VOC 2007 dataset. In the bottom block of Table 1, we follow PARN [38] and RTNs [40] and resize all images so that the larger dimension is 100 pixels (i.e., max(H, W) = 100). The proposed method also performs favorably against all competing methods.…”
Section: Joint Matching and Co-segmentation
confidence: 99%
“…To deal with locally-varying geometric deformations, some methods such as UCN [7] and CAT-FCSS [25] were proposed based on STNs [18]. Recently, PARN [19], NC-Net [43], and RTNs [23] were proposed to estimate locally-varying transformation fields using a coarse-to-fine scheme [19], neighbourhood consensus [43], and an iteration technique [23]. These methods [19,43,23] presume that the attribute variations between source and target images are negligible in the deep feature space.…”
Section: Related Work
confidence: 99%
“…Recently, PARN [19], NC-Net [43], and RTNs [23] were proposed to estimate locally-varying transformation fields using a coarse-to-fine scheme [19], neighbourhood consensus [43], and an iteration technique [23]. These methods [19,43,23] presume that the attribute variations between source and target images are negligible in the deep feature space. However, in practice the deep features often show limited performance in handling different attributes.…”
Section: Related Work
confidence: 99%
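The excerpts above group PARN with NC-Net and RTNs as methods that establish correspondences in a deep feature space and differ mainly in how they regularise the resulting transformation field (coarse-to-fine scheme, neighbourhood consensus, or iterative refinement). Purely as a point of reference, and not as a reproduction of any of the cited architectures, the sketch below shows the bare correlation-and-argmax matching that feature-space correspondence reduces to when no geometric model or consensus filtering is applied; the feature maps here are random stand-ins for real deep features.

```python
import numpy as np

def dense_feature_matches(feat_src, feat_tgt):
    """Brute-force nearest-neighbour correspondence in feature space.

    feat_src, feat_tgt: (H, W, C) L2-normalised feature maps.
    Returns an (H, W, 2) array giving, for every source pixel, the (x, y)
    position of its highest-similarity target pixel.
    """
    H, W, C = feat_src.shape
    corr = feat_src.reshape(-1, C) @ feat_tgt.reshape(-1, C).T  # cosine similarities
    best = corr.argmax(axis=1)                                  # best target index per source pixel
    ys, xs = np.divmod(best, W)
    return np.stack([xs, ys], axis=-1).reshape(H, W, 2)

# Toy example with random stand-in features: each pixel matches itself.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 8, 16))
feats /= np.linalg.norm(feats, axis=-1, keepdims=True)
print(dense_feature_matches(feats, feats)[0, 0])  # -> [0 0]
```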