2020 25th International Conference on Pattern Recognition (ICPR), 2021
DOI: 10.1109/icpr48806.2021.9412520
Motion-supervised Co-Part Segmentation

Cited by 16 publications (22 citation statements). References 28 publications.
“…Following related work [15,30,46], we analyze the improvement of our method in terms of the positioning of parts and the mask coverage on the established benchmarks. We provide results for a wide variety of images containing faces, animals, flowers, and humans.…”
Section: Methods (mentioning; confidence: 99%)
“…For example, in portrait images, the hair is often not masked. Temporal information can be used [9,46] to achieve better segmentation results. In comparison to all of these approaches, our model uses less information (single images without video or saliency maps) yet outperforms them on half of the most established metrics and datasets, as we evaluate in our experiments, Section 4.…”
Section: Related Work (mentioning; confidence: 99%)
“…These works convincingly demonstrate that motion information can and should be adopted for inferring meaningful object parts. Motion-supervised co-part segmentation [30], the pioneering work, proposed a novel self-supervised, reconstruction-based architecture for co-part segmentation. It constructs intermediate motion representations that are robust to sensor changes and appearance variations, and it eliminates background disturbance.…”
Section: Motion Description (mentioning; confidence: 99%)
“…The motion flow estimator aims to predict a dense motion field from a reference body image to a target body image with the same identity. Inspired by previous approaches [29,30], the motion estimator module proceeds in two steps. First, we approximate both transformations from sets of sparse trajectories, obtained by using keypoints learned in a self-supervised manner.…”
Section: Motion Flow Estimator (mentioning; confidence: 99%)
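The two-step scheme quoted above (sparse keypoint trajectories first, then a dense motion field) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the per-keypoint local-translation model, and the Gaussian blending of sparse motions into a dense field are simplifying assumptions made here for illustration.

```python
import numpy as np

def dense_motion_from_keypoints(kp_ref, kp_tgt, h, w, sigma=0.1):
    """Approximate a dense backward-warp field from sparse keypoint pairs.

    Hypothetical simplification of the two-step idea: each keypoint pair
    contributes a local translation (kp_ref - kp_tgt), and the per-pixel
    field is a Gaussian-weighted blend of these sparse motions.
    Keypoints are (x, y) in normalized [-1, 1] coordinates.
    """
    ys, xs = np.meshgrid(np.linspace(-1, 1, h),
                         np.linspace(-1, 1, w), indexing="ij")
    grid = np.stack([xs, ys], axis=-1)                  # (h, w, 2) target coords
    # Gaussian weights around each *target* keypoint: shape (K, h, w)
    d2 = ((grid[None] - kp_tgt[:, None, None]) ** 2).sum(-1)
    weights = np.exp(-d2 / (2 * sigma ** 2))
    weights = weights / (weights.sum(0, keepdims=True) + 1e-8)
    # Blend the sparse local translations into a dense flow field
    shifts = (kp_ref - kp_tgt)[:, None, None]           # (K, 1, 1, 2)
    flow = (weights[..., None] * shifts).sum(0)         # (h, w, 2)
    return grid + flow  # sampling grid for backward warping

# Two keypoints that both move right by 0.2 between reference and target
kp_ref = np.array([[0.2, 0.0], [-0.3, 0.5]])
kp_tgt = np.array([[0.0, 0.0], [-0.5, 0.5]])
sampling = dense_motion_from_keypoints(kp_ref, kp_tgt, 64, 64)
```

Near a keypoint, the sampling grid shifts by roughly that keypoint's sparse motion; a learned dense-motion network in the cited works would refine and combine such local transformations rather than simply blending them.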