2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv45572.2020.9093294
RPM-Net: Robust Pixel-Level Matching Networks for Self-Supervised Video Object Segmentation

Cited by 8 publications (7 citation statements) | References 31 publications
“…MAMP outperforms existing self-supervised methods, and is on par with some supervised methods trained with large amounts of annotated data. Notation: Video Colorization [34], RPM-Net [11], CycleTime [38], CorrFlow [13], MuG [19], UVC [14], MAST [12], OSVOS [1], RANet [39], OSVOS-S [21], GC [16], OSMN [42], SiamMask [37], OnAVOS [33], FEELVOS [32], AFB-URR [17], PReMVOS [20], STM [24], KMN [28], CFBI [43]. Semi-supervised video object segmentation techniques fall into two categories: supervised and self-supervised. Supervised approaches [24,43] use the rich annotation information in the training data to learn the model, achieving great success in video object segmentation.…”
Section: Introduction (mentioning)
confidence: 99%
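The self-supervised methods listed in this statement (e.g., CorrFlow, UVC, MAST, and RPM-Net's pixel-level matching) share a common inference mechanism: the reference-frame mask is propagated to the target frame through a pixel-level affinity matrix. The following is a minimal NumPy sketch of that general idea, not any specific paper's implementation; the function name `propagate_mask` and the temperature value are assumptions for illustration.

```python
import numpy as np

def propagate_mask(ref_feat, tgt_feat, ref_mask, temperature=0.07):
    """Propagate a reference-frame mask to a target frame through a
    pixel-level affinity matrix (softmax over reference pixels)."""
    C, H, W = ref_feat.shape
    ref = ref_feat.reshape(C, H * W)            # (C, N_ref) reference features
    tgt = tgt_feat.reshape(C, H * W)            # (C, N_tgt) target features

    # Normalize features so the dot product behaves like cosine similarity.
    ref = ref / (np.linalg.norm(ref, axis=0, keepdims=True) + 1e-8)
    tgt = tgt / (np.linalg.norm(tgt, axis=0, keepdims=True) + 1e-8)

    # Affinity between every target pixel and every reference pixel,
    # turned into transport weights by a softmax over reference pixels.
    affinity = np.exp((tgt.T @ ref) / temperature)   # (N_tgt, N_ref)
    affinity /= affinity.sum(axis=1, keepdims=True)

    # Each target pixel's label is a weighted average of reference labels.
    labels = ref_mask.reshape(-1).astype(np.float64)
    return (affinity @ labels).reshape(H, W)
```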
“…Comparison on DAVIS-2017 with other methods. MAMP outperforms existing self-supervised methods, and is on par with some supervised methods trained with large amounts of annotated data. Notation: Video Colorization [34], RPM-Net [11], CycleTime [38], CorrFlow [13], MuG [19], UVC [14], MAST [12], OSVOS [1], RANet [39], OSVOS-S [21], GC [16], OSMN [42], SiamMask [37], OnAVOS [33], FEELVOS [32], AFB-URR [17], PReMVOS [20], STM [24], KMN [28], CFBI [43]…”
mentioning
confidence: 99%
“…IDAM [32] integrates the iterative distance-aware similarity convolution module into the matching process, which can overcome the shortcoming of using the inner product to obtain pointwise similarity. RPM-Net [33] combines Sinkhorn's method with deep learning to build soft correspondence from mixed features, thereby enhancing robustness to noise. Soft correspondence can improve robustness, but it can lead to a decrease in registration accuracy.…”
Section: Learning-based Registration Methods (mentioning)
confidence: 99%
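For context, Sinkhorn's method referenced in this statement alternates row and column normalization of a matching-score matrix until it is approximately doubly stochastic, which yields the soft correspondences described. Below is a minimal NumPy sketch of that general procedure under stated assumptions; the actual registration pipeline learns the scores and typically adds slack rows/columns for outliers, which are omitted here.

```python
import numpy as np

def sinkhorn(scores, n_iters=20, eps=1e-8):
    """Turn a raw matching-score matrix into a (nearly) doubly stochastic
    soft correspondence matrix by alternating row/column normalization."""
    P = np.exp(scores - scores.max())   # positivity + numerical stability
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True) + eps   # rows sum to 1
        P /= P.sum(axis=0, keepdims=True) + eps   # columns sum to 1
    return P

# Example: scores[i, j] is a similarity between source point i and target
# point j; P gives soft, noise-tolerant assignments instead of hard matches.
scores = np.random.randn(5, 5)
P = sinkhorn(scores)
soft_targets = P @ np.random.rand(5, 3)   # softly assigned target coordinates
```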
“…It has shown promising capacity on various downstream tasks, as it does not require annotations and can generalize better (Vondrick et al. 2018; Han, Xie, and Zisserman 2019; Li et al. 2019; Kim, Cho, and Kweon 2019; Wang, Jiao, and Liu 2020; Tao, Wang, and Yamasaki 2020; Pan et al. 2021). Many pretext tasks have been explored for self-supervised learning, such as future frame prediction (Liu et al. 2018), query frame reconstruction (Lai and Xie 2019; Kim et al. 2020; Lai, Lu, and Xie 2020), patch re-localization (Wang, Jabri, and Efros 2019; Lu et al. 2020), and motion statistics prediction (Wang et al. 2019a).…”
Section: Related Work (mentioning)
confidence: 99%
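As a concrete illustration of the reconstruction-style pretext tasks this statement mentions (colorization and query frame reconstruction), the training signal is typically the error between the true query frame and a copy of the reference frame weighted by a learned affinity. The snippet below is a minimal NumPy sketch of that idea, assuming flattened per-pixel features and colors; the name `reconstruction_loss` and the temperature are illustrative, not taken from the cited papers.

```python
import numpy as np

def reconstruction_loss(ref_feat, qry_feat, ref_rgb, qry_rgb, temperature=0.07):
    """Pretext-task sketch: reconstruct query-frame colors by attending to
    reference-frame colors, and penalize the reconstruction error."""
    # Normalize features so the dot product behaves like cosine similarity.
    ref = ref_feat / (np.linalg.norm(ref_feat, axis=0, keepdims=True) + 1e-8)
    qry = qry_feat / (np.linalg.norm(qry_feat, axis=0, keepdims=True) + 1e-8)

    attn = np.exp((qry.T @ ref) / temperature)   # (N_qry, N_ref) affinities
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over reference pixels

    recon = attn @ ref_rgb                       # (N_qry, 3) copied colors
    return np.mean(np.abs(recon - qry_rgb))      # L1 reconstruction error

# At test time, the same affinity copies segmentation labels instead of colors,
# which is how the annotation-free training signal transfers to segmentation.
```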