2021
DOI: 10.1109/tip.2021.3093781
OIFlow: Occlusion-Inpainting Optical Flow Estimation by Unsupervised Learning

Cited by 29 publications (5 citation statements); References 40 publications
“…DistillFlow [43] trains multiple teacher models and introduces a confidence-based two-stage distillation approach for improvement. OIFlow [44] proposes an occlusion-inpainting framework to make full use of occluded regions. Recently, ASFlow [45] presents content-aware pooling and adaptive flow upsampling modules to improve pyramid-based unsupervised flow architectures.…”
Section: B. Learning Unsupervised Optical Flow
confidence: 99%
“…Several works [28], [60], [61], [62] focus on dealing with the occlusion problem by forward-backward occlusion checking, range-map occlusion checking, data distillation, and an augmentation regularization loss. Other methods concentrate on improving image alignment for optical flow learning, including the census loss [60], multi-frame formulations [63], epipolar constraints [64], depth constraints [65], feature similarity constraints [66], and occlusion inpainting [67]. UFlow [68] proposes a unified framework to systematically analyze and integrate different unsupervised components.…”
Section: Optical Flow
confidence: 99%
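The statement above lists forward-backward occlusion checking as a common way unsupervised flow methods detect occluded pixels. Below is a minimal sketch of that check, not code from any of the cited works; the helper names (backwarp, occlusion_mask) and the thresholds alpha1/alpha2 are illustrative assumptions.

```python
# Sketch of forward-backward consistency occlusion checking (assumed formulation).
import torch
import torch.nn.functional as F

def backwarp(x, flow):
    """Sample tensor x (B,C,H,W) at locations displaced by flow (B,2,H,W)."""
    _, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(flow.device)   # (2,H,W), x first
    coords = grid.unsqueeze(0) + flow                              # absolute sample positions
    # normalize coordinates to [-1, 1] as required by grid_sample
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(x, torch.stack((gx, gy), dim=-1), align_corners=True)

def occlusion_mask(flow_fw, flow_bw, alpha1=0.01, alpha2=0.5):
    """Return a (B,1,H,W) mask: 1 where the consistency check passes, 0 where occluded."""
    flow_bw_warped = backwarp(flow_bw, flow_fw)   # backward flow sampled at forward targets
    diff = flow_fw + flow_bw_warped               # cancels out for pixels visible in both frames
    mag = (flow_fw ** 2).sum(1, keepdim=True) + (flow_bw_warped ** 2).sum(1, keepdim=True)
    occluded = (diff ** 2).sum(1, keepdim=True) > alpha1 * mag + alpha2
    return (~occluded).float()
```

A pixel is flagged as occluded when the forward flow and the back-warped backward flow fail to cancel out; the resulting mask is typically used to exclude occluded pixels from the photometric loss.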
“…Comparisons with unsupervised methods. We use the conventional unsupervised objective of [81], [82], [83] (photometric loss plus smoothness loss) to pretrain FlowFormer for comparison. Our MCVA outperforms the unsupervised counterpart.…”
Section: Ablation Study on MCVA Pretraining
confidence: 99%
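The comparison above pretrains with the conventional unsupervised objective of a photometric loss plus a smoothness loss. A minimal sketch of such an objective follows; it is an assumed, generic formulation (robust Charbonnier photometric term on non-occluded pixels, first-order flow smoothness), not the exact losses used in [81], [82], [83].

```python
# Sketch of a photometric-plus-smoothness unsupervised flow objective (assumed formulation).
# img2_warped is the second image warped to the first by the predicted flow;
# mask excludes occluded pixels (e.g. from a forward-backward check).
import torch

def photometric_loss(img1, img2_warped, mask, eps=0.01, q=0.4):
    # robust (generalized Charbonnier) penalty on the brightness difference,
    # averaged over non-occluded pixels only
    diff = (torch.abs(img1 - img2_warped) + eps) ** q
    return (diff * mask).sum() / (mask.sum() * img1.shape[1] + 1e-8)

def smoothness_loss(flow):
    # first-order smoothness: penalize spatial gradients of the flow field
    dx = torch.abs(flow[:, :, :, 1:] - flow[:, :, :, :-1])
    dy = torch.abs(flow[:, :, 1:, :] - flow[:, :, :-1, :])
    return dx.mean() + dy.mean()

def unsupervised_loss(img1, img2_warped, flow, mask, lambda_s=0.05):
    # total objective; the weighting lambda_s on the smoothness term is an assumed value
    return photometric_loss(img1, img2_warped, mask) + lambda_s * smoothness_loss(flow)
```

In practice the photometric term is computed between the first image and the second image warped by the predicted flow, masked by an occlusion estimate such as the one sketched earlier, and the two terms are combined with a tuned weighting factor.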
“…Comparisons with Unsupervised Methods. We also use conventional unsupervised methods to pretrain FlowFormer with photometric loss and smoothness loss following [81], [82], [83], and then finetune it in the 'C+T' setting as MCVA-FlowFormer. As shown in Tab.…”
Section: Size of Q_L
confidence: 99%