2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00388

SMURF: Self-Teaching Multi-Frame Unsupervised RAFT with Full-Image Warping

Abstract: We present SMURF, a method for unsupervised learning of optical flow that improves state of the art on all benchmarks by 36% to 40% (over the prior best method UFlow) and even outperforms several supervised approaches such as PWC-Net and FlowNet2. Our method integrates architecture improvements from supervised optical flow, i.e. the RAFT model, with new ideas for unsupervised learning that include a sequence-aware self-supervision loss, a technique for handling out-of-frame motion, and an approach for learning…
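The core signal in unsupervised optical flow is a photometric loss: warp the second frame back toward the first using the predicted flow and penalize the pixel-wise difference. The sketch below illustrates this idea in NumPy with bilinear backward warping; the function names are illustrative, and border clamping here only approximates SMURF's full-image warping, which samples from the uncropped frame rather than extending the border.

```python
import numpy as np

def backward_warp(image, flow):
    """Bilinearly sample `image` at positions displaced by `flow`.

    image: (H, W) grayscale array.
    flow:  (H, W, 2) array with per-pixel (dx, dy) displacements.
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Sampling coordinates in the second frame.
    x = xs + flow[..., 0]
    y = ys + flow[..., 1]
    # Clamp coordinates so out-of-frame samples take the border value
    # (a stand-in for sampling the larger, uncropped image).
    x0 = np.clip(np.floor(x).astype(int), 0, W - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    xc = np.clip(x, 0, W - 1)
    yc = np.clip(y, 0, H - 1)
    wx = xc - x0  # bilinear weights
    wy = yc - y0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def photometric_loss(frame1, frame2, flow):
    """Mean L1 difference between frame1 and frame2 warped back by flow."""
    return np.abs(frame1 - backward_warp(frame2, flow)).mean()
```

In practice this loss is combined with occlusion masking, smoothness regularization, and (in SMURF) a sequence-aware self-supervision loss; the sketch shows only the warping term.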

Cited by 55 publications (55 citation statements)
References 37 publications (83 reference statements)
“…SelFlow [23] and Autoflow [36] are two self-supervised methods that generate synthetic annotations. SMURF [34] integrates a set of techniques for self-supervised learning on unannotated video frames and has achieved promising results.…”
Section: Related Work
confidence: 99%
“…Geometric priors are orthogonal to our approach and combining different forms of inductive biases is a promising direction for future work. Motion segmentation is concerned with separating objects from the background using optical flow [30,56,58]. Early approaches [7,35,43,43] tracked individual pixels with the flow and then clustered the resulting trajectories inspired by the common fate principle [38].…”
Section: Related Work
confidence: 99%
“…We experiment with two motion segmentation algorithms: one heuristic-based [35] and one learning-based [15], for which we only use the motion stream trained on the toy FlyingThings3D dataset [41]. Both methods take optical flow as input, so we evaluate them with both ground-truth flow and flow estimated with the state-of-the-art supervised [58] and unsupervised [56] approaches. Since the outputs of both methods contain many noisy segments, we apply a few generic post-processing steps to clean up the results.…”
Section: Implementation Details
confidence: 99%
“…Self-supervised learning for optical flow. Significant progress has been achieved with self-supervised learning for optical flow [20,24,30,38,56], focusing more on the loss than on the model architecture. UFlow [20] systematically studied a set of key components for self-supervised optical flow, including both model elements and training techniques.…”
Section: Previous Work
confidence: 99%