2018
DOI: 10.1016/j.patcog.2017.04.020
Learn to model blurry motion via directional similarity and filtering

Abstract: It is difficult to recover the motion field from real-world footage given a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework interleaving a Convolutional Neural Network (CNN) and a traditional optical flow energy. We first construct a CNN architecture using a novel learnable directional filtering layer. This layer encodes the angle and distance similarity matrix between blur and camera motion, which enhances the blur features of the camera-shake fo…
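The layer itself is not reproduced on this page; as a rough illustration of the idea the abstract describes (scoring agreement in angle and extent between a blur kernel and candidate motion directions), a minimal sketch might look like the following. All names and the exact similarity formula here are assumptions, not the paper's actual layer:

```python
import numpy as np

def directional_similarity(blur_angle, blur_len, motion_angles, motion_lens, sigma=1.0):
    """Toy similarity between a blur kernel's direction/extent and candidate
    motions (hypothetical formulation, not the paper's learnable layer).
    Higher values mean the candidate motion is more consistent with the blur."""
    ang = np.cos(np.asarray(motion_angles) - blur_angle)                 # angle similarity in [-1, 1]
    dist = np.exp(-np.abs(np.asarray(motion_lens) - blur_len) / sigma)   # extent similarity in (0, 1]
    return ang * dist                                                    # combined similarity score

# Example: blur along 0 rad with extent 5; a co-aligned motion scores
# higher than a perpendicular one of the same length.
s = directional_similarity(0.0, 5.0, [0.0, np.pi / 2], [5.0, 5.0])
```

In the paper's setting such a similarity matrix is learned end-to-end inside the CNN rather than fixed as above.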

Cited by 6 publications (4 citation statements). References 53 publications (96 reference statements).
“…One category of methods involves the estimation of optical flow through neural networks. FlowNet (Dosovitskiy et al., 2015; Ilg et al., 2017), MotionNet (Zhao et al., 2018a), LMoF (Li et al., 2018), TVNet (Fan et al., 2018) and, more recently, Representation Flow (Piergiovanni & Ryoo, 2019) all belong to this category. More specifically, FlowNet (Dosovitskiy et al., 2015) learns optical flow from synthetic ground-truth data.…”
Section: Temporal Feature Extraction Without Optical Flow (mentioning)
confidence: 99%
“…MotionNet (Zhao et al., 2018a) produces optical flow through next-frame prediction. LMoF (Li et al., 2018) further constructs a learnable directional filtering layer to cope with optical flow estimation in blurry videos. To further boost the performance of optical flow estimation, TVNet (Fan et al., 2018) unfolds the TV-L1 (Zach et al., 2007) optical flow extraction method and formulates it as a neural network.…”
Section: Temporal Feature Extraction Without Optical Flow (mentioning)
confidence: 99%
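The unfolding idea mentioned in the statement above (turning fixed TV-L1 iterations into stacked, trainable layers) can be illustrated with a single schematic primal-dual iteration on a 1-D signal. This is a sketch in the spirit of the Zach et al. (2007) update, not TVNet's actual layer; the step sizes `tau` and `lam` are illustrative, and TVNet's contribution is precisely making such parameters learnable:

```python
import numpy as np

def tv_l1_iteration(u, p, tau=0.25, lam=0.15):
    """One schematic TV primal-dual iteration on a 1-D signal u with dual
    variable p. Unfolding stacks a fixed number of these iterations as
    network layers with tau/lam (and the difference filters) made learnable."""
    grad_u = np.diff(u, append=u[-1])      # forward difference of the primal variable
    p = p + (tau / lam) * grad_u           # dual gradient-ascent step
    p = p / np.maximum(1.0, np.abs(p))     # project p back onto |p| <= 1
    div_p = np.diff(p, prepend=0.0)        # discrete divergence of the dual variable
    return u + lam * div_p, p              # primal descent step

# Smoothing a spiky signal: one iteration already reduces total variation.
u = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
u_new, p_new = tv_l1_iteration(u, np.zeros_like(u))
```

In TVNet the same structure is applied to the two-frame optical flow energy rather than a 1-D signal, so gradients can flow through the flow estimator during end-to-end training.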
“…It should be noted that the performance of this approach relies heavily on the searched nearest neighbors to the query patches in the input blurry low-res image. Besides, as claimed in [49], the modeling idea in [48] cannot be naively applied to the blind deblurring task [85][86][87][88][89]. Taking into account the similarity between blind deblurring and blind super-resolution in terms of nonparametric blur kernel estimation, the first author of the present paper and his collaborator recently proposed to formulate both blind problems from a common modeling perspective [50], i.e., bi-L0-L2-norm regularization [51].…”
Section: Nonparametric Blind SISR (mentioning)
confidence: 99%
“…To automatically extract structural assets, correspondences, e.g. optical flow [16][17][18][19][20][21][22][23][24], from any input CAD file to the analysed files are introduced. Such correspondences give a hidden link from unknown visual elements to a reference, and further propagate the actual properties back to the input CAD.…”
Section: Converting Floor Plan CAD Files (mentioning)
confidence: 99%