2023
DOI: 10.1109/tpami.2022.3223789
Unsupervised Global and Local Homography Estimation With Motion Basis Learning

Cited by 8 publications (5 citation statements); references 39 publications.
“…This may be because our method estimates H from local regions, which is affected by the low-texture regions in these scenes. When compared with multi-H methods (Rows 12-15), the SOTA method is MeshBasesHomo (Liu et al. 2022a). Our method outperforms it by 13% (0.79→0.69) while using a similar number of H (our method uses at most 4 H).…”
Section: Comparison With Existing Methods
confidence: 88%
“…Dataset: Our method is evaluated on a natural image dataset (Zhang et al. 2020; Liu et al. 2022a) with 75.8k training pairs and 4.2k testing pairs. The scenes in the dataset are roughly categorized into five types: REgular (RE), Low Texture (LT), Low Light (LL), Small Foreground (SF), and Large Foreground (LF), where the last four types are more challenging.…”
Section: Methods
confidence: 99%
“…As illustrated by Fig. 2(b), we follow existing works (Babbar and Bajaj 2022; Ye et al. 2021; Liu et al. 2022b) to simulate the dynamic changes of k_t during the motion by computing the inter-frame homography matrix H_{t−1→t}, formulated as:…”
Section: Problem Formulation and Dataset
confidence: 99%
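The statement above propagates motion between frames via an inter-frame homography H_{t−1→t}. As a minimal sketch of the underlying mechanics (the matrices and names here are illustrative assumptions, not code from any of the cited papers): a 3×3 homography maps a 2D point in homogeneous coordinates, and consecutive inter-frame homographies compose by matrix multiplication.

```python
import numpy as np

def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Illustrative inter-frame homographies (pure translations, for clarity only).
H_0_to_1 = np.array([[1.0, 0.0, 2.0],
                     [0.0, 1.0, 1.0],
                     [0.0, 0.0, 1.0]])
H_1_to_2 = np.array([[1.0, 0.0, 3.0],
                     [0.0, 1.0, -1.0],
                     [0.0, 0.0, 1.0]])

# Consecutive inter-frame homographies compose by matrix product:
# H_{0->2} = H_{1->2} @ H_{0->1}
H_0_to_2 = H_1_to_2 @ H_0_to_1

p0 = (10.0, 20.0)
p2_direct = apply_homography(H_0_to_2, p0)
p2_chained = apply_homography(H_1_to_2, apply_homography(H_0_to_1, p0))
print(p2_direct)   # [15. 20.]
print(p2_chained)  # [15. 20.]
```

Mapping through the composed matrix and chaining the two per-frame maps give the same result, which is what makes frame-to-frame homographies usable for simulating motion over a sequence.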