2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00484
Labeled from Unlabeled: Exploiting Unlabeled Data for Few-shot Deep HDR Deghosting


Cited by 15 publications (8 citation statements) · References 36 publications
“…We compare our results with previous state-of-the-art methods, including two patch-based methods [42,15] and seven CNN-based approaches [18,45,47,49,36,34,38]. Note that Kalantari et al [18] and Prabhakar et al [36] first align the input images using optical flow and Wu et al [45] apply homography transformation.…”
Section: Comparisons
Confidence: 99%
“…Note that Kalantari et al [18] and Prabhakar et al [36] first align the input images using optical flow and Wu et al [45] apply homography transformation. Also, Prabhakar et al [38] use both of them for pre-alignment. We used the official codes if they are provided.…”
Section: Comparisons
Confidence: 99%
“…The methods in the first group [19,23,24,26,29,30,31,32,33,34,35,36,37,38] suffer from the problem of not accurately processing over- and under-exposed regions, but provide more compact systems. Methods in the second group use a group of bracketed over- and under-exposed images as input to directly learn to generate HDR output [39,39,40,41,42,43,44,45,46], as similarly utilized in photographic HDR generation. Methods belonging to these two groups differ mainly according to their network components, such as non-local blocks [42] and attention mechanisms [40].…”
Section: Related Work
Confidence: 99%