2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01267
GraftNet: Towards Domain Generalized Stereo Matching with a Broad-Spectrum and Task-Oriented Feature

Cited by 30 publications (7 citation statements)
References 34 publications
“…The SCARED test dataset, consisting of 8 different surgical videos with 3016 image pairs, was used to perform a comprehensive evaluation of our proposed network. Six state-of-the-art methods, including one local-optimization-based method [9] and five learning-based models [10,18,19,21,22], were chosen to conduct the comparison study with our network. To perform a fair comparison study, we adopted the same three datasets (Scene Flow, Sintel, and our self-made synthetic dataset) to train the existing models from scratch except for [9] since it is a parametric method that does not need to be trained.…”
Section: Results
Mentioning confidence: 99%
“…We selected three typical methods [18,21,22] from the comparison study to present their reconstruction results compared with ours. The motivation for choosing these three methods is that [18] performs the worst in accuracy, while [21] performs the best in the existing methods, and [22] is the latest model. In particular, we also visualized the error maps at the pixel level to demonstrate the error distribution.…”
Section: Table
Mentioning confidence: 99%
“…In the literature [31], the radiometric properties of two conventional stereo image pairs and thirteen generalized stereo image pairs were studied in detail using WorldView-2 and GeoEye-1, and it was found that the inconsistent illumination of the different phase images seriously affects generalized stereo matching. Liu et al. [32] attempted to achieve domain generalized stereo matching from the perspective of data, where the key was a broad-spectrum and task-oriented feature. The former property was derived from various styles of images seen during training, and the latter property was realized by recovering task-related information from broad-spectrum features.…”
Section: Introduction
Mentioning confidence: 99%
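The quoted passage above summarizes the mechanism behind GraftNet's broad-spectrum and task-oriented feature: features from a broadly pre-trained backbone are kept fixed, and a trainable module recovers the matching-related information from them. The sketch below is a rough illustration of that idea, not the authors' implementation; the module names (BroadSpectrumExtractor, FeatureAdapter), the choice of VGG-16 layers, and the channel sizes are assumptions made for illustration only.

import torch
import torch.nn as nn
import torchvision.models as models

class BroadSpectrumExtractor(nn.Module):
    # Frozen, ImageNet-pretrained VGG-16 layers stand in for a "broad-spectrum"
    # feature extractor that has been exposed to a wide variety of image styles.
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:16]  # up to conv3_3 (1/4 resolution, 256 channels)
        for p in self.features.parameters():
            p.requires_grad = False  # keep the broad-spectrum features fixed

    def forward(self, x):
        return self.features(x)

class FeatureAdapter(nn.Module):
    # Small trainable head that recovers task-oriented (matching-related)
    # information from the frozen broad-spectrum features.
    def __init__(self, in_ch=256, out_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Example usage on a random stereo pair; in a full stereo network the
# task-oriented features would feed cost-volume construction and disparity
# regression, while only the adapter (and later stages) are trained.
backbone, adapter = BroadSpectrumExtractor().eval(), FeatureAdapter()
left, right = torch.randn(1, 3, 256, 512), torch.randn(1, 3, 256, 512)
with torch.no_grad():
    fl, fr = backbone(left), backbone(right)  # broad-spectrum features
tl, tr = adapter(fl), adapter(fr)             # task-oriented features
print(tl.shape, tr.shape)                     # torch.Size([1, 32, 64, 128]) each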
“…More recently, the community focused on dealing with the problem at its source -i.e., during the training process itself, by designing specific strategies to drive the deep network learning domain-invariant features [5,11,31,69,70] while, eventually, the most modern stereo networks [24,30,63,65] can generalize much better than their predecessors. Despite these advancements, in the presence of very challenging conditions never observed during training, such as low illumination, sensor noise occurring at night, or the reflections appearing on rainy roads, we argue generalization capability alone might be insufficient.…”
Section: Introduction
Mentioning confidence: 99%