2021
DOI: 10.1109/tmm.2020.2987706
Frequency-Dependent Depth Map Enhancement via Iterative Depth-Guided Affine Transformation and Intensity-Guided Refinement

Cited by 22 publications (17 citation statements) · References 44 publications
“…For example, Zhao et al. [152] propose a color-depth conditional generative network (CDcGAN) to simultaneously super-resolve low-resolution depth and color images. To reduce the artifacts caused by the distribution differences between a depth map and its corresponding color image, Zuo et al. [159] propose a depth-guided affine transformation network in which depth-guided color feature filtering and color-guided depth feature refinement are performed iteratively to progressively enhance the network's representation ability. Moreover, all the refined depth features are concatenated to make full use of the iterations.…”
Section: Deep Learning Methods
confidence: 99%
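The alternating scheme described above can be sketched in a few lines. This is a conceptual illustration only, not the authors' implementation: the learned gating and fusion blocks of the actual network are replaced here by hand-written stand-ins (a sigmoid gate and a residual addition), and all function names are hypothetical.

```python
import numpy as np

def depth_guided_filter(color_feat, depth_feat):
    """Depth-guided filtering of color features: a gate derived from the
    depth features suppresses color structures (e.g. flat-region textures)
    with no depth counterpart. The sigmoid gate is a stand-in for the
    network's learned filtering module."""
    gate = 1.0 / (1.0 + np.exp(-depth_feat))
    return color_feat * gate  # element-wise (Hadamard) filtering

def color_guided_refine(depth_feat, filtered_color_feat):
    """Color-guided refinement of depth features: filtered color features
    inject edge detail back into the depth features. A weighted residual
    addition stands in for the learned fusion block."""
    return depth_feat + 0.5 * filtered_color_feat

def iterative_enhance(color_feat, depth_feat, n_iters=3):
    """Alternate the two guidance steps and concatenate every iteration's
    refined depth features, mirroring the paper's use of all iterations."""
    refined_all = []
    for _ in range(n_iters):
        fc = depth_guided_filter(color_feat, depth_feat)
        depth_feat = color_guided_refine(depth_feat, fc)
        refined_all.append(depth_feat)
    return np.concatenate(refined_all, axis=0)

# toy feature maps: (channels, height, width)
color = np.random.rand(4, 8, 8)
depth = np.random.rand(4, 8, 8)
out = iterative_enhance(color, depth, n_iters=3)
print(out.shape)  # (12, 8, 8): 3 iterations x 4 channels each
```

Concatenating the per-iteration outputs (rather than keeping only the last) is what lets a subsequent layer draw on every refinement stage at once.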
“…CGN [159] TMM-2020: Proposes a coarse-to-fine framework containing a depth-guided intensity-feature filtering module and an intensity-guided depth-feature refinement module to mitigate the artifacts caused by edge misalignment.…”
Section: DSRN [127] PR-2020
confidence: 99%
“…To address texture-copy artifacts, Zuo et al. [44] proposed a local affine transformation that explicitly filters out unrelated intensity features via Hadamard-product operations. Deng et al. [45] separate the common information across modalities by designing dedicated extraction modules for unique and common features, respectively.…”
Section: Deep Depth Image Super-Resolution
confidence: 99%
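The Hadamard-product filtering mentioned in this statement amounts to an element-wise affine transform of the intensity features. The sketch below illustrates the idea under stated assumptions: in the actual network the scale and shift maps would be predicted from depth features by learned layers, whereas here they are supplied by hand.

```python
import numpy as np

def local_affine_filter(intensity_feat, scale, shift):
    """Element-wise (Hadamard) local affine transformation: each intensity
    feature is multiplied by a depth-derived scale and offset by a
    depth-derived shift. Where scale -> 0, unrelated intensity detail
    (e.g. texture on a flat surface) is suppressed. `scale` and `shift`
    are hand-crafted here; the network would predict them from depth."""
    return intensity_feat * scale + shift

feat = np.ones((2, 4, 4))        # toy intensity features
scale = np.zeros((2, 4, 4))      # pretend depth flags the right half as unrelated
scale[:, :, :2] = 1.0            # keep only the left half
shift = np.zeros((2, 4, 4))

filtered = local_affine_filter(feat, scale, shift)
print(filtered[0, 0])  # [1. 1. 0. 0.]
```

The appeal of the multiplicative form is that suppression is explicit: a zero scale removes a feature entirely, which is harder to guarantee with purely additive fusion.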
“…Li et al. [8] introduced synchronized RGB images aligned with the depth image and preprocessed both the color and depth images to extract effective supporting edge regions, preserving edge details while repairing the depth map. Zuo et al. [9] proposed a frequency-dependent depth map enhancement algorithm via iterative depth-guided affine transformation and intensity-guided refinement, and demonstrated improved performance in both qualitative and quantitative evaluations.…”
Section: Introduction
confidence: 99%
“…Li et al. [8] introduced synchronized RGB images aligned with the depth image and preprocessed both the color and depth images to extract effective supporting edge regions, preserving edge details while repairing the depth map. Zuo et al. [9] proposed a frequency-dependent depth map enhancement algorithm via iterative depth-guided affine transformation and intensity-guided refinement, and demonstrated improved performance in both qualitative and quantitative evaluations. Training a network to output the repaired depth map directly is difficult; with the skip connections of a residual network (ResNet), the training goal instead becomes approximating the difference between the depth map to be repaired and the real depth map.…”
Section: Introduction
confidence: 99%