2019
DOI: 10.1111/cgf.13857

High Dynamic Range Point Clouds for Real‐Time Relighting

Abstract: Acquired 3D point clouds enable quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per‐sample color response to relight virtual objects in visual effects (VFX) look‐dev or augmented reality (AR) scenarios. Typically…
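To make the idea in the abstract concrete, the sketch below shows one generic way to relight a virtual surface point from HDR point samples: each captured sample (position, normal, linear HDR radiance) is treated as a small light source and its Lambertian contribution is accumulated. This is a minimal illustrative assumption, not the algorithm proposed in the paper; all names and the falloff model are hypothetical.

```python
import numpy as np

def relight_point(p, n, samples, albedo):
    """Shade virtual surface point p (with unit-normalized normal n) using
    HDR point samples, where samples is an iterable of
    (position, normal, hdr_radiance) triples in linear RGB.
    Illustrative sketch only; not the paper's method."""
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    out = np.zeros(3)
    for q, nq, radiance in samples:
        q = np.asarray(q, dtype=float)
        nq = np.asarray(nq, dtype=float)
        d = q - p
        dist2 = float(np.dot(d, d))
        if dist2 == 0.0:
            continue  # sample coincides with the shaded point
        w = d / np.sqrt(dist2)                     # direction from p toward the sample
        cos_receiver = max(np.dot(n, w), 0.0)      # incident angle at the shaded point
        cos_emitter = max(np.dot(nq / np.linalg.norm(nq), -w), 0.0)  # sample facing p
        # Lambertian accumulation with inverse-square falloff (an assumption)
        out += np.asarray(radiance, dtype=float) * albedo * cos_receiver * cos_emitter / dist2
    return out

# Usage: a single white HDR sample one unit above a floor point, facing down.
samples = [((0.0, 1.0, 0.0), (0.0, -1.0, 0.0), (5.0, 5.0, 5.0))]
print(relight_point((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), samples, albedo=0.8))
```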

Cited by 5 publications (3 citation statements)
References 53 publications
“…After the complete read, it was noted that (Nunes et al., 2017) does not actually demonstrate the use of augmented reality, and (Pessoa et al., 2012) only presents a transformation of the work of (Pessoa et al., 2010) into an API using the same techniques. We also consider that (Sabbadin et al., 2019) does not present a photorealistic solution, since it disregards geometric coherency for virtual object placement. Moreover, the results make it possible to clearly distinguish virtual from real content visually.…”
Section: Conducting the Review
confidence: 99%
“…Moreover, the results make it possible to clearly distinguish virtual from real content visually. For these reasons, we decided to exclude Nunes et al. (2017), Pessoa et al. (2012) and Sabbadin et al. (2019) from the extraction of responses, even though they reached the 8 points required by the quality criteria. In this way, 22 papers were selected for final analysis with extraction of the answers, and 23 were discarded in phase 3.…”
Section: Conducting the Review
confidence: 99%
“…In addition, LiDAR systems often have low spatial resolution and may miss vital information about the environment [1][2][3]. As a result, raw point clouds collected by LiDAR can be sparse and incomplete, leading to significant differences from the actual geometry of objects and affecting the sensor system's perception of the environment [4,5]. To solve these problems, point cloud completion techniques can be utilized to reconstruct and restore the missing information in sparse and incomplete point clouds.…”
Section: Introduction
confidence: 99%