The Relightables
2019
DOI: 10.1145/3355089.3356571
Author contributions: […] built the hardware and infrastructure components. ‡ Authors contributed equally to this work. S. Fanello led the volumetric capture algorithm and pipeline implementation; G. Fyffe led the relightability features and storage infrastructure; C. Rhemann led the capture hardware and software development; J. Taylor led the engineering and algorithmic optimizations. § Equally last.

Cited by 176 publications (34 citation statements); references 58 publications.
“…Recent deep learning methods using retrieved data can be applied at many stages of the avatar creation process. Some methods have been used successfully to create avatars from pictures by reconstructing full 3D meshes from a single photo (Hu et al., 2017; Saito et al., 2019) or from multiple cameras (Collet et al., 2015; Guo et al., 2019), to reduce generated artifacts (Blanz and Vetter, 1999; Ichim et al., 2015), and to improve rigging (Weng et al., 2019). Deep learning methods can also generate entirely new avatars that are not representations of existing people, using adversarial networks (Karras et al., 2019).…”
Section: Data-Driven Methods and Scanning (mentioning)
confidence: 99%
“…Moreover, from a temporal point of view, we can also distinguish between capturing a single frame and then performing rigging on it, and capturing a sequence of these 3D representations, which is usually known as volumetric video. Finally, volumetric avatars can also be categorized as those captured offline (Collet et al., 2015; Guo et al., 2019) and played back as volumetric data streams, or those captured and streamed in real time (Orts-Escolano et al., 2016).…”
Section: Volumetric Avatars (mentioning)
confidence: 99%
“…Both of these techniques have issues with generalizability, however, as the selected mesh may struggle to represent varied or large shape changes (Casas et al., 2012; Collet et al., 2015). An alternative approach currently in development by Google combines FVV with light-stage techniques (Debevec et al., 2000), using time-multiplexed color-gradient illumination to capture temporally consistent reflectance maps (Guo et al., 2019). While the results demonstrated with this technique are impressive, it represents a further step up in complexity, and such techniques are not yet commercially available.…”
Section: Lighting Issues Are Critical (mentioning)
confidence: 99%
“…In this paper, we focus on the current generation of commercially available, high-quality FVV content. There are a number of lower-end commercially available FVV tools, such as DepthKit (Depthkit, n.d.), as well as more advanced techniques in use in academic settings that are not currently available for commercial use (e.g., Guo et al., 2019). As it is important to reflect the current technology landscape, we reference these techniques where appropriate.…”
Section: Introduction (mentioning)
confidence: 99%