SIGGRAPH Asia 2020 Courses
DOI: 10.1145/3415263.3419160

Accelerating 3D deep learning with PyTorch3D

Cited by 44 publications (13 citation statements)
References: 0 publications
“…In the decoder part, we have four components, each of which is to be trained in this stage: (1) the trained shape basis of SFM, (2) the expression basis D_exp from BFM2017 [14], (3) the albedo basis D_albedo from Ref. [59], and (4) the rendering layer, which takes the geometry, albedo, illumination, pose, and camera parameters and renders 224×224 RGB images; it is based on PyTorch3D [60]. The illumination model is a spherical harmonic illumination model.…”
Section: Learning Framework
confidence: 99%
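The rendering layer this citing paper describes maps naturally onto PyTorch3D's modular renderer. Below is a minimal sketch of a differentiable 224×224 RGB rendering setup; all tensors are placeholders for the decoder's geometry and albedo outputs, and since PyTorch3D has no built-in spherical-harmonics shader, PointLights stands in here for the SH illumination model mentioned above.

```python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, MeshRenderer, PointLights,
    RasterizationSettings, SoftPhongShader, TexturesVertex,
    look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Camera pose; in the paper's pipeline this would come from the predicted
# pose/camera parameters rather than a fixed viewpoint.
R, T = look_at_view_transform(dist=2.7, elev=0.0, azim=0.0)
cameras = FoVPerspectiveCameras(device=device, R=R, T=T)

# 224x224 output, matching the rendering layer described above.
raster_settings = RasterizationSettings(
    image_size=224, blur_radius=0.0, faces_per_pixel=1
)

# Stand-in lighting: PyTorch3D ships point/directional/ambient lights,
# not a spherical-harmonics shader, so this is an approximation.
lights = PointLights(device=device, location=[[0.0, 0.0, 3.0]])

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
)

# Placeholder geometry and per-vertex albedo; requires_grad=True lets
# gradients from image-space losses flow back into the decoder outputs.
verts = torch.rand(100, 3, device=device, requires_grad=True)
faces = torch.randint(0, 100, (196, 3), device=device)
albedo = torch.rand(1, 100, 3, device=device, requires_grad=True)
mesh = Meshes(
    verts=[verts], faces=[faces],
    textures=TexturesVertex(verts_features=albedo),
)

images = renderer(mesh)  # (1, 224, 224, 4) RGBA tensor
```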
“…During training, the 3D shape is rendered at 5 views, whose pitch and yaw angles are (0°, 0°), and the L2 distances of the images rendered from the output and the ground-truth 3D shapes are formulated as the visual-effect distance:

$$D_{psd} = \sum_{v} \big\| R(R_v S_{out}, T_w) - R(R_v S_{gt}, T_w) \big\|_2$$

where v is the view index, R(·, ·) is the renderer whose inputs are a 3D shape and a texture, R_v is the rotation matrix corresponding to each view, and T_w is the all-white texture under orthogonal light. If we employ a differentiable renderer [54], D_psd can be directly regarded as a loss function. As shown in Fig.…”
Section: Plaster Sculpture Descriptor
confidence: 99%
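Because the renderer is differentiable, the visual-effect distance can be dropped in as an ordinary training loss. A minimal sketch, with a generic render_fn standing in for the renderer R(·, ·) of [54] and hypothetical tensor shapes (shapes as (V, 3) vertex arrays, rotations as (3, 3) matrices):

```python
import torch

def visual_effect_distance(render_fn, rotations, shape_out, shape_gt, white_tex):
    """Sum of per-view L2 image distances between two rendered shapes.

    render_fn(shape, texture) -> image must be differentiable (e.g. a
    PyTorch3D renderer) for this to be usable as a loss; rotations is a
    list of (3, 3) view matrices R_v, shapes are (V, 3) vertex tensors.
    """
    dist = shape_out.new_zeros(())
    for R_v in rotations:
        img_out = render_fn(shape_out @ R_v.T, white_tex)  # predicted shape
        img_gt = render_fn(shape_gt @ R_v.T, white_tex)    # ground truth
        dist = dist + torch.norm(img_out - img_gt, p=2)
    return dist
```

Calling backward() on the returned value propagates image-space gradients back to shape_out, which is what lets D_psd act directly as a loss function.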
“…The incomplete texture map is run through a generator network. The synthesized texture map is then wrapped onto the face mesh, and the mesh is rendered using a differentiable renderer [22] from a randomly sampled viewpoint. The random sampling ensures that areas of the texture map which were occluded in the original input image are now visible in the rendered image.…”
Section: UV Texture Completion
confidence: 99%
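A minimal sketch of this random-viewpoint step, assuming PyTorch3D as the differentiable renderer referenced in [22]; the mesh tensors, texture resolution, and sampling ranges below are illustrative placeholders, not values from the paper.

```python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, TexturesUV, look_at_view_transform,
)

# Synthesized UV texture map from the generator network (placeholder values).
texture_map = torch.rand(1, 256, 256, 3)

# Placeholder face-mesh data; a real pipeline would load a fitted face mesh
# together with its UV parameterization.
verts = torch.rand(500, 3)
faces = torch.randint(0, 500, (900, 3))
verts_uvs = torch.rand(1, 500, 2)   # per-vertex UV coordinates
faces_uvs = faces.unsqueeze(0)      # UV indices per face

# "Wrap" the synthesized texture onto the mesh via its UV mapping.
mesh = Meshes(
    verts=[verts], faces=[faces],
    textures=TexturesUV(maps=texture_map, faces_uvs=faces_uvs,
                        verts_uvs=verts_uvs),
)

# Randomly sample a viewpoint so texture regions occluded in the input
# image become visible in the rendered image (ranges are assumptions).
elev = float(torch.empty(1).uniform_(-20.0, 20.0))
azim = float(torch.empty(1).uniform_(-90.0, 90.0))
R, T = look_at_view_transform(dist=2.5, elev=elev, azim=azim)
cameras = FoVPerspectiveCameras(R=R, T=T)
# Rendering then proceeds with a differentiable MeshRenderer as in the
# first sketch, so losses on the image reach the generator's texture map.
```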