2020
DOI: 10.1111/cgf.14122

Memory‐Efficient Bijective Parameterizations of Very‐Large‐Scale Models

Abstract: As high-precision 3D scanners become increasingly widespread, it is easy to obtain very-large-scale meshes containing at least millions of vertices. However, processing these very-large-scale meshes is still a very challenging task due to memory limitations. This paper focuses on a fundamental geometric processing task, i.e., bijective parameterization construction. To this end, we present a spline-enhanced method to compute bijective and low-distortion parameterizations for very-large-scale disk-topology mes…
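The abstract's central requirement, a bijective parameterization, can be made concrete with a small local check: a flattening is free of inverted elements when every triangle keeps a positive signed area in the UV domain (global bijectivity additionally requires an overlap-free boundary, which the paper addresses). The sketch below is a minimal illustration of that local check only, not code from the paper; the function names and NumPy usage are assumptions for illustration.

```python
import numpy as np

def signed_areas(uv, faces):
    """Signed area of each mapped triangle in the 2D (UV) domain.

    uv    : (n_vertices, 2) array of parameter coordinates
    faces : (n_faces, 3) integer array of vertex indices
    """
    a, b, c = uv[faces[:, 0]], uv[faces[:, 1]], uv[faces[:, 2]]
    e1, e2 = b - a, c - a
    # 2D cross product of the two edge vectors, halved.
    return 0.5 * (e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0])

def is_locally_injective(uv, faces, eps=1e-12):
    """No triangle is flipped iff every signed area is strictly positive
    (assuming the input mesh is consistently oriented)."""
    return bool(np.all(signed_areas(uv, faces) > eps))

# Tiny usage example on a single triangle.
uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
faces = np.array([[0, 1, 2]])
print(is_locally_injective(uv, faces))  # True
```

Scaffold-based approaches, such as the one discussed in the citation statements below, keep these areas positive while also preventing the boundary from self-intersecting.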

Cited by 5 publications (2 citation statements, published 2021–2023)
References 51 publications (88 reference statements)

Citation statements:
“…However, due to the memory limitation of the used computer, the commonly developed methods for creating inversion-free mappings may fail for these models. Ye et al. [89] use the scaffold-based method to compute bijective parameterizations for very-large-scale models. Instead of computing descent directions using the mesh vertices as variables, they estimate descent directions for each vertex by optimizing a proxy energy defined in spline spaces.…”
Section: Scaffold-based Methods
Citation type: mentioning; confidence: 99%
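To make the reduced-variable idea in the statement above concrete, here is a minimal, hypothetical sketch (assumed names, not the authors' actual spline construction or proxy energy): the negative gradient of a distortion energy is projected onto a low-dimensional subspace spanned by coarse basis functions evaluated at the vertices, so the optimization manipulates a handful of control coefficients instead of millions of vertex coordinates.

```python
import numpy as np

def subspace_descent_direction(grad, basis):
    """Estimate a per-vertex descent direction restricted to a coarse subspace
    (e.g., spanned by spline basis functions), as an illustration of
    reduced-variable descent.

    grad  : (n_vertices, 2) gradient of a distortion energy w.r.t. the UVs
    basis : (n_vertices, n_controls) basis-function values at the vertices,
            with n_controls << n_vertices

    Returns d = basis @ c, where the control coefficients c solve the
    least-squares proxy  min_c || basis @ c + grad ||^2 ,
    i.e. the projection of -grad onto the subspace.
    """
    c, *_ = np.linalg.lstsq(basis, -grad, rcond=None)
    return basis @ c

# Toy usage: 1000 "vertices", 10 control coefficients per coordinate.
rng = np.random.default_rng(0)
basis = rng.standard_normal((1000, 10))   # dense here only for illustration
grad = rng.standard_normal((1000, 2))
d = subspace_descent_direction(grad, basis)
# A valid descent direction has negative inner product with the gradient.
print(float(np.sum(d * grad)) < 0.0)  # True (unless grad is orthogonal to the subspace)
```

In a real very-large-scale setting the basis matrix would be sparse and assembled locally; the dense toy data here only demonstrates that the expanded per-vertex direction still decreases the energy to first order.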
“…In 3‐point tracking, the root orientation and position are estimated using the input data, transforming 3‐point tracking into a 4‐point tracking problem. Ye et al. [YLHX22] predict full‐body pose from 3 tracking points and then feed the estimated pose into a reinforcement learning module to generate the final pose in a physical simulation. Unfortunately, these approaches train their neural network in a deterministic manner, and thus produce average poses for highly ambiguous cases.…”
Section: Related Work
Citation type: mentioning; confidence: 99%