2021
DOI: 10.1145/3450626.3459797
HodgeNet

Abstract: Constrained by the limitations of learning toolkits engineered for other applications, such as those in image processing, many mesh-based learning algorithms employ…

Fig. 1. Mesh segmentation results on the full-resolution MIT animation dataset. Each mesh in the dataset contains 20,000 faces (10,000 vertices). We show an example ground-truth segmentation in the bottom-left. In contrast to previous works, which downsample each mesh by more than 10×, we efficiently process dense meshes both at train and test time.

Cited by 30 publications
(13 citation statements)
References 88 publications
“…Other approaches, like [44], [45], [46], propose alternative solutions to the segmentation task. MeshWalker [44] represents a mesh's geometry and topology by a set of random walks along the surface; these walks are fed to a recurrent neural network.…”
Section: A. Mesh Segmentation (mentioning)
confidence: 99%
“…MeshWalker [44] represents a mesh's geometry and topology by a set of random walks along the surface; these walks are fed to a recurrent neural network. HodgeNet [45], instead, tackles the problem by relying on spectral geometry, and proposes parallelizable algorithms for differentiating eigencomputation, including approximate backpropagation without sparse computation. Finally, DiffusionNet [46] introduces a general-purpose approach to deep learning on 3D surfaces, using a simple diffusion layer to agnostically represent any mesh.…”
Section: A. Mesh Segmentation (mentioning)
confidence: 99%
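The random-walk representation quoted above can be sketched in a few lines. This is a simplified, hypothetical illustration of the MeshWalker idea (per-step displacement features collected along vertex-adjacency walks, ready to feed a recurrent network); the function name and feature choice are illustrative, not the paper's exact design:

```python
import numpy as np

def mesh_random_walk(vertices, faces, walk_len=8, seed=0):
    """Collect 3D displacement features along a random walk over mesh
    vertices. A simplified sketch of the MeshWalker-style representation;
    the feature choice here is illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    # Build vertex adjacency from triangle faces.
    adj = {i: set() for i in range(len(vertices))}
    for a, b, c in faces:
        adj[a].update((b, c)); adj[b].update((a, c)); adj[c].update((a, b))
    v = rng.integers(len(vertices))
    steps = []
    for _ in range(walk_len):
        nxt = rng.choice(sorted(adj[v]))        # step to a random neighbor
        steps.append(vertices[nxt] - vertices[v])  # displacement feature
        v = nxt
    return np.stack(steps)  # shape (walk_len, 3), one row per step

# Tiny example mesh: a tetrahedron.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
feats = mesh_random_walk(V, F)
print(feats.shape)  # → (8, 3)
```

In the cited approach, sequences like these are consumed by a recurrent network, so the walk length (not the mesh size) sets the per-sample cost.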
“…Laplacians make frequent appearances across geometry processing, machine learning, and computational topology. Specific definitions and flavours vary widely across discrete exterior calculus [Crane et al 2013; Desbrun et al 2005], vector-field processing [de Goes et al 2016; Poelke and Polthier 2016; Vaxman et al 2016; Wardetzky 2020; Zhao et al 2019b], fluid simulation [Liu et al 2015], mesh segmentation and editing [Khan et al 2020; Lai et al 2008; Sorkine et al 2004], topological signal processing [Barbarossa and Sardellitti 2020], random walks [Lahav and Tal 2020; Schaub et al 2020], and clustering and learning [Ebli et al 2020; Ebli and Spreemann 2019; Keros et al 2022; Nascimento and De Carvalho 2011; Smirnov and Solomon 2021; Su et al 2022]. Their ability to effectively capture salient geometric, topological, and dynamic information makes their spectrum a versatile basis.…”
Section: Related Work (mentioning)
confidence: 99%
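As a minimal illustration of why the Laplacian spectrum is used as a descriptor in the works cited above, the sketch below computes the smallest eigenvalues of a plain combinatorial graph Laplacian. The cited works use richer operators (cotangent, Hodge) on meshes, but the spectral recipe follows the same outline; `laplacian_spectrum` is an illustrative name, not an API from any of them:

```python
import numpy as np

def laplacian_spectrum(n, edges, k):
    """Smallest-k eigenvalues of the combinatorial graph Laplacian L = D - A.
    A minimal stand-in for the cotangent/Hodge Laplacians used on meshes."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0        # symmetric adjacency
    L = np.diag(A.sum(axis=1)) - A     # degree matrix minus adjacency
    return np.linalg.eigvalsh(L)[:k]   # eigvalsh returns ascending eigenvalues

# 4-cycle graph: Laplacian eigenvalues are 2 - 2*cos(2*pi*k/4) = {0, 2, 2, 4}.
vals = laplacian_spectrum(4, [(0, 1), (1, 2), (2, 3), (3, 0)], k=3)
print(vals)
```

The leading eigenvalue of a connected graph's Laplacian is always 0, and the following eigenvalues encode connectivity and symmetry, which is what makes the spectrum a compact shape signature.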
“…1(e), once high-frequency information in simplified meshes is overlooked, shape information suffers severe distortion, e.g., flattened segmentation boundaries, potentially resulting in more segmentation errors. Recently, Sharp et al. [SACO22] and Smirnov et al. [SS21] presented solutions to these two issues. However, neither eliminates the high-frequency information loss incurred by such processing.…”
Section: Related Work (mentioning)
confidence: 99%