“…It introduces a graph transformer architecture with four new properties compared to the standard model: 1) an attention mechanism that is a function of the neighborhood connectivity of each node in the graph; 2) positional encodings given by the Laplacian eigenvectors, which naturally generalize the sinusoidal positional encodings often used in NLP; 3) batch normalization in place of layer normalization; 4) edge feature representations. MeshFormer [57] proposes a mesh segmentation method based on graph transformers, which uses a boundary-preserving simplification to reduce the data size, a Ricci-flow-based clustering algorithm to construct hierarchical structures of meshes, and a graph transformer with cross-resolution convolutions that extracts richer high-resolution semantic features. Recently, [58] introduced a novel 3D mesh segmentation method named the Navigation Geodesic Distance Transformer (NGD-Transformer).…”
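The Laplacian positional encoding mentioned in property 2 can be illustrated with a short sketch: for a graph with adjacency matrix A, one forms the symmetric normalized Laplacian L = I − D^{−1/2} A D^{−1/2} and uses the eigenvectors of its smallest nonzero eigenvalues as per-node position vectors. This is a minimal NumPy illustration of the general idea, not the cited authors' implementation; the function name and the choice of a 4-cycle test graph are ours. Note that eigenvector signs are mathematically ambiguous, so the resulting encodings are only defined up to a sign per component.

```python
import numpy as np

def laplacian_positional_encoding(adj, k):
    """Return the k eigenvectors of the normalized graph Laplacian
    with the smallest nonzero eigenvalues: one k-dim vector per node."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # D^{-1/2}, guarding isolated nodes against division by zero
    d_inv_sqrt = np.where(deg > 0, deg, 1.0) ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue ~ 0)
    return eigvecs[:, 1:k + 1]

# Example: 4-cycle graph, 2-dimensional positional encoding per node
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = laplacian_positional_encoding(A, 2)
print(pe.shape)  # (4, 2)
```

These vectors are typically added to (or concatenated with) the node features before the first transformer layer, playing the same role the sinusoidal encodings play for token positions in a sequence.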