2022
DOI: 10.1109/tnnls.2021.3069230
Multigraph Transformer for Free-Hand Sketch Recognition

Abstract: Learning meaningful representations of free-hand sketches remains a challenging task given the signal sparsity and the high-level abstraction of sketches. Existing techniques have focused on exploiting either the static nature of sketches with Convolutional Neural Networks (CNNs) or the temporal sequential property with Recurrent Neural Networks (RNNs). In this work, we propose a new representation of sketches as multiple sparsely connected graphs. We design a novel Graph Neural Network (GNN), the Multi-Graph …
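A minimal sketch of the representation idea described in the abstract: a free-hand sketch, given as a sequence of points grouped into strokes, can be encoded as several sparse adjacency masks (graphs) that could later gate self-attention. The function name, the specific edge choices, and the use of boolean masks are illustrative assumptions, not the paper's actual construction.

# Hypothetical construction of multiple sparse graphs over sketch points.
import numpy as np

def sketch_graphs(stroke_ids: np.ndarray) -> dict:
    """stroke_ids[i] is the stroke index of point i; returns adjacency masks."""
    n = len(stroke_ids)
    idx = np.arange(n)
    # Graph 1: temporal neighbours (each point linked to the previous/next point).
    temporal = np.abs(idx[:, None] - idx[None, :]) <= 1
    # Graph 2: intra-stroke edges (points of the same stroke are connected).
    intra_stroke = stroke_ids[:, None] == stroke_ids[None, :]
    # Graph 3: fully connected graph (ordinary global self-attention).
    full = np.ones((n, n), dtype=bool)
    return {"temporal": temporal, "intra_stroke": intra_stroke, "full": full}

# Example: a toy sketch with two strokes of 3 and 2 points.
masks = sketch_graphs(np.array([0, 0, 0, 1, 1]))
print(masks["intra_stroke"].astype(int))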

Cited by 46 publications (46 citation statements); references 47 publications (36 reference statements).
“…Mathematical Theory Related to Attention: Self-attention [44] essentially processes each input as a fully-connected graph [51]. Therefore, as aforementioned, we start from the more general perspective of topological spaces [13] to rethink MAE, regarding each image as a graph connecting patches instead of a 2D pixel grid.…”
Section: Related Work
confidence: 99%
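A minimal illustration of the point quoted above (my own sketch, not code from the cited papers): scaled dot-product self-attention treats the input tokens or patches as a fully-connected graph, aggregating every node's features into every other node. Projections and multiple heads are omitted for brevity.

# Plain single-head self-attention seen as aggregation over a complete graph.
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    """X: (n, d) node/patch features; no learned projections."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                        # pairwise affinities = edge weights
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # softmax over all nodes
    return weights @ X                                   # aggregation over the full graph

X = np.random.default_rng(0).normal(size=(4, 8))         # 4 patches, 8-dim features
print(self_attention(X).shape)                           # (4, 8)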
“…We propose an approach to integrate a concise encoding of knowledge graphs into a Transformer-based decoder architecture for knowledge-grounded dialogue generation. Transformers for natural language generation can be viewed as graph neural networks which use self-attention (Veličković et al., 2018) for neighborhood aggregation on fully-connected word graphs (Xu et al., 2019). We utilize this relationship and restrict the self-attention weights to match the underlying graph structure.…”
Section: Contributions
confidence: 99%
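A hedged sketch of the restriction described in the quote above: keep the usual self-attention computation but mask out attention weights for token pairs that are not connected in a given graph, so the attention pattern matches the graph structure. The adjacency matrix and shapes here are illustrative assumptions, not the cited paper's implementation.

# Self-attention restricted to a graph via an adjacency mask.
import numpy as np

def graph_masked_attention(X: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """X: (n, d) token features; adj: (n, n) boolean adjacency (True = edge)."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)
    scores = np.where(adj, scores, -1e9)                 # sever non-edges before softmax
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

# Example: a 4-node path graph (0-1-2-3) with self-loops.
adj = np.eye(4, dtype=bool) | np.eye(4, k=1, dtype=bool) | np.eye(4, k=-1, dtype=bool)
X = np.random.default_rng(1).normal(size=(4, 8))
print(graph_masked_attention(X, adj).shape)              # (4, 8)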
“…The main advantage of the transformer is its global computation and perfect memory mechanism, which makes it more suitable than Recurrent Neural Networks for long sequences. Nowadays, the transformer is widely studied in various tasks, including natural language processing [30], [31], computer vision [32], [33], and data mining [34]. Inspired by the success of these works, we apply a transformer to learn the long-range spatial distribution of fine-grained traffic flow.…”
Section: Transformer Architecture
confidence: 99%