2022
DOI: 10.48550/arxiv.2201.00424
Preprint

Splicing ViT Features for Semantic Appearance Transfer

Cited by 1 publication (1 citation statement)
References 27 publications (58 reference statements)
“…An example using three patches from different views is shown in Figure 3. Similar to (Tumanyan et al, 2022), we observe that the [CLS] token from a self-supervised pre-trained ViT backbone can capture high-level semantic appearances and can effectively discover similarities between patches during the proposed end-to-end optimization process.…”
Section: Cross View Appearance Correspondence (citation type: mentioning)
confidence: 66%
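The citation statement above describes using the [CLS] token of a self-supervised pre-trained ViT (DINO) as a global appearance descriptor and comparing it across patches/views. Below is a minimal, hypothetical sketch of that idea in PyTorch; it is not code from either paper, and the image file names and preprocessing choices are illustrative assumptions. Only the DINO hub entry point (`facebookresearch/dino`, `dino_vits16`) is taken from the public DINO release.

```python
# Sketch: compare the appearance of two views via DINO-ViT [CLS] embeddings.
import torch
import torchvision.transforms as T
from PIL import Image

# Load a self-supervised pre-trained ViT-S/16 backbone (DINO).
model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def cls_embedding(path: str) -> torch.Tensor:
    """Return the ViT [CLS] embedding of an image, used here as a
    high-level semantic appearance descriptor."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        # The DINO hub models return the [CLS] token features directly.
        return model(img).squeeze(0)

# Cosine similarity between [CLS] embeddings of two patches/views gives a
# scalar appearance-similarity signal that can drive an optimization loss.
e_a = cls_embedding("view_a.png")  # hypothetical file names
e_b = cls_embedding("view_b.png")
similarity = torch.nn.functional.cosine_similarity(e_a, e_b, dim=0)
print(f"appearance similarity: {similarity.item():.3f}")
```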