2022
DOI: 10.48550/arxiv.2210.12381
Preprint
S2WAT: Image Style Transfer via Hierarchical Vision Transformer using Strips Window Attention

Cited by 2 publications (5 citation statements)
References 0 publications
“…In order to verify the effectiveness of the method in this research, the model was compared with PAMA [15], S2WAT [13], StyTr-2 [12], and AesUST [11]. The transfer efficiency was assessed first, and the time required to generate a single stylized image at two different sizes, 256 × 256 and 512 × 512, is shown in Table 1.…”
Section: Objective Evaluation of Results
confidence: 99%
“…Wang et al [11] incorporated aesthetic features into the style transfer network with an aesthetic discriminator. Both Deng et al [12] and Zhang et al [13] used transformer-based approaches [21] for arbitrary image style transfer. Since then, several style transfer methods based on contrastive learning have emerged [22][23][24].…”
Section: Image Style Transfer
confidence: 99%
“…Recently, an increasing number of reports on the cross-fusion between style transfer algorithms and classic models have emerged. Zhang et al [ 26 ] proposed a hierarchical vision transformer using strip window attention. This approach realizes accurate style transfer by focusing on local image domains and adapting to a wide range of styles.…”
Section: Introduction
confidence: 99%
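The last citation statement describes S2WAT's core idea: attention computed inside strip-shaped windows of the feature map, so that each token attends only to a local horizontal or vertical band rather than the full image. The paper's exact formulation is not given in this excerpt, so the following is a minimal illustrative sketch under that description alone; the strip size, window shapes, and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def strip_windows(x, strip, axis):
    """Split an (H, W, C) feature map into strip windows.

    axis=0 -> horizontal strips of shape (strip, W, C)
    axis=1 -> vertical strips of shape (H, strip, C)
    Assumes the spatial size is divisible by `strip`.
    """
    n = x.shape[axis] // strip
    return np.split(x, n, axis=axis)

def attention_in_strip(strip_feat):
    """Plain softmax self-attention over all positions inside one strip."""
    tokens = strip_feat.reshape(-1, strip_feat.shape[-1])       # (N, C)
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[-1])      # (N, N)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return (weights @ tokens).reshape(strip_feat.shape)

# Toy 8x8 feature map with 16 channels; attention runs per strip, so its
# cost grows with the strip area, not with the full H*W token count.
feat = np.random.rand(8, 8, 16).astype(np.float32)
horiz = [attention_in_strip(s) for s in strip_windows(feat, 2, axis=0)]
vert = [attention_in_strip(s) for s in strip_windows(feat, 2, axis=1)]
out = np.concatenate(horiz, axis=0)  # reassembled (8, 8, 16) map
```

In this toy version each 2 × 8 (or 8 × 2) strip is attended to independently; a hierarchical model would interleave such strip-window blocks with downsampling so that local attention still accumulates a large receptive field.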