2021
DOI: 10.48550/arxiv.2103.03841
Preprint

Generating Images with Sparse Representations

Cited by 16 publications (22 citation statements). References 0 publications.
“…The DRConv proposes to dynamically select CNN filters but still exploits only local context, while the DynamicViT dynamically sparsifies tokens, which may underperform on dense prediction tasks because fine-grained local interactions are attenuated. The DCTransformer [18] instead recasts the problem in the frequency domain and demonstrates that sparse representations can carry sufficient information for generating images. Similarly, the work [39] also converts the input image into the frequency domain for visual understanding.…”
Section: Redundancy Reduction Methods
confidence: 99%
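The frequency-domain sparsity the DCTransformer relies on is easy to demonstrate in isolation. Below is a minimal Python sketch, not the DCTransformer pipeline itself (which quantizes DCT coefficients and models them autoregressively): block-wise DCTs keeping only the 10 largest-magnitude coefficients per 8×8 block still reconstruct a smooth image closely. The block size, k, and toy image are arbitrary illustrative choices.

```python
# Minimal sketch of frequency-domain sparsity (not the DCTransformer pipeline):
# take block-wise DCTs of an image, keep only the k largest-magnitude
# coefficients per block, and reconstruct. Block size and k are arbitrary.
import numpy as np
from scipy.fft import dctn, idctn

def sparse_dct_roundtrip(img: np.ndarray, block: int = 8, k: int = 10) -> np.ndarray:
    """Reconstruct `img` from the k largest DCT coefficients of each block."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            coeffs = dctn(img[i:i+block, j:j+block], norm="ortho")
            thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]  # k-th largest magnitude
            coeffs[np.abs(coeffs) < thresh] = 0.0  # zero everything smaller
            out[i:i+block, j:j+block] = idctn(coeffs, norm="ortho")
    return out

# Usage: a smooth toy image survives zeroing 84% of coefficients (k=10 of 64).
x = np.fromfunction(lambda i, j: np.sin(i / 7.0) + np.cos(j / 5.0), (64, 64))
x_hat = sparse_dct_roundtrip(x)
print("reconstruction RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))
```

The small reconstruction error despite discarding most coefficients is the intuition behind modelling only the sparse coefficients in the first place.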
“…Further, recent works have shown promise for storing compressed datasets as functions (Dupont et al., 2021a; Chen et al., 2021; Strümpler et al., 2021; Zhang et al., 2021). Using our framework, it may therefore become possible to train deep learning models directly on these compressed datasets, which is challenging for traditional compressed formats such as JPEG (although image-specific exceptions such as Nash et al. (2021) exist). In addition, learning distributions of functa is likely to improve entropy coding and hence compression for these frameworks (Ballé et al., 2016).…”
Section: Conclusion, Limitations and Future Work
confidence: 99%
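The "datasets as functions" line of work cited here replaces a stored array with the weights of a small network overfit to that datum, so the weights become the compressed representation. A minimal sketch, assuming PyTorch; the TinyINR architecture, tanh activations (SIREN-style work uses sinusoids instead), grid size, and hyperparameters are all illustrative, not taken from any of the cited papers:

```python
# Minimal sketch of storing an image as a function: overfit a small
# coordinate->intensity MLP so that its weights become the compressed datum.
# Architecture, sizes, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class TinyINR(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return self.net(xy)

# Toy "image": intensities on a 32x32 grid of (x, y) coordinates in [-1, 1].
side = 32
ys, xs = torch.meshgrid(torch.linspace(-1, 1, side),
                        torch.linspace(-1, 1, side), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
target = torch.sin(3 * coords[:, :1]) * torch.cos(2 * coords[:, 1:])

model = TinyINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):  # deliberately overfit: the weights *are* the datum
    loss = ((model(coords) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("final MSE:", loss.item())
```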
“…1 we evaluate our approach against a variety of other models in terms of Precision, Recall, Density, and Coverage (PRDC) [44,50,63], metrics that quantify the overlap between the data and sample distributions. Due to limited computing resources, we are unable to provide density and coverage scores for DCT [51] and PRDC scores for StyleGAN2 on LSUN Bedroom, since training on a standard GPU would take more than 30 days per experiment, significantly more than the 10 days required to train our models. On the LSUN datasets our approach achieves the highest Precision, Density, and Coverage, indicating that the data and sample manifolds have the most overlap.…”
Section: Sample Quality
confidence: 99%
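PRDC as cited here follows the k-nearest-neighbour manifold estimates of Naeem et al. (2020) (reference [50] above): each metric checks whether points of one set fall inside k-NN balls around points of the other. A minimal NumPy sketch; the value of k, the feature dimension, and the Gaussian toy data are illustrative:

```python
# Minimal NumPy sketch of the k-NN-based PRDC metrics (Naeem et al., 2020):
# precision/recall test membership in k-NN balls; density/coverage refine them.
import numpy as np

def pairwise_dist(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def prdc(real: np.ndarray, fake: np.ndarray, k: int = 5) -> dict:
    d_rr = pairwise_dist(real, real)
    d_ff = pairwise_dist(fake, fake)
    d_rf = pairwise_dist(real, fake)               # rows: real, cols: fake
    # Radius of each point's k-NN ball (index k because self-distance is 0).
    r_real = np.sort(d_rr, axis=1)[:, k]
    r_fake = np.sort(d_ff, axis=1)[:, k]
    precision = (d_rf < r_real[:, None]).any(axis=0).mean()  # fakes on real manifold
    recall = (d_rf < r_fake[None, :]).any(axis=1).mean()     # reals on fake manifold
    density = (d_rf < r_real[:, None]).sum(axis=0).mean() / k
    coverage = (d_rf.min(axis=1) < r_real).mean()
    return dict(precision=precision, recall=recall,
                density=density, coverage=coverage)

# Usage: two matched Gaussian clouds should score near 1 on all four metrics.
rng = np.random.default_rng(0)
print(prdc(rng.normal(size=(200, 16)), rng.normal(size=(200, 16))))
```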
“…In this work, we compare approaches using Precision and Recall [63], which, unlike FID, evaluate sample quality and diversity separately and have been used in similar recent work assessing high-resolution image generation [30,37,51,59]. Precision is the expected likelihood of fake samples lying on the data manifold, and recall vice versa.…”
Section: Limitations of FID Metric
confidence: 99%
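The verbal definition can be made precise. A sketch of the manifold-based formulation of [63], written in our own notation (the k-NN radius r_k and the indicator form are standard, but the symbols here are ours, not quoted from the paper):

```latex
% Each manifold is the union of k-NN balls around the points of one set,
% with r_k(x) the distance from x to its k-th nearest neighbour within that set:
%   \mathrm{manifold}(S) = \bigcup_{x \in S} B\big(x,\, r_k(x)\big)
\mathrm{precision}
  = \mathbb{E}_{\hat{x} \sim P_g}\big[\mathbf{1}\{\hat{x} \in \mathrm{manifold}(X_r)\}\big],
\qquad
\mathrm{recall}
  = \mathbb{E}_{x \sim P_r}\big[\mathbf{1}\{x \in \mathrm{manifold}(X_g)\}\big]
```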