2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01208
Quality-Agnostic Image Recognition via Invertible Decoder

Cited by 13 publications (17 citation statements). References 22 publications.
“…TransINR [8] employs the Transformer [37] as a hypernetwork to predict latent vectors to modulate the weights of the shared MLP. In addition, Instance Pattern Composers [19] have demonstrated that modulating the weights of the second MLP layer is enough to achieve high performance of generalizable INRs. Our framework also employs the Transformer encoder, but focuses on extracting locality-aware latent features for the high performance of generalizable INR.…”
Section: Related Work
confidence: 99%
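The excerpt above describes a hypernetwork predicting latent vectors that modulate the weights of a shared MLP, with Instance Pattern Composers modulating only the second layer. A minimal numpy sketch of that idea follows; all shapes and names are illustrative assumptions (a stand-in linear map plays the hypernetwork), not the cited papers' actual code.

```python
import numpy as np

def shared_mlp(coords, W1, b1, W2_mod, b2, W3, b3):
    """Shared coordinate MLP whose SECOND layer weights are
    instance-modulated; layers 1 and 3 are shared across instances."""
    h = np.maximum(coords @ W1 + b1, 0.0)   # layer 1: shared
    h = np.maximum(h @ W2_mod + b2, 0.0)    # layer 2: per-instance weights
    return h @ W3 + b3                      # layer 3: shared

rng = np.random.default_rng(0)
d_in, d_h, d_out = 2, 16, 3                 # e.g. (x, y) -> RGB (assumed)
W1, b1 = rng.normal(size=(d_in, d_h)) * 0.5, np.zeros(d_h)
W3, b3 = rng.normal(size=(d_h, d_out)) * 0.5, np.zeros(d_out)
b2 = np.zeros(d_h)

# Stand-in "hypernetwork": a linear map from an instance latent
# to the flattened second-layer weight matrix.
latent = rng.normal(size=8)
H = rng.normal(size=(8, d_h * d_h)) * 0.1
W2_mod = (latent @ H).reshape(d_h, d_h)

coords = rng.uniform(-1, 1, size=(4, d_in)) # query coordinates
out = shared_mlp(coords, W1, b1, W2_mod, b2, W3, b3)
print(out.shape)  # (4, 3)
```

Only the second-layer weights change per instance, so the hypernetwork's output dimension stays small relative to predicting every layer.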
“…A generalizable INR uses a single coordinate-based MLP as a shared INR decoder $F_\theta : \mathbb{R}^{d_{in}} \rightarrow \mathbb{R}^{d_{out}}$ to represent multiple data instances as a continuous function. Generalizable INR [8,11,12,19,26] extracts $R$ latent codes…”
Section: Generalizable Implicit Neural Representation
confidence: 99%
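The defining property in the excerpt above is that one decoder $F_\theta$ with fixed weights represents many instances, with instance identity carried only by latent codes. A small numpy sketch under assumed shapes (conditioning by simple concatenation, which is one common choice, not necessarily the cited works'):

```python
import numpy as np

def inr_decoder(coords, latent, W1, b1, W2, b2):
    """Single shared decoder F_theta: R^{d_in} -> R^{d_out}.
    Instance identity enters only through the latent code, so one
    weight set represents many instances."""
    z = np.broadcast_to(latent, (coords.shape[0], latent.shape[0]))
    h = np.maximum(np.concatenate([coords, z], axis=1) @ W1 + b1, 0.0)
    return h @ W2 + b2

rng = np.random.default_rng(1)
d_in, d_lat, d_h, d_out = 2, 4, 32, 3       # assumed dimensions
W1 = rng.normal(size=(d_in + d_lat, d_h)) * 0.3
b1 = np.zeros(d_h)
W2 = rng.normal(size=(d_h, d_out)) * 0.3
b2 = np.zeros(d_out)

coords = rng.uniform(-1, 1, size=(5, d_in))
# Two different instances queried through the SAME decoder weights:
out_a = inr_decoder(coords, rng.normal(size=d_lat), W1, b1, W2, b2)
out_b = inr_decoder(coords, rng.normal(size=d_lat), W1, b1, W2, b2)
print(out_a.shape)  # (5, 3)
```

Because the decoder is queried at arbitrary coordinates, the representation is continuous in the input domain regardless of the resolution the latents were fit at.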