2016
DOI: 10.48550/arxiv.1611.01704
Preprint

End-to-end Optimized Image Compression

Johannes Ballé, Valero Laparra, Eero P. Simoncelli

Abstract: We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize…
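
The abstract pins the method down well enough for a rough sketch. The PyTorch snippet below is not the authors' implementation: the two-stage transforms, layer sizes, λ value, and the stand-in logistic entropy model are all illustrative assumptions, and the simplified GDN omits the nonnegativity constraints on its parameters (and the approximate inverse used in the paper's synthesis transform). What it does reproduce is the paper's core recipe: a gain-control nonlinearity, quantization relaxed as additive uniform noise, and a joint rate-distortion loss trained by stochastic gradient descent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GDN(nn.Module):
    # Simplified generalized divisive normalization ("local gain control"):
    # y_i = x_i / sqrt(beta_i + sum_j gamma_ij * x_j^2). Real implementations
    # also constrain beta and gamma to stay nonnegative; omitted for brevity.
    def __init__(self, channels):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(channels))
        self.gamma = nn.Parameter(0.1 * torch.eye(channels))

    def forward(self, x):
        # The normalization pool is a 1x1 convolution over squared activations.
        w = self.gamma.view(*self.gamma.shape, 1, 1)
        return x / torch.sqrt(F.conv2d(x * x, w, self.beta))

# Toy two-stage transforms; the paper uses three stages, and its synthesis
# applies an approximate *inverse* of GDN, which we reuse as-is for brevity.
analysis = nn.Sequential(nn.Conv2d(3, 64, 5, 2, 2), GDN(64),
                         nn.Conv2d(64, 64, 5, 2, 2))
synthesis = nn.Sequential(nn.ConvTranspose2d(64, 64, 5, 2, 2, output_padding=1), GDN(64),
                          nn.ConvTranspose2d(64, 3, 5, 2, 2, output_padding=1))

def rd_loss(x, lmbda=0.01):
    y = analysis(x)
    # Training-time relaxation from the paper: quantization is replaced by
    # additive uniform noise on [-0.5, 0.5) so the loss stays differentiable.
    y_tilde = y + torch.empty_like(y).uniform_(-0.5, 0.5)
    x_hat = synthesis(y_tilde)
    # Rate: bits per pixel under a stand-in unit-scale logistic density
    # (the paper learns this marginal density instead of fixing it).
    p = torch.sigmoid(y_tilde + 0.5) - torch.sigmoid(y_tilde - 0.5)
    bpp = -torch.log2(p.clamp_min(1e-9)).sum() / (x.shape[0] * x.shape[2] * x.shape[3])
    return bpp + lmbda * F.mse_loss(x_hat, x)

rd_loss(torch.rand(1, 3, 64, 64)).backward()
```

Because the uniform noise matches the width of the quantization bins, the training loss acts as a differentiable proxy for the true rate and distortion of the rounded latents.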

Cited by 164 publications (370 citation statements: 3 supporting, 367 mentioning, 0 contrasting)
References 33 publications

“…Using our framework, it may therefore become possible to train deep learning models directly on these compressed datasets, which is challenging for traditional compressed formats such as JPEG (although image-specific exceptions such as Nash et al. (2021) exist). In addition, learning distributions of functa is likely to improve entropy coding and hence compression for these frameworks (Ballé et al., 2016).…”
Section: Conclusion, Limitations and Future Work (mentioning)
confidence: 99%
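
On the entropy-coding point in this statement, a toy illustration (the symbol stream and probabilities are made up, taken from neither paper): an ideal entropy coder spends about -log2 p(s) bits on a symbol s, so a learned distribution that matches the data statistics directly shortens the code.

```python
import math

# Made-up symbol stream and PMFs, purely illustrative.
symbols = [0, 0, 1, 0, 2, 0, 0, 1]
uniform = {0: 1/3, 1: 1/3, 2: 1/3}        # no learned model
learned = {0: 0.625, 1: 0.25, 2: 0.125}   # matches the stream's statistics

def ideal_bits(seq, pmf):
    # An ideal entropy coder approaches -log2 p(s) bits per symbol.
    return sum(-math.log2(pmf[s]) for s in seq)

print(ideal_bits(symbols, uniform))   # ~12.68 bits
print(ideal_bits(symbols, learned))   # ~10.39 bits
```

The better-matched model saves roughly two bits on this short stream; at scale, the same effect is what a learned prior over functa would deliver.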
“…The structures of generative models used to estimate likelihoods are diverse. Variational auto-encoder based generative models were first used in learning-based lossy image compression, as in [2], [3] and [4]. Similar structures are adopted for learning-based lossless image compression, as in [5].…”
Section: Related Work (mentioning)
confidence: 99%
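
To make the cited lossless case concrete, here is a toy sketch with all specifics assumed rather than taken from [5]: the model assigns a probability to each exact pixel value, and an ideal coder then spends about -log2 p bits per pixel, so no information is discarded.

```python
import torch

# A low-entropy toy "image": 8 distinct values stored in an 8-bit container.
image = torch.randint(0, 8, (64, 64))

# Stand-in model: a Laplace-smoothed histogram over the 256 possible values.
# A neural model (as in the lossless work cited above) would instead predict
# a PMF per pixel, usually conditioned on previously decoded context.
hist = torch.bincount(image.flatten(), minlength=256).float()
pmf = (hist + 1) / (hist + 1).sum()

# Ideal code length: -log2 p(v) bits for each pixel value v.
bits = -torch.log2(pmf[image]).sum()
print(f"{bits.item() / image.numel():.2f} bits per pixel (vs. 8 raw)")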
“…The CAE is the most popular framework for LIC. With a well-designed network architecture, a CAE with a factorized prior [26] can reach coding gains comparable to JPEG 2000. With a more powerful entropy model called a hyperprior, [27] can match the performance of BPG.…”
Section: A. Learned Image Compression (mentioning)
confidence: 99%
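
The factorized-prior versus hyperprior distinction drawn here can be sketched numerically. In the snippet below the latents are random stand-ins, and an oracle scale function replaces the trained hyper-decoder of [27]; a real hyperprior must also transmit its side information and pay those bits, which this sketch omits.

```python
import torch

def gaussian_bits(y, scale):
    # Ideal code length when each latent is coded with a unit-width bin
    # under a zero-mean Gaussian of the given scale.
    d = torch.distributions.Normal(torch.zeros_like(y), scale)
    p = d.cdf(y + 0.5) - d.cdf(y - 0.5)
    return -torch.log2(p.clamp_min(1e-9)).sum()

y = torch.randn(1, 64, 8, 8) * 3.0   # stand-in latents with varied magnitude

# Factorized prior: one fixed marginal for every element (sigma = 1 here;
# in [26] this marginal is learned, but it cannot adapt per element).
bits_factorized = gaussian_bits(y, torch.ones_like(y))

# Hyperprior: per-element scales predicted from coded side information.
# As an oracle stand-in for a trained hyper-decoder, let scales track |y|.
sigma = y.abs() + 0.1
bits_hyper = gaussian_bits(y, sigma)

print(bits_factorized.item(), bits_hyper.item())  # hyperprior rate is lower
```

The gap between the two printed rates is what the hyperprior buys: per-element scales let the model spend few bits on small latents and code large ones without saturating a single fixed marginal.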