2017
DOI: 10.48550/arxiv.1703.00395
Preprint

Lossy Image Compression with Compressive Autoencoders

Abstract: We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We here show that minimal changes to the loss are sufficient to train dee…
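
The non-differentiability mentioned in the abstract comes from the rounding step of quantization. A minimal sketch of one common workaround, assuming PyTorch: apply true rounding in the forward pass but treat it as the identity in the backward pass (a straight-through estimator), in the spirit of the minimal loss changes the abstract alludes to.

import torch

class RoundStraightThrough(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Hard quantization: non-differentiable on its own.
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Pretend rounding was the identity so gradients reach the encoder.
        return grad_output

def quantize(x):
    return RoundStraightThrough.apply(x)

# Usage sketch: z_hat = quantize(encoder(img)); loss = mse(decoder(z_hat), img)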

Cited by 109 publications (198 citation statements)
References 16 publications
“…RNN-based models [16,34,35] compress images or residual information from the previous step iteratively, while CNN-based models typically transform images into compact latent representations for further entropy coding. Some early works [1,4,33] solve the problem of non-differentiable quantization and rate estimation. Afterward, some works [4,5,9,13,14,19,28] focus on designing powerful entropy models to improve the accuracy of rate estimation.…”
Section: Image Compression (mentioning)
confidence: 99%
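
A hedged sketch of the quantization/rate relaxation that early works in this vein use: uniform noise in [-0.5, 0.5] stands in for rounding during training, and the rate is estimated as the negative log-probability of the noisy code. The standard Gaussian prior below is an illustrative assumption; real codecs learn this density with an entropy model.

import math
import torch

def noisy_quantize(z):
    # Differentiable training-time stand-in for round(z).
    return z + torch.empty_like(z).uniform_(-0.5, 0.5)

def rate_in_bits(z_tilde):
    # -log2 p(z~) under a standard Gaussian prior (simplifying assumption);
    # learned entropy models replace this density in practice.
    log_prob = -0.5 * (z_tilde ** 2 + math.log(2 * math.pi))
    return -(log_prob / math.log(2.0)).sum()
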
“…We reconstructed the data from the first N_z principal components to obtain the data compression. An autoencoder also provides data compression by constraining the number of codes (N_z) to a small number [24,25]. We used a fully connected neural-network encoder and decoder with three hidden layers of (D, 128, 64, 32, N_z) and (N_z, 32, 64, 128, D) with rectified linear unit (ReLU) activation functions.…”
Section: Experiments With Simulated Annealing (mentioning)
confidence: 99%
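
A sketch of the fully connected autoencoder described in the excerpt above, with encoder layers (D, 128, 64, 32, N_z) and decoder layers (N_z, 32, 64, 128, D) and ReLU activations. D and N_z are placeholders; whether the bottleneck and output layers carry activations is not specified, so they are left linear here.

import torch.nn as nn

def make_autoencoder(D, N_z):
    encoder = nn.Sequential(
        nn.Linear(D, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, N_z),  # bottleneck code of size N_z
    )
    decoder = nn.Sequential(
        nn.Linear(N_z, 32), nn.ReLU(),
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 128), nn.ReLU(),
        nn.Linear(128, D),  # reconstruct the D-dimensional input
    )
    return encoder, decoder
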
“…The focus of [4] is to optimize the mean squared error (MSE) and multiscale structural similarity for image quality assessment (MS-SSIM) between decompressed images and the originals. In [6], the images are compressed through an encoder, and a traditional quantization method is applied to reduce the bitrate. GAN-based models are widely explored in image compression tasks.…”
Section: Image Compression (mentioning)
confidence: 99%
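
A hedged sketch of a distortion objective mixing MSE and MS-SSIM in the spirit of the focus of [4]; pytorch_msssim is an assumed third-party package (pip install pytorch-msssim), and the weight alpha is an illustrative choice, not a value from the cited paper.

import torch.nn.functional as F
from pytorch_msssim import ms_ssim

def distortion(x_hat, x, alpha=0.85):
    # MSE term between the decompressed image and the original.
    mse = F.mse_loss(x_hat, x)
    # ms_ssim returns a similarity in [0, 1]; convert it to a loss.
    msssim_loss = 1.0 - ms_ssim(x_hat, x, data_range=1.0)
    return alpha * msssim_loss + (1.0 - alpha) * mse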