2020
DOI: 10.48550/arxiv.2009.12927
Preprint

Learning to Improve Image Compression without Changing the Standard Decoder

Abstract: In recent years there has been increasing interest in applying Deep Neural Networks (DNNs) to improve the rate-distortion performance of image compression. However, existing approaches either train a post-processing DNN on the decoder side or learn image compression in an end-to-end manner. In both cases the trained DNNs are required at the decoder, making them incompatible with the standard image decoders (e.g., JPEG) found on personal computers and mobile devices. Therefore, we propose learning …
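To make the encoder-side idea concrete, below is a minimal sketch (not the paper's implementation): a small CNN "edits" the image before a differentiable stand-in for JPEG quantization, and only the edit network is trained, so decoding can use the unmodified standard decoder. The architecture, the jpeg_like_proxy surrogate, and the rate penalty are illustrative assumptions.

```python
# Minimal sketch of encoder-side training against a fixed, standard decoder.
# All names and the quantization surrogate are illustrative assumptions.
import torch
import torch.nn as nn

class EncoderSideEditor(nn.Module):
    """Residual CNN that perturbs the input image before standard encoding."""
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # predict an additive "edit" of the image

def soft_round(x):
    """Straight-through estimator: rounds in the forward pass, identity gradient."""
    return x + (torch.round(x) - x).detach()

def jpeg_like_proxy(x, q_step=0.05):
    """Toy differentiable surrogate for the codec's quantization stage.
    A faithful surrogate would use blockwise DCT, JPEG tables, and IDCT."""
    return soft_round(x / q_step) * q_step

def training_step(editor, x, optimizer, lam=0.01):
    optimizer.zero_grad()
    edited = editor(x)                        # encoder-side edit
    reconstructed = jpeg_like_proxy(edited)   # standard-codec stand-in
    distortion = torch.mean((reconstructed - x) ** 2)
    rate_penalty = torch.mean(torch.abs(edited - x))  # crude proxy for extra bits
    loss = distortion + lam * rate_penalty
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a random batch; at deployment the edited image would simply be
# passed to an off-the-shelf JPEG encoder, leaving the decoder untouched.
editor = EncoderSideEditor()
opt = torch.optim.Adam(editor.parameters(), lr=1e-4)
print(training_step(editor, torch.rand(4, 3, 64, 64), opt))
```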

Cited by 2 publications (11 citation statements)
References 27 publications (53 reference statements)
“…A more straightforward compression approach considered redundancy at the decoder side of the system and attempted to decompress by designing an iterative hybrid recurrent decoder [6,7,8]. Similarly, a standard encoder can be replaced with another DNN to enhance the model's internal neural representations and decode information while only using a standard decoder both in the pixel [24] and frequency domains [10].…”
Section: Related Work (citation type: mentioning, confidence: 99%)
“…Editing with Sparse RNNs: Inspired by [26] and [10], we design an approach that uses a neural model to either pre-edit (or iteratively process) an image I before the quantization step of JPEG or post-edit the inverse DCT coefficients before converting back to the reconstructed image Î. Specifically, for pre-editing, our neural encoder E_Θ(I) proceeds according to the following steps to produce a set of "edit" weights:…”
Section: Iterative Refinement Steps (citation type: mentioning, confidence: 99%)
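The excerpt above distinguishes two edit points in a JPEG-style pipeline: editing pixels before quantization and editing the dequantized DCT coefficients before the inverse transform. The sketch below shows where those two hooks sit, under strong simplifications (grayscale 8x8 blocks, a single scalar quantization step instead of JPEG's tables, and hypothetical pre_edit/post_edit callables standing in for the neural models); it is not the citing paper's code.

```python
# JPEG-style blockwise pipeline with the two hypothetical edit hooks.
import numpy as np
from scipy.fft import dctn, idctn

Q = 16.0  # one illustrative quantization step instead of JPEG's tables

def compress(image, pre_edit=None, post_edit=None):
    """Grayscale 8x8 blockwise DCT coding with optional pre/post edit hooks."""
    h, w = image.shape
    out = np.zeros((h, w))
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = image[i:i+8, j:j+8].astype(float)
            if pre_edit is not None:                  # pre-edit: pixels before quantization
                block = pre_edit(block)
            coeffs = np.round(dctn(block, norm="ortho") / Q) * Q  # quantize + dequantize
            if post_edit is not None:                 # post-edit: coefficients before the IDCT
                coeffs = post_edit(coeffs)
            out[i:i+8, j:j+8] = idctn(coeffs, norm="ortho")
    return out

# With identity edits this reduces to plain blockwise DCT coding.
img = np.random.rand(32, 32) * 255.0
rec = compress(img, pre_edit=lambda b: b, post_edit=lambda c: c)
print("MSE:", float(np.mean((rec - img) ** 2)))
```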
“…A simpler compression approach takes redundancy at the decoder side of the system into account and attempts to decompress iteratively using a hybrid recurrent decoder [1,2]. Similarly, a standard encoder can be replaced with another DNN to enhance the model's representation and decode information using a standard decoder [20]. Prior efforts have shown the limitations of backprop-based approaches on standard computer vision tasks [21,22,23] as well as natural language processing benchmarks [24].…”
Section: Related Work (citation type: mentioning, confidence: 99%)