2021
DOI: 10.1109/tip.2021.3096085
Better Compression With Deep Pre-Editing

Abstract: Could we compress images via standard codecs while avoiding visible artifacts? The answer is obvious: this is doable as long as the bit budget is generous enough. What if the allocated bit-rate for compression is insufficient? Then, unfortunately, artifacts are a fact of life. Many attempts have been made over the years to fight this phenomenon, with varying degrees of success. In this work we aim to break the unholy connection between bit-rate and image quality, and propose a way to circumvent compression artifacts…
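As a rough illustration of the pre-editing idea the abstract describes, the sketch below places a learned editor in front of an unmodified JPEG codec, so only the input image changes while encoder and decoder stay standard. `EditNet` is a hypothetical stand-in invented here; the paper's actual architecture and training loss are not reproduced.

```python
# A minimal sketch of the pre-editing pipeline, assuming
# "learned editor -> unmodified JPEG codec". EditNet is hypothetical.
import io

import numpy as np
import torch
import torch.nn as nn
from PIL import Image


class EditNet(nn.Module):
    """Hypothetical pre-editing CNN: outputs an edited image meant to
    survive aggressive JPEG compression with fewer visible artifacts."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return (x + self.body(x)).clamp(0.0, 1.0)  # residual edit


def jpeg_roundtrip(img_01: np.ndarray, quality: int) -> np.ndarray:
    """Compress and decompress with a standard JPEG codec (decoder untouched)."""
    buf = io.BytesIO()
    Image.fromarray((img_01 * 255).astype(np.uint8)).save(
        buf, format="JPEG", quality=quality
    )
    buf.seek(0)
    return np.asarray(Image.open(buf)).astype(np.float32) / 255.0


editor = EditNet()
x = torch.rand(1, 3, 64, 64)                            # stand-in input image
edited = editor(x)[0].permute(1, 2, 0).detach().numpy()
decoded = jpeg_roundtrip(edited, quality=10)            # aggressive bit budget
```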

Cited by 23 publications (13 citation statements)
References 47 publications
“…A more straightforward compression approach considered redundancy at the decoder side of the system and attempted to decompress by designing an iterative hybrid recurrent decoder [6,7,8]. Similarly, a standard encoder can be replaced with another DNN to enhance the model's internal neural representations and decode information while only using a standard decoder both in the pixel [24] and frequency domains [10].…”
Section: Related Work
confidence: 99%
“…Editing with Sparse RNNs: Inspired by [26] and [10], we design an approach that uses a neural model to either pre-edit (or iteratively process) an image I before the quantization step of JPEG or post-edit the inverse DCT coefficients before converting back to the reconstructed image Î. Specifically, for pre-editing, our neural encoder E_Θ(I) proceeds according to the following steps to produce a set of "edit" weights:…”
Section: Iterative Refinement Steps
confidence: 99%
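The steps themselves are elided in the quote above, so the following is only a schematic of where such pre-editing sits relative to JPEG's quantization: the "edit" weights scale the DCT coefficients just before they are quantized. `Q_TABLE` and `edit_weights` are illustrative placeholders, not the cited paper's values.

```python
# Schematic of pre-editing placed before JPEG quantization (one 8x8 block).
import numpy as np
from scipy.fft import dctn, idctn

Q_TABLE = np.full((8, 8), 16.0)  # stand-in; real JPEG tables vary per frequency


def jpeg_block_roundtrip(block, edit_weights=None):
    """Forward DCT -> optional pre-edit -> quantize -> dequantize -> inverse DCT."""
    coeffs = dctn(block - 128.0, norm="ortho")
    if edit_weights is not None:
        coeffs = coeffs * edit_weights          # pre-edit before quantization
    quantized = np.round(coeffs / Q_TABLE)
    return idctn(quantized * Q_TABLE, norm="ortho") + 128.0


block = np.random.rand(8, 8) * 255.0
weights = np.ones((8, 8))
weights[4:, 4:] = 0.5                           # hypothetical: damp high frequencies
reconstructed = jpeg_block_roundtrip(block, edit_weights=weights)
```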
“…To overcome this shortcoming, this paper proposes improving rate-distortion performance without modifying the standard JPEG decoder. As far as we know, Talebi et al. [29] is the only work on pre-editing before JPEG compression, which trains a DNN in the pixel domain before the JPEG encoder to pre-edit input images. Different from [29], we propose learning to improve the JPEG encoder in the frequency domain, i.e., learning an attention map that applies spatial weighting to the DCT coefficients and learning the quantization tables to optimize rate-distortion performance.…”
Section: Related Work
confidence: 99%
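A frequency-domain scheme like the one quoted above might look roughly like the following PyTorch sketch: a learned attention map spatially weights DCT coefficients, and the quantization table is itself a trainable parameter. `attention_net`, `q_table`, and the straight-through rounding surrogate are assumptions for illustration, not the cited paper's actual design.

```python
# Rough sketch: learned DCT attention map + learnable quantization table.
import torch
import torch.nn as nn


class FrequencyEditor(nn.Module):
    def __init__(self):
        super().__init__()
        # One attention weight per 8x8 block position and DCT frequency.
        self.attention_net = nn.Sequential(
            nn.Conv2d(64, 64, 1), nn.ReLU(),
            nn.Conv2d(64, 64, 1), nn.Sigmoid(),
        )
        # Learnable quantization table, optimized end-to-end for rate-distortion.
        self.q_table = nn.Parameter(torch.full((64,), 16.0))

    def forward(self, dct_blocks):
        # dct_blocks: (B, 64, H/8, W/8), one channel per DCT frequency.
        weighted = dct_blocks * self.attention_net(dct_blocks)  # spatial weighting
        q = self.q_table.clamp(min=1.0).view(1, 64, 1, 1)
        z = weighted / q
        # Straight-through rounding so gradients still reach q_table.
        z = z + (torch.round(z) - z).detach()
        return z * q  # dequantized coefficients


out = FrequencyEditor()(torch.randn(2, 64, 16, 16))  # toy DCT-domain input
```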
“…We propose a novel approach to pre-editing the image before quantization to improve the compression quality. [29] has shown that an image smoothing network applied before compression improves compression performance. We also employ a smoothing mechanism that acts on the DCT coefficients directly.…”
Section: Proposed Network Architecture
confidence: 99%
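As a toy illustration of smoothing that acts on DCT coefficients directly, the sketch below attenuates higher frequencies with a fixed exponential mask; damping them lowers a block's bit cost after quantization. The hand-picked falloff is an illustrative choice, not the cited work's learned mechanism.

```python
# Toy DCT-domain smoothing: attenuate high frequencies before quantization.
import numpy as np
from scipy.fft import dctn, idctn


def smooth_dct_block(block, strength=0.1):
    coeffs = dctn(block, norm="ortho")
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    mask = np.exp(-strength * (u + v))  # 1.0 at DC, decaying with frequency
    return idctn(coeffs * mask, norm="ortho")


smoothed = smooth_dct_block(np.random.rand(8, 8) * 255.0)
```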