2020
DOI: 10.1109/tmm.2019.2938345
Energy Compaction-Based Image Compression Using Convolutional AutoEncoder

Cited by 70 publications (35 citation statements)
References 20 publications
“…The overhead can be calculated as the third term in the following equation. We compare our results with [27], [18] and [28]. In [27], the kernel size for the main path is 5 × 5, which is the same as ours.…”
Section: Memory Consumption Evaluation
confidence: 83%
“…As a result, under the same bit-budget constraint, the QP for large λ is degraded. We also compare with [18] and three traditional standards: JPEG, JPEG2000, and BPG. For MS-SSIM, the quantized version can significantly outperform BPG and [18].…”
Section: B Coding Gain Evaluation
confidence: 99%
“…This has led to its incorporation in recent data compression standards and its use in many different cases [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. In addition, ANS-based encoders could be applicable to a very wide range of multimedia scenarios, such as an alternative to the Rice-Golomb codes employed in the energy-efficient scheme described in [20], as a high-throughput entropy encoder in a high frame rate video format [21], or in general as an entropy encoder in schemes for sparse coding [22], learned image compression [23], compressive sensing [24], or point cloud data compression [25]. In [26], ANS is employed to code large-alphabet sources.…”
Section: Introduction
confidence: 99%
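The ANS coding referenced in the excerpt above can be illustrated with a minimal range-ANS (rANS) round trip. This is only a hedged sketch of the core state-update idea: all names (`encode`, `decode`, `build_tables`) and the toy frequency model are illustrative, not taken from the cited works, and a production coder would add stream renormalization rather than let the state grow as a big integer.

```python
# Minimal rANS (range Asymmetric Numeral System) sketch in pure Python.
# Illustrative only: state is kept as an arbitrary-precision int, with no
# renormalization, so correctness is easy to see but throughput is poor.

def build_tables(freqs):
    """Cumulative frequency table for a {symbol: frequency} model."""
    cum, total = {}, 0
    for s, f in freqs.items():
        cum[s] = total
        total += f
    return cum, total

def encode(symbols, freqs):
    """Encode symbols in reverse so that decoding emits them in order."""
    cum, total = build_tables(freqs)
    x = 1  # initial coder state
    for s in reversed(symbols):
        f = freqs[s]
        # Core rANS step: push symbol s into the state.
        x = (x // f) * total + (x % f) + cum[s]
    return x

def decode(x, freqs, n):
    """Pop n symbols back out of the state, in original order."""
    cum, total = build_tables(freqs)
    # Inverse cumulative lookup: slot index -> symbol.
    slot_to_sym = []
    for s, f in freqs.items():
        slot_to_sym.extend([s] * f)
    out = []
    for _ in range(n):
        slot = x % total
        s = slot_to_sym[slot]
        # Exact inverse of the encode step.
        x = freqs[s] * (x // total) + slot - cum[s]
        out.append(s)
    return out
```

Each decode step is the exact inverse of the corresponding encode step, which is why encoding in reverse order yields in-order decoding; this bijectivity is what lets ANS approach the entropy of the frequency model.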
“…In [21, 22], the authors discussed the usage of ANN for image compression. The applicability of the CNN for image compression is explored in [23–26]. Furthermore, methods that utilise sparse representation based on dictionary learning algorithms have proven to be effective techniques for image compression applications [27–29].…”
Section: Introduction
confidence: 99%