2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00563

ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding

Cited by 106 publications (45 citation statements)
References 24 publications
“…These invalid pixels are caused by the absence of LiDAR laser signal return. In our prediction design, we leverage this property. The residual coding networks g_a and g_s consist of stacks of convolutions, residual blocks [9] and attention blocks [10]. For more details about the hyperprior model, please refer to [11].…”
Section: Inter-frame Prediction
confidence: 99%
“…Prior handcrafted methods cannot fully exploit redundancy, leading to a poor compression ratio. Our learning-based approach is motivated by recent research on learned color image compression [9, 10, 11, 18]. A variational autoencoder (VAE) is adopted to achieve transform coding.…”
Section: Residual Frame Coding
confidence: 99%
See 1 more Smart Citation
“…Recently, end-to-end image compression has attracted increasing interest. It builds on the transform coding paradigm, pairing nonlinear transforms with powerful entropy models for higher compression efficiency. Nonlinear transforms produce compact representations, e.g. generalized divisive normalization (GDN) (Ballé et al, 2015), the self-attention block (Cheng et al, 2020), wavelet-like invertible transforms (Ma et al, 2020), and stacks of residual bottleneck blocks (He et al, 2022). To approximate the distribution of the latent representations, many advanced entropy models have been proposed.…”
Section: Related Work
confidence: 99%
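GDN, named above as one of the nonlinear transforms, divisively normalizes each channel by a learned combination of the squared responses of all channels. A minimal per-pixel sketch of the simplified form y_i = x_i / sqrt(β_i + Σ_j γ_ij x_j²), with toy shapes (the real layer is applied convolutionally across a feature map):

```python
import numpy as np

def gdn(x, beta, gamma, eps=1e-6):
    """Simplified GDN at one spatial location.

    x:     (C,)  channel vector
    beta:  (C,)  learned per-channel offsets (assumed positive)
    gamma: (C, C) learned cross-channel weights (assumed non-negative)
    """
    denom = np.sqrt(beta + gamma @ (x ** 2) + eps)  # divisive normalizer
    return x / denom
```

With beta fixed to ones and gamma to zeros the layer reduces to (approximately) the identity, which is a common initialization choice.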
“…In recent years, image coding methods based on end-to-end optimization have been rapidly explored and developed, and show promise to become the next-generation coding standard. On the one hand, with the powerful image understanding and generation capabilities of deep learning, some very recent works outperform VVC in PSNR and MS-SSIM (Gao et al 2021; Guo et al 2021a; Xie, Cheng, and Chen 2021; Chen, Xu, and Wang 2022; He et al 2022). On the other hand, to meet the needs of industrial applications, researchers have designed flexible modules to implement variable bitrate (Cui et al 2021) and scalable coding (Guo, Zhang, and Chen 2019).…”
Section: Introduction
confidence: 99%