2021
DOI: 10.1109/ojcas.2021.3123201
ANFIC: Image Compression Using Augmented Normalizing Flows

Cited by 35 publications (30 citation statements) · References 11 publications
“…Henter, Gustav Eje et al. [57] proposed MoGlow, a probabilistic, generative, and controllable model for motion data based on normalizing flows. The model generates motion-data sequences with normalizing flows, so it can not only describe highly complex distributions but also be trained effectively with exact maximum likelihood; Ho, Y.-H. et al. [58] built an end-to-end learned image compression system based on Augmented Normalizing Flows (ANF), a novel flow model formed by stacking multiple variational autoencoders (VAEs).…”
Section: Related Work
confidence: 99%
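To make the stacked-VAE reading of ANF in [58] concrete, the following PyTorch sketch chains two additive-coupling autoencoding pairs. The class names, network widths, and the zero-initialized augmented input (standing in for the noise input e_z) are illustrative assumptions, not the ANFIC architecture itself.

```python
import torch
import torch.nn as nn


class ANFStep(nn.Module):
    """One autoencoding pair of an Augmented Normalizing Flow (illustrative).

    Additive couplings keep the step exactly invertible:
        encode:  z' = z + g_enc(x),   x' = x - g_dec(z')
        decode:  x  = x' + g_dec(z'), z  = z' - g_enc(x)
    """

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.g_enc = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1))
        self.g_dec = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1))

    def forward(self, x, z):
        z = z + self.g_enc(x)   # encoding transform updates the augmented latent
        x = x - self.g_dec(z)   # decoding transform updates the image branch
        return x, z

    def inverse(self, x, z):
        x = x + self.g_dec(z)
        z = z - self.g_enc(x)
        return x, z


class StackedANF(nn.Module):
    """Stack of ANF steps; each step plays the role of a VAE-like autoencoding pair."""

    def __init__(self, channels: int = 3, num_steps: int = 2):
        super().__init__()
        self.steps = nn.ModuleList([ANFStep(channels) for _ in range(num_steps)])

    def forward(self, x):
        z = torch.zeros_like(x)          # augmented input; a placeholder for e_z
        for step in self.steps:
            x, z = step(x, z)
        return x, z                      # z is the compressible latent

    def inverse(self, x, z):
        for step in reversed(self.steps):
            x, z = step.inverse(x, z)
        return x


if __name__ == "__main__":
    model = StackedANF()
    img = torch.rand(1, 3, 64, 64)
    residual, latent = model(img)
    recon = model.inverse(residual, latent)
    print(torch.allclose(recon, img, atol=1e-5))  # inversion is exact up to float error
```

Additive couplings are used here only because they invert exactly without Jacobian bookkeeping; how ANFIC quantizes and entropy-codes the latent is beyond this sketch.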
“…Architecture: Motivated by [13], our conditional inter-frame coder is a hybrid of the two-step and the hierarchical ANFs. The two autoencoding transforms $\{g^{enc}_{\pi_1}, g^{dec}_{\pi_1}\}$, $\{g^{enc}_{\pi_2}, g^{dec}_{\pi_2}\}$ convert $x_t, e_z$ into their latents $y_2, z_2$, respectively, while the hierarchical autoencoding transform $\{h^{enc}_{\pi_3}, h^{dec}_{\pi_3}\}$ acts as the hyperprior codec, encoding the latent $z_2$ into the hyperprior representation $\hat{h}_2$.…”
Section: CANF-based Inter-frame Coder
confidence: 99%
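The hyperprior role of $\{h^{enc}_{\pi_3}, h^{dec}_{\pi_3}\}$ described in the statement above can be sketched as follows; the channel counts, layer shapes, and hard rounding as a stand-in for quantization are assumptions made for illustration only, not details taken from the cited paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperpriorCodec(nn.Module):
    """Sketch of a hierarchical (hyperprior) autoencoding transform {h_enc, h_dec}:
    the latent z2 is summarized into a hyper-latent h2, which, after quantization,
    predicts the entropy-model scales used to code z2 itself."""

    def __init__(self, latent_ch: int = 192, hyper_ch: int = 128):
        super().__init__()
        self.h_enc = nn.Sequential(
            nn.Conv2d(latent_ch, hyper_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hyper_ch, hyper_ch, 3, stride=2, padding=1))
        self.h_dec = nn.Sequential(
            nn.ConvTranspose2d(hyper_ch, hyper_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hyper_ch, latent_ch, 4, stride=2, padding=1))

    def forward(self, z2):
        h2 = self.h_enc(z2)
        h2_hat = torch.round(h2)                 # hard rounding stands in for quantization
        scales = F.softplus(self.h_dec(h2_hat))  # per-element scales for coding z2
        return h2_hat, scales


if __name__ == "__main__":
    z2 = torch.randn(1, 192, 16, 16)
    h2_hat, scales = HyperpriorCodec()(z2)
    print(h2_hat.shape, scales.shape)  # hyper-latent and predicted scales, same grid as z2
```

In hyperprior designs of this kind, the decoded hyper-latent parameterizes the entropy model for $z_2$; only that role follows the quoted description, while the concrete layers here are placeholders.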
“…Many follow-up works have centered on enhancing the autoencoder network [9,8] and/or improving the prior modeling [30,9]. Lately, there have been a few attempts at introducing normalizing flow models [28,13] to learned image compression. Inspired by the success of learned image compression, research on learned video compression is catching up quickly. However, most end-to-end learned video compression systems [26,27,24,14,32] were developed primarily on the traditional, hybrid coding architecture, replacing key components, such as inter-frame prediction and residual coding, with neural networks.…”
Section: Introduction
confidence: 99%