2018
DOI: 10.1007/978-3-030-04167-0_9

DeepSIC: Deep Semantic Image Compression

Abstract: Incorporating semantic information into the codecs during image compression can significantly reduce the repetitive computation of fundamental semantic analysis (such as object recognition) in client-side applications. The same practice also enables the compressed code to carry the image semantic information during storage and transmission. In this paper, we propose a concept called Deep Semantic Image Compression (DeepSIC) and put forward two novel architectures that aim to reconstruct the compressed image and…

Cited by 40 publications (28 citation statements)
References 39 publications
“…Recently, there have also been some efforts in combining some computer vision tasks and image compression in one framework. In [9,18], the authors tried to use the feature maps from learning-based image compression to help other tasks such as image classification and semantic segmentation although the results from other tasks were not used to help the compression part. In [2], a segmentation map-based image synthesis model was proposed, which targeted extremely low bit rates (< 0.1 bits/pixel), and used synthesized images for non-important regions.…”
Section: Related Work
Mentioning confidence: 99%
“…where V ∈ ℝ^(N×M×C) is the feature tensor with N rows, M columns, and C channels at the point of split, Ṽ is the quantized feature tensor, and min(V) and max(V) are the minimum and maximum values in V, respectively. In the studies performed so far [3], [4], [7], this uniform n-bit quantization was shown to have a negligible effect on image classification and object detection accuracy for n ≥ 6. For this reason, when such a uniform quantizer is followed by a lossless encoder, we refer to the resulting approach as near-lossless compression.…”
Section: Introduction
Mentioning confidence: 89%
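The uniform n-bit quantizer described in the excerpt above linearly maps each feature value between min(V) and max(V) onto 2^n − 1 integer levels. The following is a minimal NumPy sketch of that min-max quantization; the function names and the choice of n = 6 are illustrative assumptions, not the cited authors' code:

```python
import numpy as np

def quantize_features(V: np.ndarray, n_bits: int = 6):
    """Uniform n-bit min-max quantization of a feature tensor V (N x M x C)."""
    v_min, v_max = float(V.min()), float(V.max())
    levels = 2 ** n_bits - 1
    # Scale to [0, levels] and round to the nearest integer level.
    V_q = np.round((V - v_min) / (v_max - v_min + 1e-12) * levels).astype(np.uint16)
    return V_q, v_min, v_max

def dequantize_features(V_q: np.ndarray, v_min: float, v_max: float, n_bits: int = 6):
    """Inverse mapping of the quantized levels back to the original value range."""
    levels = 2 ** n_bits - 1
    return V_q.astype(np.float32) / levels * (v_max - v_min) + v_min

# Example: quantize a random 16x16x64 feature tensor with 6 bits per value.
V = np.random.randn(16, 16, 64).astype(np.float32)
V_q, v_min, v_max = quantize_features(V, n_bits=6)
V_rec = dequantize_features(V_q, v_min, v_max, n_bits=6)
print("max reconstruction error:", np.abs(V - V_rec).max())
```

Following the quoted finding, the resulting error is small enough at n ≥ 6 that downstream classification or detection accuracy is essentially unaffected, which is why pairing this quantizer with a lossless entropy coder is described as near-lossless compression.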
“…If the block is at the left (top) boundary, p_j* is used as the left (top) value. The two Fil modes are based on 3-tap filters with coefficients [3, 7, 22]/32 or [14, 0, 18]/32 [13], and use the top-left, top, and left feature values to predict the current value. Again, at the boundaries, the unavailable values are replaced by p_j*.…”
Section: Deep Feature Compression
Mentioning confidence: 99%
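The 3-tap prediction described in this excerpt forms a weighted combination of the top-left, top, and left neighbours of each feature value, with the filter weights summing to 32. Below is a small sketch of one such filter pass; treating boundary neighbours as a single fallback value p_star follows the excerpt's description but is otherwise an assumption, not the cited paper's exact procedure:

```python
import numpy as np

def filter_predict(block: np.ndarray, coeffs=(3, 7, 22), p_star: float = 0.0) -> np.ndarray:
    """Predict each value of a 2-D feature block from its top-left, top, and left
    neighbours with a 3-tap filter (e.g. coefficients [3, 7, 22]/32).
    Neighbours that fall outside the block are replaced by p_star."""
    c_tl, c_top, c_left = coeffs
    h, w = block.shape
    pred = np.empty_like(block, dtype=np.float32)
    for i in range(h):
        for j in range(w):
            top_left = block[i - 1, j - 1] if (i > 0 and j > 0) else p_star
            top = block[i - 1, j] if i > 0 else p_star
            left = block[i, j - 1] if j > 0 else p_star
            pred[i, j] = (c_tl * top_left + c_top * top + c_left * left) / 32.0
    return pred

# Example: predict a 4x4 feature block and inspect the prediction residual.
block = np.arange(16, dtype=np.float32).reshape(4, 4)
residual = block - filter_predict(block, coeffs=(3, 7, 22), p_star=float(block[0, 0]))
print(residual)
```

In a feature-compression pipeline of this kind, only the residual between the actual values and this prediction would be quantized and entropy-coded, which is what makes the prediction modes worthwhile.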
“…The similarity is measured by a perceptual loss based on the feature maps of a pretrained AlexNet. In [18,2], the feature maps were extracted from learning-based image compression to complete image segmentation or image classification. Based on the work in [11,18,2], the work [2] used segmentation map-based image synthesis to train the discriminator of a GAN, which used synthesized images for non-important regions and obtained very good performance at low bit rates (<0.1 bits/pixel).…”
Section: Introduction
Mentioning confidence: 99%
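The perceptual loss mentioned in this excerpt compares deep feature maps of the original and reconstructed images rather than raw pixels. A minimal sketch using torchvision's pretrained AlexNet is shown below; the choice of the convolutional feature stack and the mean-squared distance are illustrative assumptions, not the cited work's exact loss:

```python
import torch
import torch.nn.functional as F
from torchvision.models import alexnet, AlexNet_Weights

# Frozen, pretrained AlexNet used only as a feature extractor.
backbone = alexnet(weights=AlexNet_Weights.DEFAULT).features.eval()
for p in backbone.parameters():
    p.requires_grad = False

def perceptual_loss(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between AlexNet feature maps of two image
    batches x and y (shape B x 3 x H x W, ImageNet-normalized)."""
    return F.mse_loss(backbone(x), backbone(y))

# Example: loss between an image and a slightly perturbed reconstruction.
original = torch.rand(1, 3, 224, 224)
reconstructed = original + 0.05 * torch.randn_like(original)
print(perceptual_loss(original, reconstructed).item())
```

Because the comparison happens in feature space, reconstructions that preserve semantic content score well even when low-level pixel differences are large, which is what makes such losses attractive for the very low bit rates discussed above.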