In disease diagnosis, medical images play an important role. Their lossless compression is critical: it directly determines the local storage space and communication bandwidth required by remote medical systems, and thus supports the diagnosis and treatment of patients. Medical images have two notable properties: the need for lossless fidelity and strong similarity between images. Exploiting these two properties to reduce the information needed to represent an image is the key to compression. In this paper, we employ big data mining to build the image codebook, that is, to find the basic components of images. We propose a soft compression algorithm for multi-component medical images that exactly reflects the fundamental structure of the images. We also put forward a general representation framework for image compression, and the results indicate that the proposed soft compression algorithm outperforms the popular benchmarks PNG and JPEG 2000 in terms of compression ratio.
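As a rough illustration of the codebook-mining idea described above (a minimal sketch, not the paper's exact construction — the function name, fixed block size, and toy data are all assumptions for illustration), one can count the most frequent small pixel patterns across a set of images and keep the top ones as "shapes":

```python
from collections import Counter

def mine_shapes(images, shape_size=2, top_k=4):
    """Collect the most frequent shape_size x shape_size pixel blocks
    across a set of images and return them as a codebook.
    Illustrative sketch only -- the paper mines variable-size shapes
    from a large medical-image corpus, not fixed-size blocks."""
    counts = Counter()
    for img in images:  # img: list of rows of pixel values
        h, w = len(img), len(img[0])
        for r in range(h - shape_size + 1):
            for c in range(w - shape_size + 1):
                block = tuple(tuple(img[r + dr][c + dc]
                                    for dc in range(shape_size))
                              for dr in range(shape_size))
                counts[block] += 1
    return [shape for shape, _ in counts.most_common(top_k)]

# Toy example: two tiny binary "images" sharing a repeated pattern.
imgs = [
    [[1, 1, 0, 0],
     [1, 1, 0, 0]],
    [[0, 1, 1, 0],
     [0, 1, 1, 0]],
]
codebook = mine_shapes(imgs)
```

The intuition is that frequently recurring shapes can be assigned short codes, so a corpus of similar images (such as scans of the same anatomy) yields a codebook that compresses each new image well.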
Soft compression is a lossless image compression method that eliminates coding redundancy and spatial redundancy simultaneously by using shapes to encode an image. In this paper, we propose a compressible indicator function for images, which gives a threshold on the average number of bits required to represent a location and can be used to illustrate the working principle. We investigate and analyze soft compression for binary, grayscale, and multi-component images, with specific algorithms and compressible indicator values. In terms of compression ratio, the soft compression algorithm outperforms the popular classical standards PNG and JPEG 2000 in lossless image compression. We expect that the bandwidth and storage space needed to transmit and store the same kind of images (such as medical images) can be greatly reduced by applying soft compression.
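The threshold idea above can be made concrete with a back-of-the-envelope accounting (a hedged sketch under assumed costs — the function names and the simple (location, index) cost model are illustrative, not the paper's exact compressible indicator): each placed shape pays for a pixel location plus a codebook index, and coding wins when that total beats raw per-pixel coding.

```python
import math

def location_bits(height, width):
    """Bits needed to address one pixel location in an H x W image."""
    return math.log2(height * width)

def shape_coding_cost(num_shapes_used, codebook_size, height, width):
    """Rough cost of shape-based coding: each placed shape is encoded
    as a (location, codebook index) pair.  Illustrative accounting
    only -- the paper defines its compressible indicator more carefully."""
    per_shape = location_bits(height, width) + math.log2(codebook_size)
    return num_shapes_used * per_shape

# A 256 x 256 binary image (65536 bits raw at 1 bit/pixel) covered by
# 2000 shapes drawn from a 512-entry codebook:
raw_bits = 256 * 256
coded_bits = shape_coding_cost(2000, 512, 256, 256)
compression_ratio = raw_bits / coded_bits
```

Under these assumed numbers each shape costs 16 + 9 = 25 bits, so the image codes in 50000 bits and shape coding comes out ahead; if the image needed too many shapes, the indicator would flag it as incompressible under this scheme.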
Semantic communication focuses not on improving the accuracy of transmitted symbols but on expressing the intended meaning that the symbol sequence carries. However, the measurement of semantic messages and the generation of the corresponding codebook are still open issues. Expansion, which integrates simple things into a complex system and can even give rise to intelligence, is consistent with the evolution of the human language system. We apply this idea to the semantic communication system, quantifying semantic transmission by symbol sequences and investigating the semantic information system in a manner similar to Shannon's treatment of digital communication systems. This work is the first to discuss semantic expansion and knowledge collision in a semantic information framework. Several important theoretical results are presented, including the relationship between semantic expansion and the transmission information rate. We believe such a semantic information framework may provide a new paradigm for semantic communications, and that semantic expansion and knowledge collision will be cornerstones of semantic information theory.
Semantic communication is not obsessed with improving the accuracy of transmitted symbols, but is concerned with expressing the desired meaning that the symbol sequence carries. However, the generation and measurement of semantic messages remain open problems. Expansion combines simple things into complex systems and can even give rise to intelligence, which is consistent with the evolution of the human language system. We apply this idea to the semantic communication system, quantifying and transmitting semantics by symbol sequences, and investigate the semantic information system in a manner similar to Shannon's treatment of digital communication systems. This work is the first to propose the concepts of semantic expansion and knowledge collision, which may provide a new paradigm for semantic communications. We believe that expansion and collision will be cornerstones of semantic information theory.
This paper focuses on the ultimate limit theory of image compression. It proves that, for an image source, there exists a coding method based on shapes that achieves the entropy rate under the condition that the shape-pixel ratio in the encoder/decoder is O(1/log t). Based on this finding, an image coding framework with shapes is proposed and proved to be asymptotically optimal for stationary and ergodic processes. Moreover, the O(1/log t) condition on the shape-pixel ratio has been confirmed on the MNIST image database, which illustrates that soft compression with shape coding is a near-optimal scheme for the lossless compression of images.
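To make the O(1/log t) condition concrete, here is a toy numerical check (an illustrative sketch under an assumed growth law, not the paper's proof): if the number of shapes s used to cover t pixels grows like t / log t, then the shape-pixel ratio s/t multiplied by log t stays bounded by a constant as the image size grows, which is exactly what the condition requires.

```python
import math

def shape_pixel_ratio(num_shapes, num_pixels):
    """Ratio s/t of codebook shapes used to pixels covered."""
    return num_shapes / num_pixels

# Toy check of the condition s/t = O(1/log t): assume a hypothetical
# shape count s = t / log t and verify (s/t) * log t stays bounded
# by a constant as the pixel count t grows.
bounded = []
for t in [10**3, 10**4, 10**5, 10**6]:
    s = t / math.log(t)  # hypothetical shape count satisfying the condition
    bounded.append(shape_pixel_ratio(s, t) * math.log(t))
```

A scheme whose shape count grew linearly in t would instead see this product diverge, violating the condition and forfeiting the entropy-rate guarantee.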
With the advancement of intelligent vision algorithms and devices, image reprocessing and secondary propagation are becoming increasingly prevalent. Large numbers of similar images are being produced rapidly and widely, resulting in homogeneity and similarity among images. This brings new challenges to compression systems, which need to exploit the potential of deep features and side information of images; traditional methods are ill-suited to this task. Soft compression is a novel data-driven image coding algorithm with superior performance. Compared with existing paradigms, it has distinctive characteristics: from hard to soft, from pixels to shapes, and from fixed to random. Soft compression may hold promise for human-centric/data-centric intelligent systems, making them efficient and reliable, with potential applications in the metaverse, digital twins, and beyond. In this paper, we present a comprehensive and practical analysis of soft compression, revealing the functional role of each component in the system.