A digital image is a numerical representation of visual perception that can be manipulated according to specifications. Digital images are compressed to reduce the cost of storage and transmission. Depending upon the quality of reconstruction, compression methods are categorized as lossy or lossless. Lossless image compression, in which exact recovery of the data is possible, is the most challenging task, given the tradeoff between the compression ratio achieved and the quality of reconstruction. The inherent data redundancies in an image, such as interpixel redundancy and coding redundancy, are exploited for this purpose. Interpixel redundancy is reduced by decorrelation using run-length encoding, predictive coding, and other transform coding techniques, while coding redundancy is eliminated by entropy coding with Huffman codes, arithmetic codes, or the LZW algorithm. In the implementation of these sequential coding algorithms, the direction in which the data are scanned plays an important role. This paper presents a study of various image compression techniques using sequential coding schemes. Experiments on 100 gray-level images comprising 10 different classes are carried out to understand the effect of the scanning direction on compressibility. Based on this study, the relation between the maximum run length and the compression achieved, and likewise between the resulting number of tuples and the compression achieved, is reported. Considering the fuzzy nature of these relations, fuzzy composition operations such as max-min, min-max, and max-mean compositions are used for decision-making. In this way, the conclusion offers a rational comment on which data scanning direction is suitable for a particular class of images.
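The influence of scanning direction on run-length compressibility can be illustrated with a minimal sketch. The helper below is a generic run-length encoder, not the authors' implementation; the toy image is an assumed example chosen so that its columns are highly correlated, making a column-major scan produce far fewer (value, run-length) tuples than a row-major scan.

```python
import numpy as np

def run_length_encode(seq):
    """Encode a 1-D sequence as a list of (value, run-length) tuples."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

# Toy 4x4 gray-level image with strong vertical (column) correlation.
img = np.array([[0, 1, 2, 3],
                [0, 1, 2, 3],
                [0, 1, 2, 3],
                [0, 1, 2, 3]])

row_scan = run_length_encode(img.flatten(order="C"))  # row-major scan
col_scan = run_length_encode(img.flatten(order="F"))  # column-major scan

print(len(row_scan), len(col_scan))  # 16 tuples vs. 4 tuples
```

For this image the row-wise scan yields 16 single-element runs while the column-wise scan yields only 4 long runs, which is exactly the kind of direction-dependent tuple count the study relates to the compression achieved.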
Working at Bell Labs in 1950, irritated with error-prone punched-card readers, R. W. Hamming began working on error-correcting codes, which went on to become the most widely used approach to error detection and correction in channel coding. This parity-based coding makes two-bit error detection and one-bit error correction achievable. Channel coding was later extended to correct burst errors in data. With 'd' data bits and 'k' parity bits, the code is specified as an (n, k) code, where 'n' is the total length of the code (d + k). This means that 'k' parity bits are required to protect 'd' data bits, and also that the parity bits are redundant when the codeword contains no errors. Because the parity bits of a valid codeword stand in a fixed relationship to its data bits, they can easily be recomputed, and hence the information represented by 'n' bits can be represented by 'd' bits alone. By removing these redundant bits, it is possible to produce an optimal (i.e., shortest-length) representation of the image data. This work proposes a digital image compression technique based on Hamming codes. Lossless or near-lossless compression, depending upon need, can be achieved using the several code specifications mentioned here. The compression ratio, computational cost, and time complexity of the proposed approach under the various specifications are evaluated and compared, along with the quality of the decompressed images.
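The redundancy argument above can be made concrete with the standard Hamming (7, 4) code, where d = 4 data bits are protected by k = 3 parity bits. This sketch is illustrative of Hamming coding in general, not of the paper's specific compression scheme: since each parity bit is a fixed XOR of data bits, an error-free 7-bit codeword is fully determined by its 4 data bits, which is the property the proposed compression exploits.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword.
    Standard layout by position 1..7: p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_extract(c):
    """Drop the redundant parity bits of an error-free codeword,
    keeping only the 4 data bits (positions 3, 5, 6, 7)."""
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)        # 7 bits on the "channel"
print(code)                          # [0, 1, 1, 0, 0, 1, 1]
print(hamming74_extract(code))       # the original 4 bits are recovered
```

Storing only the extracted data bits and regenerating the parity bits on decompression is the sense in which 'n' bits of information can be represented by 'd' bits when no errors are present.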