This work presents a Wavelet-Modified Single Layer Linear Forward Only Counter Propagation Network (MSLLFOCPN) technique for image compression. The technique inherits from wavelets the ability to localize global spatial and frequency correlations, while the neural network contributes function approximation and prediction. The counter propagation network was chosen for its superior performance, and this research led us to propose a new neural network architecture, the single layer linear counter propagation network (SLLC). The combination of wavelets and the SLLC network was tested on several benchmark images; the experimental results show improvements in picture quality and compression ratio, with approximation and prediction performance comparable to existing and traditional neural networks.
Abstract: Compression exploits redundancy in data. It is the process of reducing the size of a file by encoding its information more efficiently, so that fewer bits and bytes are needed to store it. The result is a smaller file, which allows faster transmission of electronic files and requires less space to store. Compression algorithms rearrange and reorganize data so that it can be stored more economically; a compression/decompression program alters the structure of the data temporarily for transport, reformatting, archiving, or saving. Lossless compression reduces file size without any loss of information: the original file can be recreated exactly when uncompressed. Huffman compression is a lossless compression algorithm well suited to compressing text or program files, which explains its widespread use in compression programs such as ZIP and ARJ. In this article we present the principles of the Huffman code used for compression. Huffman coding is based on the frequency of occurrence of each data item: the principle is to use fewer bits to encode the data that occurs more frequently. The codes are stored in a code book, which may be constructed for each image or for a set of images. In all cases the code book plus the encoded data must be transmitted to enable decoding.
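The frequency-based principle described above can be illustrated with a minimal sketch of Huffman code-book construction. This is a generic illustration in Python, not the authors' implementation; the function name `huffman_codes` and the tree representation are our own choices. It repeatedly merges the two least frequent subtrees so that frequent symbols end up near the root and receive short bit strings.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code book mapping each symbol to a bit string.

    More frequent symbols receive shorter codes, and no code is a
    prefix of another, so the encoded stream is uniquely decodable."""
    freq = Counter(data)
    # Degenerate case: a single distinct symbol still needs one bit.
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # Heap entries are (frequency, tiebreaker, tree); a tree is either
    # a leaf symbol or a (left, right) pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    # Walk the finished tree, appending 0 for left and 1 for right.
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes
```

For example, in the string "abracadabra" the symbol 'a' occurs five times, so `huffman_codes("abracadabra")` assigns it the shortest code in the book; concatenating the codes for all eleven characters yields far fewer bits than the 88 bits of plain 8-bit encoding. As the abstract notes, the code book itself must accompany the encoded data for the receiver to decode it.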