Image compression is the process of reducing the number of bits required to represent an image. Vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce the computational complexity by performing them simultaneously with the compression. After briefly reviewing the fundamental ideas of vector quantization, we present a survey of vector quantization algorithms that perform image processing.
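The core VQ idea described above, designing a small codebook of reproduction vectors and encoding each pixel block as the index of its nearest codeword, can be sketched as follows. This is a minimal illustration using the generalized Lloyd (k-means) algorithm for codebook design; the function names and parameters are illustrative, not from the surveyed papers.

```python
import numpy as np

def train_codebook(blocks, k, iters=20, seed=0):
    """Design a VQ codebook via the generalized Lloyd (k-means) algorithm.

    blocks: (n, d) array of pixel-block vectors; k: codebook size.
    """
    rng = np.random.default_rng(seed)
    # Initialize codewords from randomly chosen training blocks.
    codebook = blocks[rng.choice(len(blocks), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Nearest-neighbor partition: assign each block to its closest codeword.
        dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Centroid update: each codeword becomes the mean of its cell.
        for j in range(k):
            members = blocks[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def encode(blocks, codebook):
    """Map each block to the index of its nearest codeword (the compressed form)."""
    dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)
```

With a codebook of size k, each block is transmitted as a log2(k)-bit index rather than its full pixel values, which is the source of the compression.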
Classification and compression play important roles in communicating digital information. Their combination is useful in many applications, including the detection of abnormalities in compressed medical images. In view of the similarities of compression and low-level classification, it is not surprising that there are many similar methods for their design. Because some of these methods are useful for designing vector quantizers, it seems natural that vector quantization (VQ) be explored for the combined goal. We investigate several VQ-based algorithms that seek to minimize both the distortion of compressed images and errors in classifying their pixel blocks. These algorithms are investigated with both full-search and tree-structured codes. We emphasize a nonparametric technique that minimizes both error measures simultaneously by incorporating a Bayes risk component into the distortion measure used for the design and encoding. We introduce a tree-structured posterior estimator to produce the class posterior probabilities required for the Bayes risk computation in this design. For two different image sources, we demonstrate that this system provides superior classification while maintaining compression close to or better than that of several other VQ-based designs, including Kohonen's (1992) "learning vector quantizer" and a sequential quantizer/classifier design.
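The Bayes-risk-modified distortion measure described above can be sketched as follows. This is a hedged illustration, not the paper's exact formulation: it assumes each codeword carries a class label, a 0/1 misclassification cost, and externally supplied posterior estimates P(class | block); the function name, the `lam` trade-off parameter, and the `posteriors` argument are all assumptions for the sketch.

```python
import numpy as np

def bayes_vq_encode(block, codebook, code_labels, posteriors, lam=1.0):
    """Pick the codeword minimizing squared error plus a Bayes-risk penalty.

    codebook: (k, d) array of codewords; code_labels[i] is the class
    assigned to codeword i; posteriors[y] is an estimate of
    P(class y | block), e.g. from a tree-structured posterior estimator.
    """
    sq_err = ((codebook - np.asarray(block)) ** 2).sum(axis=1)
    # 0/1-cost Bayes risk of declaring each codeword's class:
    # the posterior mass on all *other* classes.
    risk = np.array([1.0 - posteriors[lbl] for lbl in code_labels])
    # Joint objective: distortion + lam * classification risk.
    return int(np.argmin(sq_err + lam * risk))
```

Raising `lam` shifts the encoder toward codewords whose labels agree with the estimated posteriors, trading a little distortion for fewer classification errors, which is the simultaneous minimization the abstract describes.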
This paper describes an object-based video coding scheme that was proposed as part of the Texas Instruments' proposal to the emerging ISO MPEG-4 video compression standard. This technique achieves efficient compression by separating coherently moving objects from the stationary background and compactly representing their shape, motion, and content. In addition to providing improved coding efficiency at very low bit rates, the technique provides the ability to selectively encode, decode, and manipulate individual objects in a video stream. This technique supports all three MPEG-4 functionalities that were tested in the November 1995 tests, namely improved coding efficiency, error resilience, and content scalability. This paper also describes the error protection and concealment schemes that enable robust transmission of compressed video over noisy communication channels such as analog phone lines and wireless links. The noise introduced by the communication channel is characterized by both burst errors and random bit errors. Applications of this object-based video coding technology include videoconferencing, video telephony, desktop multimedia, and surveillance video.