The hidden Markov model (HMM) is widely used to model processes in several real-world applications, including speech processing and recognition, image understanding, and sensor networks. A problem of concern is the quantization of the sequence of observations generated by an HMM, which is referred to as a hidden Markov source (HMS). Despite the importance of the problem and the well-defined structure of the process, there has been very limited work addressing the optimal quantization of an HMS, and conventional approaches focus on optimizing the parameters of known quantization schemes. This paper proposes a method that directly tracks the state probability distribution of the underlying source and optimizes the encoder structure according to the estimated state of the HMS. Unlike existing approaches, no stationarity assumption is needed, and code parameters are updated on the fly: with each observation, both the encoder and the decoder refine the estimated probability distribution over the states. The main approach is then specialized to a practical variant involving switched quantizers, and an algorithm that iteratively optimizes the quantizer codebooks is derived. Numerical results show the superiority of the proposed approach over prior methods.
Introduction

The hidden Markov model is a discrete-time, finite-state Markov chain observed through a memoryless channel. The random process consisting of the sequence of observations is referred to as a hidden Markov source (HMS). Markov chains are common models for information sources with memory, and the memoryless channel is among the simplest communication models. HMMs are widely used in image understanding and speech recognition [1], source coding [2], communications, information theory, economics, robotics, computer vision, and several other disciplines. Note that most signals modeled as Markov processes are in fact captured by imperfect sensors and are hence contaminated with noise, i.e., the resulting sequence is an HMS. Motivated by its ability to model practical sources with memory, in this paper we consider optimal quantization of the HMS.

One conventional approach to quantizer design for the HMS is to employ predictive coding techniques (such as DPCM) to exploit time correlations, followed by standard scalar quantization of the residual. However, any direct prediction method cannot fully exploit the structure of the source process, i.e., the underlying Markov process. Indeed, even if the underlying process is first-order Markov (it depends on the past only through the previous sample), the HMS itself is not Markov, i.e., all prior samples are needed to optimally predict the current sample. A naive infinite-order predictor, however, is clearly of impractical complexity. An alternative to predictive coding is encoding by a finite-state quantizer (see, e.g., [3,4]
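To make the tracking idea concrete, the following is a minimal sketch (not the paper's algorithm) of the standard HMM forward (filtering) recursion that an encoder and decoder could both run to refine the belief over the hidden states after each observation. The matrices `A` (state transitions) and `B` (observation likelihoods) and all variable names here are our own illustrative assumptions.

```python
import numpy as np

def update_belief(pi, A, B, obs):
    """One step of the HMM forward recursion.

    pi  : current belief over hidden states, shape (S,)
    A   : transition matrix, A[i, j] = P(next state j | state i)
    B   : emission matrix, B[j, k] = P(observation k | state j)
    obs : index of the new (discrete) observation
    """
    predicted = pi @ A               # predict the next-state distribution
    unnorm = predicted * B[:, obs]   # weight by the observation likelihood
    return unnorm / unnorm.sum()     # renormalize to a probability vector

# Two-state example: a "sticky" chain observed through a noisy channel.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])            # uninformative initial belief
for obs in [0, 0, 1]:
    pi = update_belief(pi, A, B, obs)
print(pi)                            # refined belief after three observations
```

Since the decoder sees the same (quantized) observation stream, it can run the identical recursion and stay synchronized with the encoder, which is what allows codebooks to be adapted on the fly without side information.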