IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved.
GENERATIVE PROCESS TRACKING FOR AUDIO ANALYSIS
Regunathan Radhakrishnan and Ajay Divakaran
Mitsubishi Electric Research Laboratory, Cambridge, MA 02139
E-mail: {regu, ajayd}@merl.com
ABSTRACT

The problem of generative process tracking involves detecting and adapting to changes in the underlying generative process that creates a time series of observations. It has been widely used in visual background modelling to adaptively track the process that generates pixel intensities. In this paper, we extend this idea to audio background modelling and show its application in the surveillance domain. We adaptively learn the parameters of the generative audio background process and detect foreground events. We have tested the effectiveness of the proposed algorithm on synthetic time series data and demonstrate its performance on elevator audio surveillance.
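To make the idea of adaptively tracking a background process and flagging foreground events concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: a single-Gaussian background model with exponential forgetting (in the spirit of pixel-intensity background modelling), applied to a 1-D audio feature stream. The class name, learning rate `alpha`, and threshold `k` are hypothetical choices for illustration only.

```python
# Hypothetical sketch of adaptive background modelling on a 1-D audio
# feature stream (e.g. per-frame energy). Names and parameters are
# illustrative assumptions, not taken from the paper.

class AdaptiveBackgroundModel:
    """Single-Gaussian background model with exponential forgetting."""

    def __init__(self, alpha=0.05, k=2.5):
        self.alpha = alpha   # learning rate for the running statistics
        self.k = k           # foreground threshold in standard deviations
        self.mean = None     # running mean of background observations
        self.var = None      # running variance of background observations

    def update(self, x):
        """Return True if x looks like a foreground event, else adapt."""
        if self.mean is None:            # initialise on first observation
            self.mean, self.var = x, 1.0
            return False
        is_foreground = abs(x - self.mean) > self.k * self.var ** 0.5
        if not is_foreground:            # adapt only to background samples
            d = x - self.mean
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * self.var + self.alpha * d * d
        return is_foreground


if __name__ == "__main__":
    model = AdaptiveBackgroundModel()
    # Quiet background frames followed by a loud outlier.
    stream = [0.0, 0.1, -0.1, 0.05, 0.0, 10.0, 0.0]
    flags = [model.update(x) for x in stream]
    print(flags)
```

The key design point this sketch illustrates is that the model adapts its statistics only on observations classified as background, so a sudden foreground event (the `10.0` sample) is flagged without corrupting the learned background distribution.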