In this paper we present a systematic study of automatic classification of consumer videos into a large set of diverse semantic concept classes, which have been carefully selected based on user studies and extensively annotated over 1300+ videos from real users. Our goals are to assess the state of the art of multimedia analytics (including both audio and visual analysis) in consumer video classification and to discover new research opportunities. We investigated several statistical approaches built upon global/local visual features, audio features, and audio-visual combinations. Three multimodal fusion frameworks (ensemble, context fusion, and joint boosting) are also evaluated. Experimental results show that visual and audio models perform best for different sets of concepts. Both contribute significantly to multimodal fusion by expanding the classifier pool for context fusion and the feature bases for feature sharing. The fused multimodal models significantly reduce detection errors compared with single-modality models, resulting in a promising accuracy of 83% over diverse concepts. To the best of our knowledge, this is the first systematic investigation of multimodal classification using a large-scale ontology and a realistic video corpus.
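To make the ensemble-fusion idea concrete, the sketch below combines per-concept scores from independently trained audio and visual classifiers by a weighted average of their posteriors. The SVM choice, feature dimensionalities, and fusion weight are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of ensemble (late) fusion of audio and visual concept
# classifiers. The classifier, features, and fusion weight are assumptions.
import numpy as np
from sklearn.svm import SVC

def train_modality_model(features, labels):
    """Train a per-concept classifier on one modality's features."""
    model = SVC(kernel="rbf", probability=True)
    model.fit(features, labels)
    return model

def fuse_scores(visual_model, audio_model, visual_feats, audio_feats, w=0.5):
    """Weighted average of the two modalities' posterior concept scores."""
    p_visual = visual_model.predict_proba(visual_feats)[:, 1]
    p_audio = audio_model.predict_proba(audio_feats)[:, 1]
    return w * p_visual + (1.0 - w) * p_audio

# Example with random stand-in features (visual: 128-d, audio: 64-d).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
Xv, Xa = rng.normal(size=(200, 128)), rng.normal(size=(200, 64))
visual_model = train_modality_model(Xv, y)
audio_model = train_modality_model(Xa, y)
print(fuse_scores(visual_model, audio_model, Xv[:5], Xa[:5]))  # fused scores in [0, 1]
```

The fusion weight would typically be tuned per concept on a validation set, since the abstract notes that visual and audio models perform best for different sets of concepts.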
Semantic indexing of images and videos in the consumer domain has become a very important issue for both research and practical application. In this work we developed Kodak's consumer video benchmark data set, which includes (1) a significant number of videos from actual users, (2) a rich lexicon that accommodates consumers' needs, and (3) annotation of a subset of concepts over the entire video data set. To the best of our knowledge, this is the first systematic work in the consumer domain aimed at defining a large lexicon, constructing a large benchmark data set, and annotating videos in a rigorous fashion. This effort provides a sound foundation for developing and evaluating large-scale learning-based semantic indexing/annotation techniques in the consumer domain.
Video analytics has recently emerged as a promising technique for retail fraud detection and loss prevention. Efficient video analytics algorithms are highly desirable for a practical fraud detection system. In this paper, we present a real-time algorithm for recognizing a cashier's actions at the Point of Sale (POS), which can then be used to analyze cashier behavior and identify fraudulent incidents. The algorithm uses a set of simple but effective features derived from a global representation of motion energy called the Polar Motion Map (PMM). These features capture the motion patterns exhibited in a cashier's actions as a focused beam of motion energy, characterizing the actions as the extension and retraction of the cashier's arm with respect to a prespecified region. Our algorithm achieves accuracy comparable to one of the state-of-the-art event recognition techniques [1] while running significantly faster.
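As a rough illustration of the PMM idea, the sketch below accumulates dense optical-flow magnitude into radius-by-angle bins around a reference point (e.g., the scan area), yielding a compact per-frame-pair feature. The flow method, bin counts, and reference point are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a Polar Motion Map (PMM): per-pixel motion energy from
# dense optical flow is accumulated into (radius, angle) bins around a
# prespecified reference point. All parameters here are assumptions.
import cv2
import numpy as np

def polar_motion_map(prev_gray, curr_gray, center,
                     n_radial=8, n_angular=16, max_radius=200):
    """Accumulate optical-flow magnitude into a radius x angle histogram."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)              # per-pixel motion energy
    h, w = mag.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - center[0], ys - center[1]
    radius = np.sqrt(dx ** 2 + dy ** 2)
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    r_bin = np.clip((radius / max_radius * n_radial).astype(int), 0, n_radial - 1)
    a_bin = (angle / (2 * np.pi) * n_angular).astype(int) % n_angular
    pmm = np.zeros((n_radial, n_angular))
    np.add.at(pmm, (r_bin, a_bin), mag)             # sum energy per polar cell
    return pmm.ravel()                              # feature vector for one frame pair
```

Extension and retraction of the arm relative to the reference region then show up as energy moving outward or inward across the radial bins over successive frames, which a lightweight classifier can pick up.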