Content-based retrieval in news video databases has become an important task with the availability of large quantities of data in both public and proprietary archives. We describe a relevance feedback technique that captures the significance of different features at different spatial locations in an image. Spatial content is modeled by partitioning each image into nonoverlapping grid cells, and the contribution of each feature at each location is modeled by a weight defined per feature per cell. These weights are iteratively updated based on the user's feedback, given as positive and negative labels on the retrieval results. Given this labeling, the weight-updating scheme uses the ratio of the standard deviation of the distances between relevant and irrelevant images to the standard deviation of the distances among relevant images. The proposed technique is evaluated quantitatively and qualitatively on shots related to several sports from the news video collection of the TRECVID video retrieval evaluation, where the learned weights capture the relative contributions of different features and spatial locations.
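The standard-deviation-ratio weight update described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-cell feature layout, the use of absolute differences as distances, and the final normalization are all assumptions made for the sketch.

```python
import numpy as np

def update_weights(relevant, irrelevant, eps=1e-8):
    """Sketch of a std-dev-ratio weight update for grid-cell features.

    relevant:   (n_rel, n_cells, n_feat) features of user-labeled relevant images
    irrelevant: (n_irr, n_cells, n_feat) features of user-labeled irrelevant images
    Returns a (n_cells, n_feat) weight matrix: a feature/location whose
    distances separate relevant from irrelevant images widely, while staying
    consistent among relevant images, receives a high weight.
    """
    n_rel = relevant.shape[0]
    # Pairwise per-cell, per-feature distances among relevant images
    rr = np.abs(relevant[:, None] - relevant[None, :])   # (n_rel, n_rel, cells, feat)
    iu = np.triu_indices(n_rel, k=1)
    rr = rr[iu]                                          # distinct pairs only
    # Distances between each relevant and each irrelevant image
    ri = np.abs(relevant[:, None] - irrelevant[None, :])
    ri = ri.reshape(-1, *relevant.shape[1:])
    # Ratio of std devs, per feature per grid cell
    w = ri.std(axis=0) / (rr.std(axis=0) + eps)
    return w / w.sum()                                   # normalize (assumption)
```

The normalization keeps the weights comparable across feedback iterations; the small `eps` guards against relevant images that are identical in some cell.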
Abstract. We describe an annotation and retrieval framework that uses a semantic image representation by contextual modeling of images using occurrence probabilities of concepts and objects. First, images are segmented into regions using clustering of color features and line structures. Next, each image is modeled using the histogram of the types of its regions, and Bayesian classifiers are used to obtain the occurrence probabilities of concepts and objects using these histograms. Given the observation that a single class with the highest probability is not sufficient to model image content in an unconstrained data set with a large number of semantically overlapping classes, we use the concept/object probabilities as a new representation, and perform retrieval in the semantic space for further improvement of the categorization accuracy. Experiments on the TRECVID and Corel data sets show good performance.
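The histogram-of-region-types representation and the Bayesian concept probabilities can be sketched as below. This is a simplified illustration under stated assumptions: region types are given as precomputed cluster labels, and a multinomial naive-Bayes classifier stands in for the paper's Bayesian classifiers.

```python
import numpy as np

def region_histogram(region_labels, n_types):
    # Represent an image as the normalized histogram of its region types
    # (here, precomputed cluster ids per segmented region).
    h = np.bincount(region_labels, minlength=n_types).astype(float)
    return h / h.sum()

class NaiveBayesConcepts:
    """Minimal multinomial naive-Bayes sketch mapping histograms to
    concept/object occurrence probabilities."""

    def fit(self, hists, labels):
        self.classes = np.unique(labels)
        self.log_prior = np.log([np.mean(labels == c) for c in self.classes])
        means = np.array([hists[labels == c].mean(axis=0) for c in self.classes])
        self.log_lik = np.log(means + 1e-6)  # smoothing to avoid log(0)
        return self

    def predict_proba(self, hists):
        # The vector of class probabilities is itself the new semantic
        # representation used for retrieval in the semantic space.
        logp = hists @ self.log_lik.T + self.log_prior
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logp)
        return p / p.sum(axis=1, keepdims=True)
```

Retrieval in the semantic space then amounts to comparing these probability vectors (e.g., by Euclidean or KL distance) rather than the raw low-level histograms.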