2007 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2007.383517
Progressive Learning for Interactive Surveillance Scenes Retrieval

Abstract: This paper tackles the challenge of interactively retrieving visual scenes within surveillance sequences acquired with a fixed camera. Contrary to today's solutions, we assume that no a priori knowledge is available, so the system must progressively learn the target scenes through interactive labelling of a few frames by the user. The proposed method is based on very low-cost feature extraction and integrates relevance feedback, multiple-instance SVM classification and active learning. Each of these 3 st…
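The loop the abstract describes (the user labels a few frames, a classifier is refit, and active learning queries the next frame) can be sketched roughly as follows. This is only an illustrative stand-in: the paper's feature extraction and MI-SVM are replaced by a toy nearest-mean classifier, and all data and thresholds here are invented.

```python
def fit_nearest_mean(labelled):
    """Toy stand-in for the paper's MI-SVM: one mean per class."""
    pos = [f for f, y in labelled if y == 1]
    neg = [f for f, y in labelled if y == 0]
    return sum(pos) / len(pos), sum(neg) / len(neg)

def score(frame, model):
    """Signed relevance: positive when closer to the relevant class mean."""
    pos_mean, neg_mean = model
    return abs(frame - neg_mean) - abs(frame - pos_mean)

def most_uncertain(frames, model):
    """Active learning step: query the frame with the most ambiguous score."""
    return min(frames, key=lambda f: abs(score(f, model)))

# 1-D "features" for the frames; the simulated user deems frames > 5 relevant.
frames = [0.5, 1.0, 2.0, 4.8, 5.2, 8.0, 9.0]
labelled = [(1.0, 0), (9.0, 1)]                 # initial interactive labels
for _ in range(3):                              # three feedback rounds
    model = fit_nearest_mean(labelled)
    pool = [f for f in frames if f not in {x for x, _ in labelled}]
    query = most_uncertain(pool, model)
    labelled.append((query, int(query > 5)))    # the user answers the query

model = fit_nearest_mean(labelled)
retrieved = [f for f in frames if score(f, model) > 0]
```

Each round, the query is the unlabelled frame the current model is least sure about, so the user's effort concentrates on the decision boundary rather than on frames the model already classifies confidently.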

Cited by 9 publications (7 citation statements)
References 19 publications
“…To our knowledge, this scenario has never been studied in the literature. The few existing MIAL methods focus on bag classification [18][19][20] or select groups of instances in a scenario where there is only one query round [21].…”
Section: Introduction (mentioning)
confidence: 99%
“…This paper focuses on methods that are suitable for MIAL problems. Although several AL methods exist for single instance learning [17], only a handful of methods have been proposed to address MIAL problems [18][19][20][21]. Single instance active learning (SIAL) methods are not suitable for MIL because: 1) in MIL, instances are grouped in sets or bags, and 2) training instances have weak labels.…”
mentioning
confidence: 99%
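The bag/weak-label structure this excerpt refers to can be illustrated with a minimal sketch. All names and data here are invented for illustration; under the standard MIL assumption, a bag is positive iff at least one of its instances is positive, and the learner only ever sees the bag-level label.

```python
from dataclasses import dataclass

@dataclass
class Bag:
    instances: list   # instance features (here plain floats, e.g. per-frame)
    label: int        # weak label: 1 = positive bag, 0 = negative bag

def bag_label(instance_labels):
    """Standard MIL assumption: a bag is positive iff any instance is."""
    return int(any(instance_labels))

# Hidden per-instance labels (unknown to the learner) and the weak bag labels
# they induce. In surveillance retrieval, a bag could be a video clip and the
# instances its frames.
hidden = [[0, 0, 1], [0, 0, 0], [1, 1, 0]]
bags = [Bag(instances=[float(i) for i in range(len(h))], label=bag_label(h))
        for h in hidden]
print([b.label for b in bags])   # → [1, 0, 1]
```

This is why single-instance active learning does not transfer directly: the query unit is a bag, and a positive bag label does not say which of its instances is positive.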
“…Therefore, relevance feedback requires defining which part of a result is relevant (or irrelevant). The technique of Meessen et al 19 removed the dynamic aspect by extracting keyframes from videos and applying a relevance feedback technique to these keyframes. Chen et al 6 limited accident events to a single relation between vehicles' trajectories.…”
Section: Related Work in Surveillance Video Indexing and Retrieval (mentioning)
confidence: 99%
“…The multiple-instance learning algorithm utilizes the class label from bags for predicting the class label for unseen bags as well as instances. The incorporation of relevance feedback and multiple-instance learning perfectly matches the surveillance video retrieval scenario [10,11] since the former reduces the semantic gap by incorporating the user's high-level perception and gathering training samples in a progressive manner, while the latter guesses the event of interest by analyzing the collected training samples based on incomplete training label information. It should be pointed out that the proposed framework is different from [11] in that our framework not only incorporates RF and MIL into the retrieval, but also tightly integrates them with CHMM and takes advantage of CHMM to model various kinds of human interactions.…”
Section: Introduction (mentioning)
confidence: 95%
“…al [2] propose a framework that learns incident models by clustering trajectories hierarchically using spatial and temporal information. Very few other frameworks borrow the concept of relevance feedback (RF) from content-based image retrieval (CBIR) to learn incident models in a progressive fashion, including our previous work [10] and the work proposed by Meessen et al [11]. As a supervised learning technique, relevance feedback incorporates users' subjective perceptions into the learning process, which significantly increases retrieval accuracy.…”
Section: Introduction (mentioning)
confidence: 99%