Proceedings Ninth IEEE International Conference on Computer Vision 2003
DOI: 10.1109/iccv.2003.1238312

Segmenting foreground objects from a dynamic textured background via a robust Kalman filter

Abstract: The algorithm presented in this paper aims to segment the foreground objects in video (e.g., people) from a dynamic, textured background.

Cited by 172 publications (8 citation statements); citing works published between 2005 and 2024.

References 22 publications (25 reference statements).

Citation statements, ordered by relevance:
“…• We evaluate our work on six methods at both pixel and object levels. Furthermore, we test them on a set of sequences from different datasets, namely CDNet [14,15], LASIESTA [16] and J. Zhong and S. Sclaroff [17]. • Our results show a considerable increase in object-wise performance metrics, while also improving or maintaining the pixel-wise results.…”
Section: Introduction
confidence: 87%
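The citing work quoted above reports results at both the pixel and the object level. As context, the sketch below shows one common way to compute pixel-wise precision, recall, and F-measure from a predicted foreground mask and a ground-truth mask; the function name, the random placeholder masks, and the epsilon guard are illustrative assumptions rather than anything specified in the cited papers.

```python
import numpy as np

def pixel_wise_scores(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-9):
    """Pixel-wise precision/recall/F-measure for binary foreground masks.

    pred_mask, gt_mask: boolean arrays of the same shape (True = foreground).
    """
    tp = np.logical_and(pred_mask, gt_mask).sum()
    fp = np.logical_and(pred_mask, ~gt_mask).sum()
    fn = np.logical_and(~pred_mask, gt_mask).sum()

    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f_measure = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f_measure

# Example with random masks (placeholders for real segmentation output / ground truth).
rng = np.random.default_rng(0)
pred = rng.random((240, 320)) > 0.5
gt = rng.random((240, 320)) > 0.5
print(pixel_wise_scores(pred, gt))
```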
“…Results on sequences from J. Zhong and S. Sclaroff [17]. Lastly, we evaluated our method on two sequences from J. Zhong and S. Sclaroff [17] containing challenging dynamic backgrounds.…”
Section: Methods
confidence: 99%
“…In our implementation, YOLOv5s [32] is employed to detect the foreground and extract the object representation parameters. Different from the traditional foreground extraction method [33][34][35], the YOLOv5s based on deep learning can obtain semantics of the foreground including object class and location. The original YOLOv5s is trained on COCO datasets [36] and the objects of the apron surveillance video are not fully covered.…”
Section: Extracting Foreground Objects and Background
confidence: 99%
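For readers unfamiliar with the detector mentioned in the statement above, the following minimal sketch loads YOLOv5s through the public torch.hub entry point and reads back class labels and bounding boxes for a single frame. It assumes PyTorch is installed and the hub download is available, and 'frame.jpg' is a placeholder path; this is the generic COCO-pretrained model, not the cited work's fine-tuned apron-surveillance pipeline.

```python
import torch

# Load the pretrained YOLOv5s model from the public ultralytics/yolov5 hub repo
# (COCO weights; the cited work retrains for apron surveillance, which is not shown here).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# 'frame.jpg' is a placeholder path for one surveillance frame.
results = model('frame.jpg')

# Each row: x1, y1, x2, y2, confidence, class index -- the object class and location
# ("object representation parameters") referred to in the citation above.
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    print(model.names[int(cls)], conf, (x1, y1, x2, y2))
```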
“…Traditional background subtraction schemes such as AMF (Approximated Median Filter) [12], Kalman filter [13], and single Gaussian filter [14] reflect some irrelevant pixels on the foreground due to lack of correlation between the spatial and temporal constraints in their background maintenance schemes. Nevertheless, adjusting the learning rate to background pixels is another potential problem in background maintenance [15].…”
Section: Introduction
confidence: 99%
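To make the comparison in the last statement concrete, here is a minimal per-pixel, Kalman-style background-maintenance sketch in the spirit of the classic recursive schemes the citation contrasts (not the robust Kalman filter of the paper under discussion): the background estimate is updated with a gain that is smaller for pixels currently labelled foreground, and a fixed threshold on the frame difference produces the foreground mask. The gain and threshold values, and the synthetic frames, are illustrative assumptions.

```python
import numpy as np

def update_background(frame, background, fg_mask, gain_bg=0.1, gain_fg=0.01):
    """One recursive background update in the style of classic Kalman-filter schemes.

    frame, background: float grayscale arrays of the same shape.
    fg_mask: boolean mask from the previous segmentation step.
    Pixels labelled foreground adapt more slowly so moving objects
    are not absorbed into the background model.
    """
    gain = np.where(fg_mask, gain_fg, gain_bg)
    return background + gain * (frame - background)

def segment(frame, background, threshold=25.0):
    """Threshold the absolute difference between the frame and the background estimate."""
    return np.abs(frame - background) > threshold

# Toy usage with synthetic frames (placeholders for a real video stream).
rng = np.random.default_rng(0)
background = np.full((120, 160), 100.0)
fg_mask = np.zeros((120, 160), dtype=bool)
for _ in range(10):
    frame = 100.0 + rng.normal(0, 2, size=(120, 160))  # static scene plus sensor noise
    fg_mask = segment(frame, background)
    background = update_background(frame, background, fg_mask)
print(fg_mask.sum(), "pixels labelled foreground in the last frame")
```

The learning-rate issue raised in the citation corresponds to the choice of `gain_bg` and `gain_fg` here: too high and slow-moving foreground bleeds into the background, too low and genuine scene changes are absorbed only after a long delay.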