2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops
DOI: 10.1109/cvprw.2014.67
Flexible Background Subtraction with Self-Balanced Local Sensitivity

Cited by 97 publications (69 citation statements)
References 16 publications
“…The bio-inspired motion segmentation module runs with no configurable parameters. For evaluation purposes, several alternatives were used as the BS method: MOG2 refers to the masks output by MOG2, available in OpenCV, using default parameters; GMM [7], KNN [7], AMBER [11], CwisarDH [12], Spectral360 [13], SuBSENSE [14] and FTSG [15] refer to the computed masks made available on the CDnet site [9]. These masks were generated with parameters adjusted to maximize overall performance.…”
Section: Methods
confidence: 99%
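For context, the background-subtraction (BS) methods compared in this excerpt all share one underlying principle: maintain a per-pixel background model and label pixels that deviate from it as foreground. The following is a minimal sketch of that principle using a simple running-average model; it is illustrative only and not an implementation of MOG2, SuBSENSE, or any of the cited methods, whose models are far more sophisticated.

```python
import numpy as np

def subtract_background(frames, alpha=0.05, threshold=30):
    """Minimal running-average background subtraction (illustrative sketch).

    alpha: learning rate for the background-model update.
    threshold: per-pixel grayscale difference above which a pixel
    is labelled foreground.
    """
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames:
        diff = np.abs(frame.astype(np.float64) - background)
        mask = (diff > threshold).astype(np.uint8) * 255
        # Update the model only at background pixels, so foreground
        # objects are not absorbed into the model too quickly.
        update = mask == 0
        background[update] = ((1 - alpha) * background + alpha * frame)[update]
        masks.append(mask)
    return masks

# Synthetic sequence: static grey background, then a bright square appears.
frames = [np.full((60, 80), 64, dtype=np.uint8) for _ in range(10)]
for f in frames[5:]:
    f[20:40, 30:50] = 200

masks = subtract_background(frames)
print(int(masks[-1][30, 40]))  # → 255 (inside the square: foreground)
print(int(masks[-1][5, 5]))    # → 0 (static background)
```

Real methods such as those above replace the single running average with richer models (Gaussian mixtures, nonparametric samples, texture features) precisely because a fixed global threshold like this one fails under illumination change and dynamic backgrounds.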
“…This is mainly due to imperfect foreground/background segmentation and calibration errors. In simulations we produced perfect silhouettes as input for our algorithm, whereas in this real-world example we took the output of the foreground/background segmentation method SuBSENSE [St-Charles et al, 2014]. This method performs among the best in the CDnet 2014 Change Detection benchmark [Wang et al, 2014], but still shows significant errors in the produced silhouettes.…”
Section: Real World Data
confidence: 99%
“…Texture-based features, such as local binary patterns [4], are often used to adapt to illumination changes. In addition, some features and background models are combined to enhance robustness to background changes [5,6]. However, these heuristic approaches work well only for scenes that exhibit the background features the researchers designed for.…”
Section: Introduction
confidence: 99%
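The local binary pattern (LBP) features mentioned in this excerpt are popular for illumination-robust background modelling because the codes depend only on the ordering of neighbouring intensities, not their absolute values. A minimal sketch of the basic 8-neighbour LBP code follows; the cited work [4] uses a more elaborate variant, so this is only an illustration of the invariance property.

```python
import numpy as np

def lbp_3x3(image):
    """Basic 8-neighbour local binary pattern code per interior pixel.

    Each pixel is compared with its 8 neighbours; the comparison bits
    form a 0-255 code. Because only intensity *order* matters, the
    codes are invariant to any monotonic illumination change.
    """
    img = image.astype(np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:h - 1, 1:w - 1]
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

# Illumination invariance: doubling all intensities (a monotonic change)
# leaves every LBP code unchanged.
rng = np.random.default_rng(0)
patch = rng.integers(0, 128, size=(8, 8)).astype(np.uint8)
bright = (patch.astype(np.int32) * 2).astype(np.uint8)
print(bool(np.array_equal(lbp_3x3(patch), lbp_3x3(bright))))  # → True
```

This invariance is exactly why texture codes are combined with colour-based background models in the works the excerpt cites: colour handles uniform regions where LBP codes are noisy, and LBP handles lighting changes where colour models fail.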