Co-occurrence probability-based pixel pairs background model for robust object detection in dynamic scenes
2015 · DOI: 10.1016/j.patcog.2014.10.020

Cited by 61 publications (41 citation statements)
References 22 publications
“…[21] modeled appearance changes by incrementally learning a tensor subspace representation, adaptively updating the sample mean and an eigenbasis for each unfolding matrix. In our previous research, we focused on co-occurrence pixel-pair background models [22,23,24,25]. For each target pixel, these models select a set of supporting pixels whose intensity difference to the target remains stable over the training frames, with no restriction on the supporting pixels' locations.…”
Section: Related Work
confidence: 99%
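The pixel-pair idea described in this excerpt, choosing supporting pixels whose intensity difference to a target pixel stays stable across the training frames, can be sketched roughly as below. This is a minimal illustration under assumed inputs (a grayscale frame stack, a variance threshold `var_thresh`, and a supporter count `k`), not the authors' CP3 implementation.

```python
import numpy as np

def select_supporting_pixels(frames, target_rc, k=8, var_thresh=25.0):
    """Pick up to k supporting pixels whose intensity difference to the
    target pixel has the lowest temporal variance over the training frames.

    frames    : (T, H, W) grayscale training frames
    target_rc : (row, col) of the target pixel
    Returns a list of ((row, col), mean_difference) pairs.
    Illustrative sketch only; not the original algorithm.
    """
    T, H, W = frames.shape
    r, c = target_rc
    target_series = frames[:, r, c].astype(np.float64)                # (T,)
    diffs = frames.astype(np.float64) - target_series[:, None, None]  # (T, H, W)

    var_map = diffs.var(axis=0)    # temporal variance of each pixel's difference
    mean_map = diffs.mean(axis=0)  # expected difference, usable at detection time
    var_map[r, c] = np.inf         # exclude the target pixel itself

    # Take the most stable candidates, but only while they pass the threshold.
    supporters = []
    for idx in np.argsort(var_map, axis=None):
        rr, cc = np.unravel_index(idx, (H, W))
        if var_map[rr, cc] > var_thresh or len(supporters) >= k:
            break
        supporters.append(((rr, cc), mean_map[rr, cc]))
    return supporters
```

At detection time, a target pixel would be flagged as foreground when its observed differences to the supporting pixels deviate from the stored means, which is the intuition behind the co-occurrence probability in the paper's title.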
“…The results are compared with six state-of-the-art methods, including MSSTBM [27], GMM-Zivkovic [56], CP3-Online [25], GMM-Stauffer [39], KDE-Elgammal [11] and RMoG [40], using the original authors' implementations. Foreground detection is compared using the average F-measure across all the video sequences within each category.…”
Section: Evaluation of Deep Context Prediction (DCP) for Foreground Detection
confidence: 99%
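The average F-measure referred to here is the harmonic mean of precision and recall of the detected foreground masks, averaged over the frames of a sequence or category. A small sketch, assuming boolean per-frame masks rather than any particular benchmark's file format:

```python
import numpy as np

def f_measure(pred_mask, gt_mask, eps=1e-9):
    """F-measure (F1) of a boolean foreground mask against ground truth."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    fp = np.logical_and(pred_mask, ~gt_mask).sum()
    fn = np.logical_and(~pred_mask, gt_mask).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)

def average_f_measure(pred_masks, gt_masks):
    """Mean per-frame F-measure over a sequence (or over a whole category)."""
    return float(np.mean([f_measure(p, g) for p, g in zip(pred_masks, gt_masks)]))
```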
“…In addition to low-level image features such as grayscale, color intensity, and edge magnitudes [1,2,5-8], specific feature descriptors can be designed for enhanced performance [3,4]. Background modelling techniques in the literature are loosely categorized into parametric [10-13] and non-parametric [14-20] techniques. A detailed classification of background modelling techniques can be found in [9].…”
Section: Related Work
confidence: 99%