2018 IEEE International Conference on Multimedia and Expo (ICME) 2018
DOI: 10.1109/icme.2018.8486556
Deep Background Subtraction with Guided Learning

Cited by 15 publications (17 citation statements)
References 12 publications
“…Liang et al [111] developed a multi-scale CNN-based background subtraction method by learning a specific CNN model for each video to ensure accuracy, while avoiding manual labeling. First, Liang et al [111] applied the SubSENSE algorithm to get an initial foreground mask. Then, an adaptive strategy is applied to select reliable pixels to guide the CNN training, because the outputs of SubSENSE cannot be used directly as ground truth due to the lack of accuracy of the results.…”
Section: Multi-scale and Cascaded CNNs
confidence: 99%
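The guided-learning step described above — trusting the SubSENSE output only where it is reliable — can be read as confidence-gated label selection. A minimal sketch, assuming a per-pixel foreground-probability map and hypothetical thresholds `lo`/`hi` (the paper's actual selection strategy is adaptive and more involved than this):

```python
import numpy as np

def select_reliable_pixels(fg_prob, lo=0.2, hi=0.8):
    """Split pixels into reliable background, reliable foreground,
    and uncertain pixels that are excluded from CNN training.

    fg_prob : float array in [0, 1], e.g. a smoothed SubSENSE mask.
    lo, hi  : illustrative confidence thresholds (assumptions).
    Returns a label map: 0 = background, 1 = foreground, -1 = ignore.
    """
    labels = np.full(fg_prob.shape, -1, dtype=np.int8)  # ignore by default
    labels[fg_prob <= lo] = 0   # confidently background
    labels[fg_prob >= hi] = 1   # confidently foreground
    return labels

# toy example: a 2x3 probability map
p = np.array([[0.05, 0.5, 0.95],
              [0.10, 0.7, 0.90]])
lab = select_reliable_pixels(p)
```

Only the 0/1 pixels would contribute to the training loss; the `-1` pixels are masked out, so the CNN never fits the unreliable parts of the SubSENSE mask.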
“…Recently, both CNN and foreground attentive neural network (FANN) models have been developed to perform foreground segmentation [62], [63]. In addition to conventional Gaussian mixture model (GMM)-based background subtraction, recent explorations have also shown that CNN models could be effectively used for the same purpose [64], [65]. To address these separated foreground objects and background attributes, Zhang et al [66] introduced a new background mode to more compactly represent background information with better R-D efficiency.…”
Section: A Saliency-based Video Preprocessing
confidence: 99%
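The conventional GMM-based background subtraction mentioned above maintains per-pixel intensity statistics over time and flags pixels that deviate from them. A minimal sketch of the idea, simplified to a single running Gaussian per pixel rather than a full mixture; `alpha`, `k`, and the initial variance are illustrative values, not from the cited works:

```python
import numpy as np

class RunningGaussianBG:
    """Per-pixel running-Gaussian background model (single-mode
    simplification of the GMM family)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        f = first_frame.astype(np.float64)
        self.mean = f.copy()                 # per-pixel background mean
        self.var = np.full_like(f, 50.0)     # initial variance (assumption)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        f = frame.astype(np.float64)
        d2 = (f - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var   # Mahalanobis-style test
        bg = ~fg
        # update the model only where the pixel matched the background
        self.mean[bg] += self.alpha * (f - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg.astype(np.uint8)

# usage: a static scene, then one pixel changes abruptly
model = RunningGaussianBG(np.zeros((4, 4)))
mask1 = model.apply(np.zeros((4, 4)))        # nothing moves
frame = np.zeros((4, 4)); frame[1, 1] = 200  # sudden bright pixel
mask2 = model.apply(frame)                   # that pixel is flagged
```

A full GMM keeps several such Gaussians per pixel with mixing weights, which lets it absorb multimodal backgrounds such as swaying trees or flickering water.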
“…In addition, Babaee et al [52] proposed a robust model in which a network is used to subtract the background from the current frame and only 5% of the labeled masks are utilized for training. Liang et al [53] utilized the foreground mask generated by the SubSENSE algorithm [15] rather than manual labeling for training, and Zeng et al [54] used a convolutional neural network to combine several background subtraction algorithms together. In our work, since the distributions of temporal pixels are captured from every spatial pixel, a large number of training instances can be captured with a limited number of ground truth frames.…”
Section: B Algorithms Based On Deep Learning
confidence: 99%
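The per-pixel temporal-distribution idea above can be sketched as follows — one hypothetical reading in which each spatial pixel contributes a histogram of its temporal differences as a training instance. The function name, bin count, and difference-against-reference formulation are assumptions for illustration, not the authors' code:

```python
import numpy as np

def temporal_histograms(frames, ref, bins=8, vmax=255):
    """Build one training instance per spatial pixel: a histogram of
    that pixel's temporal differences against a reference frame.

    frames : (T, H, W) uint8 stack of video frames
    ref    : (H, W) reference (e.g. background) frame
    Returns an (H*W, bins) array -- one instance per pixel, so even a
    handful of ground-truth frames yields H*W labeled examples.
    """
    diffs = frames.astype(np.int16) - ref.astype(np.int16)  # (T, H, W)
    T, H, W = diffs.shape
    flat = diffs.reshape(T, -1)                             # (T, H*W)
    edges = np.linspace(-vmax, vmax, bins + 1)
    hists = np.empty((H * W, bins))
    for i in range(H * W):
        hists[i], _ = np.histogram(flat[:, i], bins=edges)
    return hists

# usage: 5 identical dark frames -> every pixel's mass sits in one bin
frames = np.zeros((5, 2, 2), dtype=np.uint8)
ref = np.zeros((2, 2), dtype=np.uint8)
h = temporal_histograms(frames, ref)
```

This is why a limited number of ground-truth frames can still supply a large training set: the instances are per-pixel distributions, not whole frames.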
“…However, the comparison between FgSegNet and the proposed approach is unfair, since we only use 20 ground truth frames for training and the number of parameters in our network is much less than their network. In addition, there are a few semi-supervised algorithms (e.g., GuidedBS [53], BSUV-Net [90] and GraphMOS [56]) which did not utilize any ground truth frames from testing videos for training. However, these methods assumed a large number of binary masks from another video for training, and used several pretrained networks.…”
Section: Evaluation Of Arithmetic Distribution For Background Subtrac...
confidence: 99%