2020
DOI: 10.1049/iet-ipr.2019.0769

Deep detector classifier (DeepDC) for moving objects segmentation and classification in video surveillance

Cited by 33 publications (20 citation statements)
References 54 publications (77 reference statements)
“…Precision, F1-score, accuracy, recall, sensitivity, equal error rate (EER), specificity, and receiver operating curve (ROC) are also frequently used as metrics in this area. San Francisco cabspotting: [115] SBHAR: [46] SD-OCT: [17] Sentence polarity: [68] ShanghaiTech: [41,75] SIXray: [91] Spectralis OCT: [26] SWaT system: [93] SVHN: [50,62] TalkingData AdTracking: [113,67] Tennessee eastman: [16,28] Texas coast: [27] Thyroid: [132] UBA: [96] UCI: [38,126] UCSD: [21,22,37,39,41,54,64,65,122,74,75,79,90,107,109] Udacity: [56,61] UMN: [39,43,64,65,74,90,107] UNSW-NB15: [110] VIRAT: [81] WADI test-bed: [93] WOA13 month...…”
Section: Rq4: Which Type Of Data Instance and Datasets Are Most Commo...mentioning
confidence: 99%
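The metrics named in the excerpt above all derive from confusion-matrix counts. As a generic illustration (not the evaluation code of any cited paper), a minimal sketch:

```python
# Minimal sketch: computing precision, recall, and F1-score from
# confusion-matrix counts. Generic illustration only, not tied to
# any specific paper's evaluation pipeline.

def precision_recall_f1(tp: int, fp: int, fn: int):
    """Return (precision, recall, F1) given true positives,
    false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: 8 correct detections, 2 false alarms, 2 misses.
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
print(p, r, f1)  # all 0.8 for this balanced example
```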
“…As can be seen, up to 25 joints are described for a skeleton, where a joint is defined by its (x, y) coordinates inside the RGB frame. The identified joints correspond to: nose (0), neck (1), right/left shoulder, elbow, wrist (2–7), hips middle point (8), and right/left hip, knee, ankle, eye, ear, big toe, small toes, and heel (9–24). While joint positions alone do not provide useful information, due to their strict correlation to the video they are extracted from, they can still be used to generate a detailed description of body movements, via the feature extraction module.…”
Section: Skeleton Joint Generationmentioning
confidence: 99%
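The 25-joint layout described in this excerpt matches the OpenPose BODY_25 keypoint convention. A minimal sketch of how such a skeleton might be represented, with a hypothetical neck-relative normalization standing in for the feature-extraction step (it is an illustrative assumption, not the cited paper's method):

```python
# Sketch of a BODY_25-style skeleton, following the joint ordering the
# excerpt describes (OpenPose BODY_25 convention). The normalization below
# is a hypothetical feature-extraction example, not the paper's own module.

BODY_25_JOINTS = {
    0: "nose", 1: "neck",
    2: "right_shoulder", 3: "right_elbow", 4: "right_wrist",
    5: "left_shoulder", 6: "left_elbow", 7: "left_wrist",
    8: "mid_hip",
    9: "right_hip", 10: "right_knee", 11: "right_ankle",
    12: "left_hip", 13: "left_knee", 14: "left_ankle",
    15: "right_eye", 16: "left_eye", 17: "right_ear", 18: "left_ear",
    19: "left_big_toe", 20: "left_small_toe", 21: "left_heel",
    22: "right_big_toe", 23: "right_small_toe", 24: "right_heel",
}

def normalize_skeleton(joints):
    """Express each (x, y) joint relative to the neck (index 1) so that
    features no longer depend on where the person appears in the frame."""
    nx, ny = joints[1]
    return [(x - nx, y - ny) for (x, y) in joints]
```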
“…For example, algorithms for separating background from foreground, e.g. References [20, 21, 22], are often used as a pre-processing stage, both to detect the objects of interest in the scene and to maintain a reference model of the background and its variations over time. Another example is tracking algorithms, e.g. References [10, 23], which are used to analyze moving objects. Among these collaborative algorithms, person re-identification ones, e.g. References [24, 25], play a key role, especially in security, protection, and prevention areas.…”
Section: Introductionmentioning
confidence: 99%
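The background/foreground separation this excerpt mentions can be illustrated with a running-average background model. This is a minimal generic sketch of that family of pre-processing algorithms, operating on flat pixel lists; it is not the specific method of any paper cited above:

```python
# Minimal sketch of background subtraction via an exponential running
# average. Pixels are plain floats in a flat list for simplicity; real
# systems operate on image arrays. Illustrative only.

def update_background(background, frame, alpha=0.05):
    """Blend the new frame into the background model so the model
    tracks slow scene changes (higher alpha adapts faster)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=30.0):
    """Mark pixels whose deviation from the background model exceeds
    a threshold as foreground (moving object candidates)."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

bg = [0.0, 0.0, 0.0, 0.0]
frame = [100.0, 0.0, 0.0, 0.0]       # one pixel changed sharply
print(foreground_mask(bg, frame))    # [True, False, False, False]
```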
“…Ammar [13] developed the Deep Detector Classifier (DeepDC) for moving-object segmentation and classification in video surveillance. The developed DeepDC employed and validated Deep Sphere, identified anomalous cases in spatial and temporal context, and performed foreground object segmentation.…”
Section: Literature Reviewmentioning
confidence: 99%
“…Object detection is considered a challenging step in video analytics, and it is an important process for analyzing, tracking, and matching the objects present in videos [6-8]. In this study, the DOS scheme is used to segment the object in video, and this scheme does not assume any prior knowledge about the location and number of objects.…”
Section: Introductionmentioning
confidence: 99%