2019
DOI: 10.4218/etrij.2018-0520

Vision‐based garbage dumping action detection for real‐world surveillance platform

Abstract: In this paper, we propose a new framework for detecting the unauthorized dumping of garbage in real‐world surveillance camera footage. Although several action/behavior recognition methods have been investigated, these studies are hardly applicable to real‐world scenarios because they are mainly focused on well‐refined datasets. Because dumping actions in the real world take a variety of forms, building a new method to disclose the actions instead of exploiting previous approaches is a better strategy. We detected …

Cited by 47 publications (20 citation statements)
References 44 publications (61 reference statements)
“…The experimental results verify the superior performance of the proposed method over existing state-of-the-art methods [20], [21]. Further, we qualitatively demonstrate the improved results of the proposed method on several datasets [25]–[27] that contain images of people in reclining postures because of the angle of the pre-installed camera.…”
Section: Introduction (supporting)
confidence: 67%
“…Figures 14 and 15 qualitatively visualize the human pose estimation results of the baseline method (HRNet-W32 trained on the COCO dataset) and the proposed method on the surveillance action dataset [25], [26]. Figure 14 presents the results of human pose estimation in the case in which the image is rotated according to the evaluation protocol on the garbage dumping action dataset [25], [33]: the columns record the results for the image rotated by 60, 120, 180, and 270 degrees. As depicted in Figure 14, rotational robustness is improved on the actual surveillance dataset as well as on the MPII and COCO datasets used in the quantitative evaluation.…”
Section: Qualitative Results (mentioning)
confidence: 99%
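
The rotation protocol quoted above can be illustrated with a short preprocessing sketch. This is not the cited authors' evaluation code; it only assumes an OpenCV-style rotation of each test image by the listed angles before pose estimation is run, and the function names are illustrative.

```python
# Minimal sketch of the rotation step in the evaluation protocol quoted above.
# Assumption: each test image is rotated by fixed angles (60, 120, 180, 270 degrees)
# before pose estimation; illustrative only, not the cited implementation.
import cv2
import numpy as np

EVAL_ANGLES = [60, 120, 180, 270]  # angles named in the quoted protocol

def rotate_image(image: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate an image about its center, keeping the original frame size."""
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))

def rotated_variants(image: np.ndarray):
    """Yield (angle, rotated image) pairs used to probe rotational robustness."""
    for angle in EVAL_ANGLES:
        yield angle, rotate_image(image, angle)
```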
“…Human–object interaction (HOI) detection aims to detect human and object locations and classify their interactions at the instance level (e.g., a person riding a bike, carrying a backpack, or throwing a frisbee), which can be formulated as detecting a triplet (human, action, object). This task is beneficial to many applications that require a deeper understanding of semantic scenes, such as video surveillance and visual question answering.…”
Section: Introduction (mentioning)
confidence: 99%
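
The triplet formulation described in this citation can be sketched as a simple data structure. The class and field names below are assumptions made for illustration, not an API from the cited work.

```python
# Illustrative sketch of the (human, action, object) triplet formulation of HOI
# detection mentioned above; names and fields are hypothetical, not from the cited papers.
from dataclasses import dataclass
from typing import Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image coordinates

@dataclass
class HOITriplet:
    human_box: Box   # detected person location
    object_box: Box  # detected object location
    action: str      # interaction label, e.g. "riding", "carrying", "throwing"
    score: float     # detection confidence

# Example triplet for "a person riding a bike", as in the quoted description.
example = HOITriplet(human_box=(10.0, 20.0, 120.0, 300.0),
                     object_box=(15.0, 150.0, 140.0, 320.0),
                     action="riding",
                     score=0.87)
```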