2019
DOI: 10.1108/el-09-2018-0191

Accelerate proposal generation in R-CNN methods for fast pedestrian extraction

Abstract: Purpose The purpose of this study is to develop a novel region-based convolutional neural network (R-CNN) approach that is more efficient than, and at least as accurate as, existing R-CNN methods. The proposed method, R2-CNN, thus provides a more powerful tool for pedestrian extraction in person re-identification, which involves a huge number of images from which pedestrians need to be extracted efficiently to meet real-time requirements. Design/methodology/approach The proposed R2-CNN is tested on tw…

Cited by 4 publications (15 citation statements)
References 25 publications
“…As shown in Table 12, it is the most advanced performance for object detection using enhanced YOLOv5. When compared to the current methods, YOLOv4 [52], SAF R-CNN [61], MNPrioriBoxes-Yolo [35], and AEMS-RPN [33], the proposed methodology performs better. Second, the proposed model's miss rate for the ETH dataset is 32.63% and mAP of 78.3% compared with 33.87% of [35] and 36.46% of [33].…”
Section: Caltech (mentioning) | Confidence: 89%
“…Fig. 12(c) compares the enhanced YOLOv5's pedestrian detection performance to that of the following methods: YOLOv4 [52], SAF R-CNN [61], MNPrioriBoxes-Yolo [35], and AEMS-RPN [33]. On the KITTI dataset, the proposed method yields encouraging results with 79%, 68%, and 64% and mAP of 78.34%.…”
Section: KITTI (mentioning) | Confidence: 97%
“…R-CNN has been proposed by Ross Girshick et al [28] where a selective search algorithm is used for finding the Region of Interest (RoI) in an image, called region proposals. Selective search algorithm extract 2000 regions from the image.…”
Mentioning | Confidence: 99%
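The last citation statement describes the classic R-CNN proposal step: selective search generates roughly 2000 candidate regions per image, which the network then classifies. Below is a minimal sketch of that proposal step using OpenCV's contrib implementation of selective search (opencv-contrib-python); the image path, the 2000-proposal cap, and the fast-mode setting are illustrative assumptions, not the accelerated proposal scheme proposed in R2-CNN itself.

```python
# Sketch of selective-search region proposals as used in the original R-CNN.
# Requires opencv-contrib-python for the cv2.ximgproc.segmentation module.
import cv2

def generate_proposals(image_path, max_proposals=2000):
    """Return up to max_proposals (x, y, w, h) region candidates for an image."""
    image = cv2.imread(image_path)
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image)
    ss.switchToSelectiveSearchFast()   # fast mode trades some recall for speed
    rects = ss.process()               # candidate boxes, coarse to fine
    return rects[:max_proposals]       # R-CNN-style cap at ~2000 regions

if __name__ == "__main__":
    boxes = generate_proposals("pedestrian.jpg")  # hypothetical input image
    print(f"kept {len(boxes)} region proposals")
```

In a full R-CNN pipeline, each of these proposals would then be warped and passed to a CNN classifier; the cost of scoring thousands of regions per image is exactly the bottleneck that faster proposal-generation schemes such as the one in the cited paper aim to reduce.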