2021
DOI: 10.1109/access.2021.3108398

Research on Object Detection Method Based on FF-YOLO for Complex Scenes

Abstract: YOLO v3 has poor accuracy in target location recognition, and its detection performance needs improvement in complex scenes with dense target distribution and large size differences. To solve this problem, an improved multi-scale target detection algorithm based on feature fusion (FF-YOLO) is proposed in this paper. Firstly, the residual structure in the Darknet53 backbone of YOLO v3 is replaced by the optimized dense-connection network FCN-DenseNet, which extracts features effectively through feature reuse, and …
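The core idea the abstract names, dense connectivity with feature reuse, can be illustrated with a minimal sketch. This is not the paper's FCN-DenseNet (whose exact layer structure is truncated above); it only shows the connectivity pattern in which each layer receives the channel-wise concatenation of all earlier feature maps. The `conv_like` helper is a hypothetical stand-in for a real convolutional layer.

```python
import numpy as np

def conv_like(x, out_channels, rng):
    # Stand-in for a conv layer: a random linear map over channels.
    # (Illustrative only; a real FCN-DenseNet layer is conv + BN + activation.)
    w = rng.standard_normal((out_channels, x.shape[0]))
    return np.tensordot(w, x, axes=([1], [0]))

def dense_block(x, num_layers, growth_rate, rng):
    """Dense connectivity: every layer sees the concatenation of all
    earlier feature maps, so features are reused rather than recomputed."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)   # channel-wise concat
        out = conv_like(inp, growth_rate, rng)
        features.append(out)
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8, 8))              # (channels, H, W)
y = dense_block(x, num_layers=4, growth_rate=12, rng=rng)
print(y.shape)                                   # (64, 8, 8): 16 + 4 * 12 channels
```

The output channel count grows linearly (`in_channels + num_layers * growth_rate`), which is why dense blocks reuse features cheaply compared with stacking independent layers.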

Cited by 13 publications (6 citation statements) · References 25 publications
“…In order to verify the effectiveness of the deconvolution cascade model (hereinafter referred to as the DC module) and GA-RPN in this UAV aerial target detection algorithm, a multi-scale comparison scheme was designed based on the VisDrone target detection data, as shown in Table 3.2 and Figure 3.1. The test experiment was conducted on the VisDrone test-dev dataset (1610 UAV aerial images, covering the various situations in the VisDrone dataset), with Faster R-CNN (ResNet50+RPN) set as the baseline comparison network, and quantitative analysis was conducted using the evaluation indicators mean average precision (mAP) and average precision (AP, including APs at IoUs of 0.50 and 0.75, recorded as AP50 and AP75) [1].…”
Section: Algorithm Feasibility Verification Analysis
confidence: 99%
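The AP50 and AP75 indicators mentioned in the excerpt both rest on intersection-over-union (IoU) between a predicted box and a ground-truth box; only the matching threshold differs. A minimal sketch, assuming axis-aligned boxes in `(x1, y1, x2, y2)` form with example coordinates chosen for illustration:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive at AP50 if IoU >= 0.50,
# and under the stricter AP75 criterion if IoU >= 0.75.
pred = (10, 10, 50, 50)
gt = (12, 12, 48, 48)
score = iou(pred, gt)
print(score, score >= 0.50, score >= 0.75)   # 0.81 True True
```

AP is then the area under the precision-recall curve at a fixed IoU threshold, and mAP averages AP over classes (and, in COCO-style evaluation, over thresholds).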
“…The BCEWithLogits loss function is utilized in YOLOv5 for both the objectness loss and the classification loss, and CIoU is used as the bounding-box loss. The bounding-box loss is primarily used to locate the predicted target in the image [31]. The traditional IoU loss [32] only provides a useful signal when the bounding boxes intersect.…”
Section: E. Improved Loss Function
confidence: 99%
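The CIoU loss the excerpt refers to addresses exactly the non-intersecting case: besides overlap, it penalizes center distance and aspect-ratio mismatch, so a gradient exists even when IoU is zero. A self-contained sketch of the standard CIoU formulation (not code from the cited paper):

```python
import math

def ciou_loss(pred, gt):
    """CIoU loss for boxes (x1, y1, x2, y2):
    1 - IoU + (center distance term) + (aspect-ratio term)."""
    # plain IoU
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    w1, h1 = pred[2] - pred[0], pred[3] - pred[1]
    w2, h2 = gt[2] - gt[0], gt[3] - gt[1]
    iou = inter / (w1 * h1 + w2 * h2 - inter)
    # squared center distance over squared diagonal of the enclosing box
    rho2 = ((pred[0] + pred[2]) - (gt[0] + gt[2])) ** 2 / 4 \
         + ((pred[1] + pred[3]) - (gt[1] + gt[3])) ** 2 / 4
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan(w2 / h2) - math.atan(w1 / h1)) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return 1 - (iou - rho2 / c2 - alpha * v)
```

For identical boxes the loss is 0; for disjoint boxes it exceeds 1 because the distance penalty still applies where plain IoU loss would saturate.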
“…Tsung et al. [14], [15] introduce the Common Objects in Context (COCO) dataset, a widely used benchmark for object detection and image segmentation. It is essential for evaluating and benchmarking object recognition models.…”
Section: Literature Survey
confidence: 99%