2023
DOI: 10.3390/app13031746
A Lightweight YOLOv5 Optimization of Coordinate Attention

Abstract: As machine learning technologies evolve, there is growing interest in adding vision capabilities to devices across the IoT in order to enable a wider range of artificial intelligence applications. However, for most mobile devices, computing power and storage space are constrained by factors such as cost and the tight supply of relevant chips, making it impossible to deploy complex network models effectively on small processors with limited resources or to perform efficient real-time detection. In this paper, YOLOv5 is studied…
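The title and abstract center on coordinate attention (CA) as the lightweight addition to YOLOv5. As rough orientation only (the abstract is truncated above and the paper's exact code is not shown here), a minimal PyTorch sketch of a coordinate attention block in the style of Hou et al. (2021) might look like the following; the reduction ratio, activation, and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HSwish(nn.Module):
    """h-swish activation, as used in the original CA paper (an assumption here)."""
    def forward(self, x):
        return x * F.relu6(x + 3) / 6

class CoordAtt(nn.Module):
    """Coordinate attention sketch: factorizes 2-D global pooling into two 1-D
    pools (along H and along W), encodes them jointly, then produces a
    per-direction attention map for each axis."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)  # bottleneck width; value is illustrative
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = HSwish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w  # broadcast along W and H respectively

x = torch.randn(1, 64, 80, 80)
print(CoordAtt(64)(x).shape)  # torch.Size([1, 64, 80, 80])
```

Because the attention maps are produced per axis rather than per pixel, the block adds little compute, which is what makes it attractive for the resource-limited deployments the abstract describes.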

Cited by 11 publications (7 citation statements) · References 29 publications
“…It is one of the advanced papers on road defect detection in recent years. The experiment was carried out under the same experimental conditions as the YOLOv5 model improved by Guo [50] and the YOLOv7 model improved by Pham V. [51]. Compared to the aforementioned algorithms, YOLOv8-PD achieves the best performance in terms of mAP50, mAP50:95, and F1-Score.…”
Section: Results (mentioning, confidence: 99%)
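For reference, the metrics named in that statement are the standard detection metrics; a minimal sketch of the F1 computation, with the mAP conventions noted in comments (general background, not code from the cited papers):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# mAP50 averages per-class AP at an IoU threshold of 0.5;
# mAP50:95 averages AP over IoU thresholds 0.50, 0.55, ..., 0.95 (the COCO convention).
print(f1_score(0.9, 0.8))  # 0.847...
```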
“…The experiment also utilized other attention mechanism modules, such as EMA attention [44] and CA attention [45]…”
Section: Comparison Experiments on Attention Mechanisms (mentioning, confidence: 99%)
“…By placing the attention module before the SPPF, the refined feature representation from the attention module can be efficiently combined with the spatially pooled features. This strategic placement facilitates enhanced network performance by enabling selective focus on the most relevant features [37]. Furthermore, including ECA-Net after every C3 block in the downsampling process can improve the network's performance by augmenting its capacity to capture long-range interdependencies among distinct spatial locations within the feature map [38].…”
Section: Enhancing Object Detection with Attention Mechanism (mentioning, confidence: 99%)
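The placement strategy quoted above (an attention block just before SPPF, and ECA-Net after each C3 block) can be illustrated with a small PyTorch sketch. ECA-Net's core operation is a 1-D convolution over the globally pooled channel descriptor; the kernel-size heuristic below follows the ECA paper's description, but the module name and wiring are illustrative assumptions, not the cited implementation.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention sketch (Wang et al., 2020): global average
    pooling followed by a 1-D convolution across channels, with no channel
    reduction, so the block stays extremely lightweight."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        t = int(abs((math.log2(channels) + b) / gamma))  # adaptive kernel size
        k = t if t % 2 else t + 1                        # force an odd kernel
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x.mean(dim=(2, 3))                    # (B, C): global average pool
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # 1-D conv across channels
        return x * torch.sigmoid(y)[:, :, None, None]

x = torch.randn(2, 128, 40, 40)
print(ECA(128)(x).shape)  # torch.Size([2, 128, 40, 40])
```

In a YOLOv5-style backbone, the quoted scheme would wrap each C3 output in such a block (C3 → ECA) and insert the attention module immediately before SPPF, so that the multi-scale pooling operates on an already reweighted feature map.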