2021
DOI: 10.1155/2021/3163470
Learning Deformable Network for 3D Object Detection on Point Clouds

Abstract: 3D object detection from point cloud data in unmanned-driving scenes has long been a research hotspot in autonomous-driving perception technology. With the development and maturation of deep neural network technology, methods that use neural networks to detect three-dimensional object targets have begun to show great advantages. Experimental results show that the mismatch between anchors and training samples affects detection accuracy, but this problem has not been well solved. The contributions of this pape…

Cited by 3 publications (10 citation statements)
References 30 publications
“…Reference [27] proposed a target detection method based on improved GUPNET. The experimental results show that the average accuracy of the proposed method is the highest: 0.919 in simple-difficulty scenes, 0.897 in medium-difficulty scenes, and 0.839 in difficult scenes. Reference [27] achieves an average accuracy of 0.900 in simple-difficulty scenes, 0.856 in medium-difficulty scenes, and 0.802 in difficult scenes, while reference [26] has the lowest average accuracy, at only 0.896 in simple-difficulty scenes, 0.879 in medium-difficulty scenes, and 0.783 in difficult scenes. This is because the proposed method divides the process of power-operation violation recognition into two stages.…”
Section: Performance Comparison With Other Methods
confidence: 98%
“…In order to prove the performance of the proposed method, it is compared with the methods in reference [26] and reference [27] under the same experimental conditions. The comparison results are shown in Table 2.…”
Section: Performance Comparison With Other Methods
confidence: 99%