2019 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv.2019.00019

IoU Loss for 2D/3D Object Detection

Abstract: In 2D/3D object detection, Intersection-over-Union (IoU) is widely employed as the metric for evaluating the performance of different detectors at testing time. During training, however, a common distance loss (e.g., L1 or L2) is typically adopted as the loss function to minimize the discrepancy between the predicted and ground-truth Bounding Box (Bbox). To eliminate this gap between training and testing, the IoU loss has been introduced for 2D object detection in […]
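
To make the training/testing mismatch in the abstract concrete, here is a minimal Python sketch (not code from the paper; box coordinates are illustrative): two predicted boxes can sit at the same L1 distance from the ground truth yet achieve different IoU, so minimizing a distance loss does not directly optimize the evaluation metric, while training on 1 - IoU does.

```python
# Minimal sketch: same L1 distance, different IoU.

def iou_2d(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def l1_loss(a, b):
    return sum(abs(p - q) for p, q in zip(a, b))

gt = (0.0, 0.0, 10.0, 10.0)
pred_wide = (0.0, 0.0, 14.0, 10.0)   # stretched on one side
pred_shift = (1.0, 1.0, 11.0, 11.0)  # shifted diagonally

for name, pred in [("wide", pred_wide), ("shift", pred_shift)]:
    print(name, "L1 =", l1_loss(gt, pred), " IoU =", round(iou_2d(gt, pred), 3))
# Both predictions have L1 = 4, but IoU is ~0.714 vs ~0.681: the distance
# loss cannot tell them apart, while an IoU loss (1 - IoU) can.
```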

Cited by 331 publications (127 citation statements) | References 31 publications
“…In order to evaluate the accuracy of our clipping node, we used the Jaccard index [60][61][62] to compare the quality of our three-dimensional bounding boxes; it is widely adopted as a metric for comparing bounding boxes. Our results (seen in Figure 4) indicate that for all but one of our anchors IoU ≈ 80%, with an overall accuracy of 79.07%; some clipping error could be reduced by slightly expanding the bounding boxes, potentially improving bounding boxes that were already very close to the ground truth.…”
Section: Clipping Results (citation type: mentioning)
Confidence: 99%
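
As a companion to the statement above, a minimal sketch of the Jaccard index for axis-aligned 3D boxes follows; the ((x1, y1, z1), (x2, y2, z2)) box format and the example values are assumptions for illustration, not the cited authors' data or code.

```python
def jaccard_3d(a, b):
    """Jaccard index (IoU) of two axis-aligned 3D boxes.

    Each box is ((x1, y1, z1), (x2, y2, z2)) with min <= max per axis.
    """
    inter = 1.0
    for lo_a, lo_b, hi_a, hi_b in zip(a[0], b[0], a[1], b[1]):
        side = min(hi_a, hi_b) - max(lo_a, lo_b)
        if side <= 0.0:
            return 0.0  # boxes do not overlap along this axis
        inter *= side

    def vol(box):
        (x1, y1, z1), (x2, y2, z2) = box
        return (x2 - x1) * (y2 - y1) * (z2 - z1)

    return inter / (vol(a) + vol(b) - inter)

gt = ((0.0, 0.0, 0.0), (4.0, 4.0, 4.0))
pred = ((0.5, 0.5, 0.0), (4.5, 4.5, 4.0))
print(round(jaccard_3d(gt, pred), 3))  # 0.62 for these illustrative boxes
```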
“…To jointly train the regression branches, we introduce the IoU layer in the detector and calculate the auxiliary loss. After decoding the bounding box (x, y, z, w, l, h, θ) of the target from the regression branches, the network applies Equation (8) to measure the 3D IoU [37]:…”
Section: Auxiliary Loss and Joint Training (citation type: mentioning)
Confidence: 99%
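
A hedged sketch of the auxiliary-loss idea in the statement above: the branch regression loss is augmented with an IoU term computed from the decoded (x, y, z, w, l, h, θ) box. For brevity this sketch ignores θ and treats boxes as axis-aligned; the quoted paper's Equation (8) computes a rotated 3D IoU, and the weight `lam` is an assumed hyperparameter, not taken from the paper.

```python
def to_corners(box):
    """(x, y, z, w, l, h, theta) -> axis-aligned min/max corners (theta ignored)."""
    x, y, z, w, l, h, _theta = box
    return (x - w / 2, y - l / 2, z - h / 2), (x + w / 2, y + l / 2, z + h / 2)

def axis_aligned_iou_3d(a, b):
    (a_lo, a_hi), (b_lo, b_hi) = to_corners(a), to_corners(b)
    inter = 1.0
    for lo1, lo2, hi1, hi2 in zip(a_lo, b_lo, a_hi, b_hi):
        side = min(hi1, hi2) - max(lo1, lo2)
        if side <= 0.0:
            return 0.0
        inter *= side

    def vol(lo, hi):
        return (hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2])

    return inter / (vol(a_lo, a_hi) + vol(b_lo, b_hi) - inter)

def joint_loss(reg_loss, pred_box, gt_box, lam=1.0):
    # auxiliary IoU term added on top of the branch regression loss
    return reg_loss + lam * (1.0 - axis_aligned_iou_3d(pred_box, gt_box))

pred = (0.2, 0.0, 0.0, 4.0, 2.0, 1.5, 0.1)
gt = (0.0, 0.0, 0.0, 4.0, 2.0, 1.5, 0.0)
print(round(joint_loss(0.05, pred, gt), 3))  # 0.145 with these toy values
```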
“…They are not absolutely equivalent. To compensate for the gap between the smooth L1 loss and IoU, Zhou et al. proposed the IoU loss [19], which regresses the offset between the object's center and the center of an anchor box and then uses the anchor box's width and height to predict the relative scale of the predicted boxes. Beyond the IoU loss, Generalized-IoU (GIoU) loss [2], Distance-IoU (DIoU) loss [20], and Complete-IoU (CIoU) loss [20] were proposed, which can be seen as extensions of the IoU loss.…”
Section: Related Work (citation type: mentioning)
Confidence: 99%
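
A minimal 2D sketch of the IoU-loss family named in the statement above: GIoU adds a penalty based on the smallest enclosing box, and DIoU penalizes the normalized distance between box centers; CIoU (not shown) further adds an aspect-ratio term. The (x1, y1, x2, y2) boxes and values are illustrative assumptions, not from the cited papers.

```python
def _inter_union(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter, union

def giou_loss(a, b):
    inter, union = _inter_union(a, b)
    iou = inter / union
    # area of the smallest box enclosing both a and b
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    giou = iou - (cw * ch - union) / (cw * ch)
    return 1.0 - giou

def diou_loss(a, b):
    inter, union = _inter_union(a, b)
    iou = inter / union
    # squared center distance, normalized by the enclosing box diagonal
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    diou = iou - ((ax - bx) ** 2 + (ay - by) ** 2) / (cw ** 2 + ch ** 2)
    return 1.0 - diou

print(round(giou_loss((0, 0, 4, 4), (1, 1, 5, 5)), 3),  # ~0.689
      round(diou_loss((0, 0, 4, 4), (1, 1, 5, 5)), 3))  # ~0.649
```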