2023
DOI: 10.3390/s23031347
IDOD-YOLOV7: Image-Dehazing YOLOV7 for Object Detection in Low-Light Foggy Traffic Environments

Abstract: Convolutional neural network (CNN)-based autonomous driving object detection algorithms have excellent detection results on conventional datasets, but the detector performance can be severely degraded in low-light foggy weather environments. Existing methods have difficulty in achieving a balance between low-light image enhancement and object detection. To alleviate this problem, this paper proposes a foggy traffic environment object detection framework, IDOD-YOLOV7. This network is based on joint optimal lear…

Cited by 46 publications (33 citation statements)
References 52 publications
“…McCartney introduced an atmospheric scattering model [5] to elucidate the formation process of fog images, as shown in Figure 2. According to this model, fog forms when natural light is absorbed and scattered by the high density of water vapor and minute suspended particles in the atmosphere [13]. As the particles disperse the light, transmission between the object and the sensor reduces the light intensity, and an extra layer of scattered atmospheric light is added.…”
Section: DCP Model
confidence: 85%
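The atmospheric scattering model described in this passage is commonly written as:

```latex
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
```

Here $I(x)$ is the observed hazy image, $J(x)$ the scene radiance to be recovered, $A$ the global atmospheric light, and $t(x)$ the transmission, which decays exponentially with scene depth $d(x)$ and scattering coefficient $\beta$. The first term captures the attenuation of object light; the second is the added layer of scattered atmospheric light the passage mentions.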
“…Following the current metric for object detection (Lin et al, 2022), we primarily evaluate detection mean Average Precision (mAP) and frames per second (FPS). In this paper, we compare SM-CODN with other state-of-the-art object detection methods, including SSD (Liu et al, 2016), DETR (Carion et al, 2020), YOLOv7 (Wang, Bochkovskiy, et al, 2022), PPYOLOE-M (Shangliang et al, 2022), AdaMixer (Gao et al, 2022), MCS-YOLO (Shu-Jun et al, 2023), IDOD-YOLOV7 (Qiu et al, 2023), and RF-Next (Gao et al, 2023). Our model is based on the advanced YOLOv7, which offers a good trade-off between accuracy and inference speed.…”
Section: Methods
confidence: 99%
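The mAP metric cited above is built on matching predicted boxes to ground truth by intersection over union (IoU). As a minimal sketch (the corner-format `[x1, y1, x2, y2]` box layout is an assumption for illustration, not taken from the paper):

```python
# Minimal IoU computation, the box-matching core underlying mAP.
# Boxes are assumed to be [x1, y1, x2, y2] in pixel coordinates.
def iou(box_a, box_b):
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # → 25/175 ≈ 0.1429
```

In a typical mAP pipeline, a prediction counts as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (e.g. 0.5), and precision–recall curves are averaged per class.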
“…The YOLO (You Only Look Once) architecture is a popular approach for object detection, which uses a single convolutional neural network (CNN) to predict bounding boxes and class probabilities. Several versions of YOLO have been proposed, including YOLOv2, YOLOv3, and YOLOv4 [11][14][17]. In this paper, the YOLOv7 architecture is proposed. The COCO (Common Objects in Context) dataset is a commonly used dataset for object detection, which contains over 330,000 images and more than 2.5 million object instances [18].…”
Section: Literature Review
confidence: 99%
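To make the "single CNN predicting bounding boxes and class probabilities" idea concrete, here is a sketch of how one grid cell's raw outputs are decoded into a box. The exact parameterization differs between YOLO versions; this follows the common YOLOv2-style anchor scheme, and all numbers below are invented for illustration:

```python
import math

# Illustrative YOLOv2-style decoding of one grid cell's raw network
# outputs (tx, ty, tw, th, objectness, class logit) into a box.
def decode_cell(tx, ty, tw, th, obj, cls_logit, cx, cy, pw, ph, stride):
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    bx = (cx + sig(tx)) * stride      # box center x in pixels
    by = (cy + sig(ty)) * stride      # box center y in pixels
    bw = pw * math.exp(tw)            # width scaled from the anchor prior
    bh = ph * math.exp(th)            # height scaled from the anchor prior
    conf = sig(obj) * sig(cls_logit)  # objectness * class probability
    return bx, by, bw, bh, conf

box = decode_cell(0.0, 0.0, 0.0, 0.0, 2.0, 2.0,
                  cx=3, cy=4, pw=32, ph=32, stride=16)
print(box)  # center (56.0, 72.0), size (32.0, 32.0)
```

A full detector applies this decoding to every cell and anchor, then filters overlapping boxes with non-maximum suppression.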
“…However, they all aim to improve object detection accuracy in hazy scenes by improving visibility and restoring image quality through image dehazing. The results in the table suggest that boundary-constrained dehazing combined with Faster R-CNN [19] and YOLOv3 [14] architecture are effective approaches for improving object detection accuracy in hazy scenes, as they achieve high mAP scores and relatively high SSIM and PSNR values [11].…”
Section: Literature Review
confidence: 99%
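Of the image-quality metrics this passage cites alongside mAP, PSNR has a simple closed form. A minimal sketch, assuming 8-bit pixel values passed as flat lists (the helper name and input format are illustrative, not from the paper):

```python
import math

# Peak signal-to-noise ratio between a reference and a restored image,
# here represented as flat lists of 8-bit pixel values.
def psnr(ref, test, max_val=255.0):
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr([100, 120, 140], [101, 119, 141]), 2))  # MSE = 1 → ≈ 48.13 dB
```

Higher PSNR indicates a restored (dehazed) image closer to the reference; SSIM complements it by measuring structural similarity rather than per-pixel error.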