2021
DOI: 10.48550/arxiv.2110.10364
Preprint
NOD: Taking a Closer Look at Detection under Extreme Low-Light Conditions with Night Object Detection Dataset

Abstract: Recent work indicates that, besides being a challenge in producing perceptually pleasing images, low light proves more difficult for machine cognition than previously thought. In our work, we take a closer look at object detection in low light. First, to support the development and evaluation of new methods in this domain, we present a high-quality large-scale Night Object Detection (NOD) dataset showing dynamic scenes captured on the streets at night. Next, we directly link the lighting conditions to perceptu…

Cited by 2 publications (2 citation statements). References 18 publications.
“…The Night Object Detection Dataset (NOD database) is a valuable resource designed specifically for object detection in low-light conditions, providing high-quality and extensive data [19]. This dataset comprises over 7000 images categorized into three groups: people, bicycles, and cars.…”
Section: Methods (mentioning)
confidence: 99%
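As a rough illustration of how a dataset like this might be consumed, the sketch below counts instances per class from a COCO-style annotation file. The file name, field names, and the assumption of COCO-style JSON are illustrative only; they are not the NOD release's documented format.

```python
import json
from collections import Counter

# Hypothetical annotation path; the actual NOD release may use a
# different file name and layout (COCO-style JSON is assumed here).
ANNOTATION_FILE = "nod_annotations.json"

def count_instances_per_class(path: str) -> Counter:
    """Count bounding-box instances per category in a COCO-style file."""
    with open(path, "r", encoding="utf-8") as f:
        coco = json.load(f)

    # Map category ids to names (expected here: people, bicycles, cars).
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}

    counts = Counter()
    for ann in coco["annotations"]:
        counts[id_to_name[ann["category_id"]]] += 1
    return counts

if __name__ == "__main__":
    print(count_instances_per_class(ANNOTATION_FILE))
```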
“…Non-anchor-based algorithms discard anchors and obtain box descriptions through other methods, such as YOLOv1 (You Only Look Once Version 1), CornerNet [26], ExtremeNet [27], Fully Convolutional One-Stage (FCOS), etc. YOLOv1 regresses the target position and category for each pixel of the feature map.…”
Section: Introduction (mentioning)
confidence: 99%
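To make the per-cell regression idea concrete, here is a minimal sketch of a YOLOv1-style anchor-free prediction head in PyTorch: every feature-map cell predicts one box (center, size, objectness) plus class scores. The channel sizes, class count, and layer choices are assumptions for illustration, not the cited papers' exact architectures.

```python
import torch
import torch.nn as nn

class DensePredictionHead(nn.Module):
    """YOLOv1-style anchor-free head: each feature-map cell predicts
    one box (cx, cy, w, h, objectness) plus per-class scores."""

    def __init__(self, in_channels: int = 256, num_classes: int = 3):
        super().__init__()
        self.num_classes = num_classes
        # 5 box/objectness values + one score per class, predicted per cell.
        self.pred = nn.Conv2d(in_channels, 5 + num_classes, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, in_channels, H, W) -> (batch, 5 + C, H, W)
        out = self.pred(features)
        # Box center offsets and objectness are squashed to [0, 1];
        # width/height and class logits are left raw for the loss.
        xy, wh, obj, cls = out.split([2, 2, 1, self.num_classes], dim=1)
        return torch.cat([xy.sigmoid(), wh, obj.sigmoid(), cls], dim=1)

if __name__ == "__main__":
    head = DensePredictionHead()
    fmap = torch.randn(1, 256, 13, 13)  # dummy backbone feature map
    print(head(fmap).shape)             # torch.Size([1, 8, 13, 13])
```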