2022
DOI: 10.1007/s11633-022-1339-y
YOLOP: You Only Look Once for Panoptic Driving Perception

Abstract: A panoptic driving perception system is an essential part of autonomous driving. A high-precision and real-time perception system can assist the vehicle in making reasonable decisions while driving. We present a panoptic driving perception network (you only look once for panoptic (YOLOP)) to perform traffic object detection, drivable area segmentation, and lane detection simultaneously. It is composed of one encoder for feature extraction and three decoders to handle the specific tasks. Our model performs extr…
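As a rough, hedged illustration of the one-encoder/three-decoder layout the abstract describes, here is a minimal PyTorch sketch. The module names, channel widths, and head designs are placeholder assumptions for illustration only, not the published YOLOP architecture (the real network uses a CSP-style backbone and an anchor-based detection head, which this toy version does not reproduce).

```python
# Minimal sketch of a one-encoder / three-decoder multi-task layout
# (illustrative only; not the published YOLOP architecture or weights).
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Toy backbone standing in for YOLOP's shared feature extractor."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class DetectHead(nn.Module):
    """Predicts per-cell box offsets + objectness + class scores (toy head)."""
    def __init__(self, feat_ch=64, num_classes=1):
        super().__init__()
        self.head = nn.Conv2d(feat_ch, 4 + 1 + num_classes, 1)
    def forward(self, f):
        return self.head(f)

class SegHead(nn.Module):
    """Upsamples shared features to a per-pixel mask (drivable area or lanes)."""
    def __init__(self, feat_ch=64, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(feat_ch, num_classes, 1),
        )
    def forward(self, f):
        return self.head(f)

class PanopticDrivingNet(nn.Module):
    """One encoder feeding three task-specific decoders, as in the abstract."""
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        self.det_head = DetectHead()
        self.da_head = SegHead(num_classes=2)   # drivable area segmentation
        self.ll_head = SegHead(num_classes=2)   # lane line segmentation
    def forward(self, x):
        f = self.encoder(x)
        return self.det_head(f), self.da_head(f), self.ll_head(f)

if __name__ == "__main__":
    det, da, ll = PanopticDrivingNet()(torch.randn(1, 3, 256, 256))
    print(det.shape, da.shape, ll.shape)
```

Running the sketch on a 256x256 input yields a coarse detection grid plus two full-resolution masks, which mirrors the shared-encoder, multi-head design the abstract outlines.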

Cited by 179 publications (83 citation statements) | References 26 publications
“…based on YOLO-v1, the overall performance is still not satisfactory. Among the later versions (YOLO-9000 [17], YOLO-v4 [19], YOLO-v5 [20], YOLOX [21], YOLOP [22], YOLO-v7 [23]), YOLO-v5 achieves a good balance between accuracy and real-time performance, and it is widely used in industry, so many interfaces are available, which is convenient for maintenance and upgrades. Therefore, this paper chooses to build on the YOLO-v5 algorithm to realize the object detection task for traffic participants of the unmanned sweeper.…”
Section: Related Work (mentioning)
confidence: 99%
“…Hu et al. [26] jointly learned multiple tasks across different domains with a unified Transformer. Wu et al. [21] introduced a multi-task network that can jointly handle object detection, drivable area segmentation, and lane detection in autonomous driving.…”
Section: Multi-task Approaches (mentioning)
confidence: 99%
“…This includes adversarial training of domain-invariant features [24], learning weather-specific priors [25], multi-scale feature learning per domain [26], image-level feature alignment for single-stage detectors [27], and image enhancement before object detection [28], [29], which includes approaches specific to hazy [30]–[32] and low-light conditions [2], [33], as well as using multiple differentiable image pre-processing units in sequence [34] or in parallel [35]. These research efforts have been further fostered by the availability of annotated datasets for adverse conditions (e.g., BDD100K [36]), leading to robust supervised methods such as YOLOP [37]. Despite training on adverse conditions, this problem still remains challenging.…”
Section: B. 2D Object Detection (mentioning)
confidence: 99%
“…We propose to use an off-the-shelf object detection method to extract a large set of potential candidate vehicles in the images observed by the autonomous vehicle. In our experiments, we use the recent vehicle detection network YOLOP [37]. YOLOP is a variant of YOLO that has been trained on BDD100K [36] to perform the tasks of vehicle detection, lane line segmentation and drivable area segmentation.…”
Section: B. Object Detection (mentioning)
confidence: 99%
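The quoted pipeline above uses YOLOP purely as an off-the-shelf detector. Below is a hedged sketch of that kind of usage: the torch.hub entry point ('hustvl/yolop') and the three-output forward pass follow the YOLOP repository's README as recalled, while the random input and the post-processing note are placeholder assumptions, not the cited authors' candidate-extraction code.

```python
# Hedged sketch: using YOLOP off the shelf to obtain raw vehicle detections.
# The hub entry point and output layout follow the hustvl/YOLOP README as best
# recalled; verify against the repository before relying on this.
import torch

# Load the pretrained multi-task model (detection, drivable area, lane lines).
model = torch.hub.load('hustvl/yolop', 'yolop', pretrained=True)
model.eval()

# Placeholder input standing in for a preprocessed 640x640 camera frame.
img = torch.randn(1, 3, 640, 640)

with torch.no_grad():
    det_out, da_seg_out, ll_seg_out = model(img)

# det_out holds raw box predictions; in practice they still need confidence
# thresholding and non-maximum suppression before being treated as candidate
# vehicles, while the two segmentation outputs can be ignored for this use case.
print(type(det_out), type(da_seg_out), type(ll_seg_out))
```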