2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01293

MEDIRL: Predicting the Visual Attention of Drivers via Maximum Entropy Deep Inverse Reinforcement Learning

Cited by 30 publications (16 citation statements)
References 56 publications

“…Four large-scale publicly available datasets have been popularly used in modeling driver attention: DR(eye)VE [31], BDD-Attention [43], DADA-2000 [7], and EyeCar [3]. With 6 hours of eye-tracking data, the DR(eye)VE dataset is the only dataset collected in-car and also provides distraction-related annotations for 20% of its frames [31].…”
Section: Driver Attention Datasets (mentioning)
confidence: 99%
“…The attention data in the BDD-A dataset was collected in an in-lab setup, in which participants were asked to imagine themselves as the driver in a driver-perspective video. This collection protocol was later reused by the DADA-2000 and EyeCar projects with a focus on traffic accident scenarios [3,7]. Although this collection protocol is cost-effective, the collected gaze data is less credible because the participants were not the actual drivers in the setup.…”
Section: Driver Attention Datasets (mentioning)
confidence: 99%
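
A hedged illustration only (not code from MEDIRL or from the cited works): the dataset properties stated in the two excerpts above can be gathered into a small metadata registry, sketched here in Python. The class, field names, and registry are assumptions introduced purely for illustration.

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical record type; field values below reflect only what the quoted excerpts state.
    @dataclass
    class DriverAttentionDataset:
        name: str
        collected_in_car: bool                      # per the excerpts, True only for DR(eye)VE
        eye_tracking_hours: Optional[float] = None
        distraction_annotated_fraction: Optional[float] = None  # fraction of frames with distraction labels
        accident_focused: bool = False              # DADA-2000 and EyeCar target accident scenarios

    DATASETS = [
        DriverAttentionDataset("DR(eye)VE", collected_in_car=True,
                               eye_tracking_hours=6.0, distraction_annotated_fraction=0.2),
        DriverAttentionDataset("BDD-Attention", collected_in_car=False),
        DriverAttentionDataset("DADA-2000", collected_in_car=False, accident_focused=True),
        DriverAttentionDataset("EyeCar", collected_in_car=False, accident_focused=True),
    ]

    # Example query: datasets whose gaze was recorded from the actual driver in the vehicle.
    print([d.name for d in DATASETS if d.collected_in_car])   # ['DR(eye)VE']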