2019 IEEE Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc.2019.8917524
Interpretable Feature Generation using Deep Neural Networks and its Application to Lane Change Detection

Cited by 11 publications (6 citation statements)
References 20 publications
“…With increasing attention on the trustworthiness and transparency of machine learning models and systems, recent literature has focused on explainable models for the lane change prediction task, so that the reasons behind a predicted vehicle lane change can be explained to a human user or road stakeholder. The main goal is to move away from black-box models and to increase the performance of white-box models such as expert systems and other explainable classifiers [10,11]. One paper by Dank et al. reformulated the prediction task based on tabular data as a regression problem [8].…”
Section: Related Work
confidence: 99%
“…Similarly, [23] used an attention model to visualize the perception of deep networks for autonomous driving. Saliency has also been employed to explain AI models for navigation [24], lane change detection [25], and driving behavior reasoning (e.g. hazard stop or red light stop) [26].…”
Section: XAI for Autonomous Driving
confidence: 99%
“…XAI has been receiving growing attention in autonomous driving, and attempts have been made to explain the functions of various AI models for autonomous driving [21,22,23,24,25,26]. Yet studies on XAI for AI-powered accident anticipation have not kept pace with the rapid progress of accident anticipation research.…”
Section: Introduction
confidence: 99%
“…Another advantage is the ability to record various objects with a single device. Relevant applications of aerial imagery for automotive data acquisition include traffic flow analysis [4], the training of machine learning techniques [5], and the validation of autonomous vehicles [6].…”
Section: Introduction
confidence: 99%