Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017)
DOI: 10.1145/2909824.3020250

Do You Want Your Autonomous Car To Drive Like You?

Abstract: With progress in enabling autonomous cars to drive safely on the road, it is time to start asking how they should be driving. A common answer is that they should be adopting their users' driving style. This makes the assumption that users want their autonomous cars to drive like they drive: aggressive drivers want aggressive cars, defensive drivers want defensive cars. In this paper, we put that assumption to the test. We find that users tend to prefer a significantly more defensive driving style than their own.

Cited by 105 publications (86 citation statements). References 22 publications.

Citation statements, ordered by relevance:
“…Recent work found that people do not actually want their autonomous cars to drive as aggressively as they do [6]. In such cases, the reward function learned by IRL will not encode the humans' desired behavior.…”
Section: Background on Reward Learning (citation type: mentioning)
confidence: 99%
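The statement above refers to reward functions learned by inverse reinforcement learning (IRL). As a rough illustration of why IRL recovers how a person drives rather than how they would like the car to drive, here is a minimal sketch of the feature-matching update at the core of maximum-entropy IRL; the toy driving features and the softmax policy are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch (hypothetical features, not from the cited papers): the
# feature-matching update at the core of maximum-entropy IRL.
import numpy as np

# Toy setup: each "action" is a driving style described by two features
# (normalized speed, inverse following distance).
ACTION_FEATURES = np.array([
    [0.9, 0.8],   # aggressive: fast, short headway
    [0.5, 0.4],   # moderate
    [0.2, 0.1],   # defensive: slow, long headway
])

def soft_policy_features(w, beta=5.0):
    """Feature expectations of a softmax ("soft-optimal") policy under r = w . phi."""
    logits = beta * ACTION_FEATURES @ w
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p @ ACTION_FEATURES

def irl_feature_matching(mu_demo, lr=0.1, iters=500):
    """Gradient ascent on the MaxEnt IRL objective: match demonstrated feature counts."""
    w = np.zeros(ACTION_FEATURES.shape[1])
    for _ in range(iters):
        w += lr * (mu_demo - soft_policy_features(w))
    return w

# An aggressive driver's demonstrations have aggressive feature counts, so the
# learned weights end up rewarding speed and short headways.
w = irl_feature_matching(mu_demo=ACTION_FEATURES[0])
print("learned weights:", w)
print("policy feature expectations:", soft_policy_features(w))
```

Because the update drives the learned weights to reproduce the demonstrator's own feature counts, an aggressive driver's demonstrations yield a reward that favors aggressive driving, which is exactly the gap the citing work points out: the learned reward encodes how the person drives, not how they want the car to drive.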
“…Note that this pair of trajectories is clearly querying the user for whether she wants the robot arm to move towards the goal or away from the goal. Additionally, note the jaggedness of the trajectory: this is due to the highly non-convex nature of the optimization problem (6). 3) RolloutDemPref.mov: This video shows a sample trajectory generated by PPO, according to the reward function learned by DemPref (from a specific user).…”
Section: Appendix (citation type: mentioning)
confidence: 99%
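The statement above comes from the appendix of a preference-based reward learning (DemPref) paper, where the robot queries the user with pairs of trajectories. As a rough, hypothetical illustration of how the answer to one such query can update a learned linear reward, here is a Bradley-Terry-style gradient step; the feature names are assumptions, and this is not the paper's actual implementation.

```python
# Minimal sketch (hypothetical, not DemPref's implementation): updating a
# linear reward from the answer to a pairwise trajectory query, using a
# Bradley-Terry / logistic preference model.
import numpy as np

def preference_update(w, phi_a, phi_b, preferred_a, lr=0.5):
    """One gradient step on the log-likelihood of the user's answer.

    phi_a, phi_b: feature vectors of the two queried trajectories.
    preferred_a:  True if the user chose trajectory A over B.
    """
    diff = phi_a - phi_b if preferred_a else phi_b - phi_a
    # P(chosen trajectory is preferred) = sigmoid(w . diff)
    p = 1.0 / (1.0 + np.exp(-w @ diff))
    return w + lr * (1.0 - p) * diff   # gradient of log sigmoid(w . diff)

# Hypothetical trajectory features: (progress toward goal, jerkiness).
w = np.zeros(2)
w = preference_update(w, phi_a=np.array([0.9, 0.4]),
                      phi_b=np.array([0.1, 0.2]), preferred_a=True)
print("updated weights:", w)
```

Each answered query nudges the weights toward the trajectories the user prefers, which is how such a system can learn how the user wants the robot to behave without relying on demonstrations of that behavior.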
“…For many systems, end-users have difficulty providing demonstrations of what they want. For instance, they cannot coordinate 7 degrees of freedom (DOFs) at a time [2], and they can only show the car how they drive, not how they want the car to drive [5]. In such cases, another option is for the system to regress a reward function from labeled state-action pairs, but assigning precise numeric reward values to observed robot actions is also difficult.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
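The statement above contrasts demonstrations with regressing a reward function from labeled state-action pairs. Here is a minimal sketch of that alternative, with hypothetical features and labels; none of it comes from the cited papers.

```python
# Minimal sketch (hypothetical data and feature names): regressing a reward
# function from human-labeled (state, action) pairs, assuming a linear reward
# over hand-chosen features.
import numpy as np

def fit_reward(features, labels):
    """Least-squares fit of r(s, a) ~ w . phi(s, a) to numeric human labels."""
    phi = np.asarray(features, dtype=float)   # shape: (num_pairs, num_features)
    y = np.asarray(labels, dtype=float)       # one scalar label per pair
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

# Hypothetical example: three features per (state, action) pair
# (speed, following distance, lane-change indicator) and noisy human scores.
phi = [[0.9, 0.2, 1.0],
       [0.4, 0.8, 0.0],
       [0.7, 0.5, 0.0]]
labels = [0.3, 0.9, 0.6]   # people struggle to assign these precisely
print("reward weights:", fit_reward(phi, labels))
```

The fit is only as good as the numeric labels, and people find it hard to assign consistent scalar scores to observed actions, which is the difficulty the passage notes and a motivation for preference-based approaches instead.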
“…The social robotics field has produced a large body of work on human-aware, or socially-aware, navigation that integrates social spaces into the navigation decision process [16,17,18,19]. The question of human acceptance of autonomous car behavior is beginning to be addressed in the autonomous driving field [20,21,22].…”
Section: Comfort in Autonomous Cars (citation type: mentioning)
confidence: 99%