2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw54120.2021.00328
Occupancy Grid Mapping with Cognitive Plausibility for Autonomous Driving Applications

Abstract: This work investigates the validity of an occupancy grid mapping inspired by human cognition and the way humans visually perceive the environment. This query is motivated by the fact that, to date, no autonomous driving system reaches the performance of an ordinary human driver. The mechanisms behind human perception could provide cues on how to improve common techniques employed in autonomous navigation, specifically the use of occupancy grids to represent the environment. We experiment with a neural network t…

Cited by 10 publications (4 citation statements). References: 28 publications.

Citation statements (ordered by relevance):
“…Therefore, there is no need anymore for the user to drive for a moment to the center to have a clearer view of the road, because the agent is the one responsible to check if the overtake is feasible. Even if the user steers to the side to see ahead, the agent will not execute the overtake if it deems the maneuver risky (note that the perception system of the agent differs from the human, so the agent does not actually need to drive to the side to see better, like a human driver would do—although there are new attempts at human-inspired perception for autonomous vehicles; Plebe et al, 2021 ). This collaboration paradigm leads to a new way of driving, and human drivers will need some time to adjust to it.…”
Section: Discussion (mentioning)
confidence: 99%
“…Instead of observation models, posterior probability can be used directly to estimate HMM parameters, which is convenient in computation in this paper. (2) The HMM parameters of observed space can be smoothed. GRFs are applied to consider the dependence between HMM parameters of different points.…”
Section: Contribution and Paper Organisation (mentioning)
confidence: 99%
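The excerpt above refers to hidden Markov model (HMM) based grid mapping for dynamic environments, where each cell carries a state that evolves over time. As a rough illustration only (not the cited method; the transition and observation values below are made-up placeholders), a per-cell two-state HMM forward update can be sketched as follows:

```python
import numpy as np

# Hypothetical illustration: one grid cell modelled as a two-state HMM
# (state 0 = free, state 1 = occupied). The numbers are placeholders,
# not parameters estimated in the cited work.
transition = np.array([[0.95, 0.05],   # P(next state | current = free)
                       [0.10, 0.90]])  # P(next state | current = occupied)

def forward_update(belief, p_obs_given_state):
    """One HMM forward step for a single cell.

    belief            -- prior [P(free), P(occupied)] for this cell
    p_obs_given_state -- likelihood of the current measurement under each state
    """
    predicted = transition.T @ belief          # prediction through the dynamics
    posterior = predicted * p_obs_given_state  # weight by the observation likelihood
    return posterior / posterior.sum()         # renormalise

belief = np.array([0.5, 0.5])                           # uninformative prior
belief = forward_update(belief, np.array([0.2, 0.8]))   # measurement hints "occupied"
print(belief)
```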
“…In earlier research, the environments were assumed to be static. The classical method for static environments is occupancy grid mapping [1][2][3] where maps are divided into a grid and the states of different grid cells are assumed to be independent. In dynamic environments, one popular strategy is to estimate the number of potential targets, their positions, and velocities from sensor data [4][5][6].…”
Section: Introduction, 1. Literature Review (mentioning)
confidence: 99%
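The excerpt above summarises the classical occupancy grid formulation for static environments, in which each cell is updated independently. A minimal sketch of the standard log-odds update (generic textbook form, not the specific method of references [1]-[3]; grid size and sensor-model values are illustrative assumptions) might look like this:

```python
import numpy as np

# Log-odds of the inverse sensor model: hit, miss, and prior (placeholder values).
L_OCC, L_FREE, L_PRIOR = 0.85, -0.4, 0.0

grid = np.full((100, 100), L_PRIOR)   # log-odds map, all cells start at the prior

def update_cell(grid, cell, hit):
    """Independent per-cell Bayesian update in log-odds form."""
    i, j = cell
    grid[i, j] += (L_OCC if hit else L_FREE) - L_PRIOR

def occupancy_probability(grid):
    """Convert log-odds back to P(occupied) per cell."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

# Example: a beam traverses cells (50, 40..49) and ends on an obstacle at (50, 50).
for j in range(40, 50):
    update_cell(grid, (50, j), hit=False)
update_cell(grid, (50, 50), hit=True)
print(occupancy_probability(grid)[50, 48:52])
```

Because the cells are assumed independent, each beam only touches the cells it crosses, which is what makes the classical formulation cheap but also what dynamic-environment extensions try to relax.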
“…This yields essential visual cues for scene understanding, and thus forms a significant part of environment perception. One could argue that amodal perception should be performed after fusion in the occupancy grid/vector space [10], [11], [12], or in some latent space non-interpretable to humans [13], [14]. This work on amodal perception of simple camera data, however, comes with the clear advantage of (a) being human-readable, and (b) that already during multisensor fusion, the redundancy of multiple estimates of occluded objects may yield powerful confidence information.…”
Section: Introduction (mentioning)
confidence: 99%
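The point about redundancy across sensors can be made concrete with a small sketch: if several independent detectors each report a probability that a (possibly occluded) object occupies a cell, combining them in log-odds space yields a fused estimate whose strength reflects their agreement. This is a generic illustration under our own assumptions, not the fusion scheme of the cited works [10]-[14].

```python
import math

def fuse_occupancy(probabilities, prior=0.5):
    """Fuse independent per-sensor occupancy probabilities for one cell
    by summing log-odds (a generic, illustrative scheme)."""
    prior_lo = math.log(prior / (1.0 - prior))
    fused_lo = prior_lo + sum(
        math.log(p / (1.0 - p)) - prior_lo for p in probabilities
    )
    return 1.0 / (1.0 + math.exp(-fused_lo))

# Three sensors, two of which see the partly occluded object:
print(fuse_occupancy([0.8, 0.7, 0.55]))  # agreement pushes the fused value up
```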