2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros51168.2021.9636289

LiDAR-based Drivable Region Detection for Autonomous Driving

Cited by 15 publications (6 citation statements) | References 23 publications
“…Two fusion strategies could be adopted here: the first one is a Kalman filter based approach that we proposed in our previous work (Xue et al., 2021). In this approach, the observed mean elevation $\mu_i^{t}$ is used to recursively update the joint elevation distribution $\hat{\mathcal{N}}(\hat{\mu}_i^{t}, \hat{\Sigma}_i^{t})$:
$$
\begin{cases}
\bar{\mu}_i^{t} = a\,\hat{\mu}_i^{t-1},\\[2pt]
\bar{\Sigma}_i^{t} = a^{2}\,\hat{\Sigma}_i^{t-1} + \varepsilon,\\[2pt]
K = \dfrac{\bar{\Sigma}_i^{t}\,c}{c^{2}\,\bar{\Sigma}_i^{t} + \xi},\\[2pt]
\hat{\mu}_i^{t} = \bar{\mu}_i^{t} + K\,(\mu_i^{t} - c\,\bar{\mu}_i^{t}),\\[2pt]
\hat{\Sigma}_i^{t} = (1 - Kc)\,\bar{\Sigma}_i^{t},
\end{cases}
$$
…”
Section: Methods
confidence: 99%
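To make the recursion above concrete, here is a minimal per-cell sketch of the scalar Kalman update the excerpt describes. The parameter values for $a$, $c$, $\varepsilon$, $\xi$ and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kalman_elevation_update(mu_hat_prev, sigma_hat_prev, mu_obs,
                            a=1.0, c=1.0, eps=1e-3, xi=1e-2):
    """One recursive update of a grid cell's elevation estimate.

    mu_hat_prev, sigma_hat_prev : posterior mean / variance at time t-1
    mu_obs                      : observed mean elevation at time t
    a, c                        : process and observation coefficients
    eps, xi                     : process and observation noise variances
    """
    # Predict: propagate the previous posterior through the process model.
    mu_bar = a * mu_hat_prev
    sigma_bar = a ** 2 * sigma_hat_prev + eps

    # Kalman gain for the scalar observation model z = c * mu + noise.
    K = sigma_bar * c / (c ** 2 * sigma_bar + xi)

    # Correct: fuse the new observation into the estimate.
    mu_hat = mu_bar + K * (mu_obs - c * mu_bar)
    sigma_hat = (1.0 - K * c) * sigma_bar
    return mu_hat, sigma_hat


# Example: fuse a sequence of noisy per-frame elevations for one cell.
mu_hat, sigma_hat = 0.0, 1.0            # initial prior
for z in [0.12, 0.15, 0.11, 0.14]:      # observed mean elevations (metres)
    mu_hat, sigma_hat = kalman_elevation_update(mu_hat, sigma_hat, z)
print(mu_hat, sigma_hat)
```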
“…Two fusion strategies could be adopted here: the first one is a Kalman filter based approach that we proposed in our previous work (Xue et al., 2021). In this approach, the observed mean elevation $\mu_i^{t}$ is used to recursively update the joint elevation…”
Section: Elevation Estimation Based on Multiframe Information Fusion
confidence: 99%
“…LiDAR sensors can assist in detecting the road and the drivable area, where high-level algorithms are able to accurately identify road boundaries, markings, lanes, and curbs, aiding in a correct evaluation of the road and ensuring efficient navigation of the vehicle [17][18][19]. To better perform these tasks, a ground segmentation step can be applied to the point cloud data [20], which enhances the subsequent identification of environmental features.…”
Section: Drivable Area Detection
confidence: 99%
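The ground segmentation step mentioned in the excerpt can be illustrated with a minimal RANSAC plane-fit sketch over a raw point cloud; the thresholds, the near-horizontal check, and the function name are assumptions for illustration, and pipelines such as [20] are considerably more involved.

```python
import numpy as np

def segment_ground(points, n_iters=100, dist_thresh=0.2, min_vertical=0.9, seed=0):
    """Split an (N, 3) LiDAR point cloud into ground / non-ground masks
    by RANSAC-fitting a near-horizontal plane."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)

    for _ in range(n_iters):
        # Hypothesise a plane through three random points.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-6:
            continue                      # degenerate (collinear) sample
        normal /= norm
        if abs(normal[2]) < min_vertical:
            continue                      # reject planes far from horizontal

        # Points close to the plane are counted as ground inliers.
        dist = np.abs((points - p1) @ normal)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers

    return best_inliers, ~best_inliers
```

The surviving non-ground points then feed the downstream detection of boundaries, curbs, and other environmental features described above.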
“…For machine learning algorithms, features such as RGB color, Walsh-Hadamard, Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP), Haar, and LUV channels can be extracted by feature extractors and passed to a classification head, such as a Support Vector Machine (SVM) or Conditional Random Field (CRF), to obtain the final results. Deep neural networks can replace these feature extractors, and improvements such as convolutional kernels covering large visual regions [44] or connections across multiple layers [45] achieve competitive performance. We found that learning-based driving region detection is usually one branch of the scene understanding task, and researchers attempt to tackle challenges including 2D-3D transformation, complex driving regions, etc.…”
Section: B. Object Detection
confidence: 99%
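The classical feature-extractor-plus-classifier pipeline the excerpt describes can be illustrated with a toy HOG + SVM sketch; the patch size, kernel choice, placeholder data, and the assumption of pre-cropped grayscale patches are ours, not any cited paper's setup.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(patches):
    """Compute HOG descriptors for a batch of grayscale image patches."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

# patches: (N, 64, 64) grayscale crops; labels: 1 = drivable, 0 = not drivable.
# Both arrays are placeholders; in practice they come from labelled data.
patches = np.random.rand(20, 64, 64)
labels = np.array([0, 1] * 10)

clf = SVC(kernel="rbf")                  # SVM classification head
clf.fit(hog_features(patches), labels)   # train on hand-crafted features
pred = clf.predict(hog_features(patches[:5]))
```

A deep network would replace `hog_features` with learned convolutional features, as the excerpt notes.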