2021
DOI: 10.3390/s21020437
Weakly-Supervised Recommended Traversable Area Segmentation Using Automatically Labeled Images for Autonomous Driving in Pedestrian Environment with No Edges

Abstract: Detection of traversable areas is essential to navigation of autonomous personal mobility systems in unknown pedestrian environments. However, traffic rules may recommend or require driving in specified areas, such as sidewalks, in environments where roadways and sidewalks coexist. Therefore, it is necessary for such autonomous mobility systems to estimate the areas that are mechanically traversable and recommended by traffic rules and to navigate based on this estimation. In this paper, we propose a method fo…

Cited by 7 publications (3 citation statements) · References 20 publications
“…Zürn et al. [13] propose a self-supervised labeling scheme based on an unsupervised audio clustering approach, where the cluster indices serve as weak labels and are projected into the robot camera images. Most recently, Onozuka et al. [11] propose a traversable area segmentation approach for personal mobility systems such as intelligent wheelchairs.…”
Section: Related Work (mentioning)
confidence: 99%
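As a rough illustration of the audio-cluster weak-labeling idea attributed to Zürn et al. [13] above (a minimal sketch, not the authors' implementation): audio clips recorded while driving are summarized as feature vectors, clustered without supervision, and each clip's cluster index is then used as a weak label. The MFCC-mean features, the number of clusters, and all function names here are illustrative assumptions.

```python
# Minimal sketch of weak labels from unsupervised audio clustering.
# Assumptions (not from the cited paper): MFCC-mean features per clip,
# k-means clustering, and cluster index used directly as the weak label.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def audio_features(clips, sr=16000):
    """Return one MFCC-mean feature vector per audio clip (illustrative choice)."""
    feats = []
    for clip in clips:
        mfcc = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=20)  # shape (20, frames)
        feats.append(mfcc.mean(axis=1))                        # shape (20,)
    return np.stack(feats)

def weak_labels_from_audio(clips, n_clusters=4, sr=16000):
    """Cluster clip features; the cluster index serves as each clip's weak label."""
    X = audio_features(clips, sr=sr)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
```

In the cited scheme these weak labels are subsequently projected into the robot camera images; a projection sketch is given after the next citation statement.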
“…Automatic annotation of images offers a promising alternative to manual annotations made by human annotators. Previously proposed automatic annotation approaches typically leverage the ego-motion of a data collection platform to obtain spatially sparse image-level labels of traversable ground surfaces [10,11] or are based on proprioceptive sensors such as sound and vibration [12,13,14]. In contrast to existing work, we additionally leverage the trajectories of other traffic participants such as vehicles and pedestrians, and project them into the camera images.…”
Section: Introduction (mentioning)
confidence: 99%
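The projection step described above (ego-motion or other traffic participants' trajectories projected into the camera image to obtain sparse labels) can be sketched as follows. This is a generic pinhole-camera sketch under assumed conventions, not the cited pipeline: the trajectory points are assumed to be already expressed in the camera frame, the intrinsics K are assumed known, and the label convention (255 = unlabeled) is illustrative.

```python
# Sketch: project 3D trajectory points (already in camera coordinates) into the
# image and mark the hit pixels as sparse weak labels.
import numpy as np

def project_trajectory_labels(points_cam, K, class_id, label_map):
    """points_cam: (N, 3) points in camera coordinates (x right, y down, z forward).
    K: (3, 3) pinhole intrinsics. Writes class_id into label_map at projected pixels."""
    h, w = label_map.shape
    pts = points_cam[points_cam[:, 2] > 0.1]          # keep points in front of the camera
    uvw = (K @ pts.T).T                               # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]                     # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # keep in-image projections only
    label_map[v[valid], u[valid]] = class_id          # sparse pixel-level weak labels
    return label_map

# Usage sketch: label_map = np.full((H, W), 255, np.uint8), then call once per
# trajectory (ego-vehicle, other vehicles, pedestrians) with its class id.
```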
“…The method was validated using an ANYmal quadruped robot in unstructured environments including terrain types such as asphalt, dirt, sand, and grass, and also considering different weather and lighting conditions. From a more human-driven-knowledge viewpoint, the work of Onozuka et al. [3] proposed an automatic labelling system based on human-driven knowledge. A two-step approach was followed, i.e.…”
Section: Introduction (mentioning)
confidence: 99%