2020
DOI: 10.1016/j.asoc.2020.106153

A novel vSLAM framework with unsupervised semantic segmentation based on adversarial transfer learning

Abstract: Significant progress has been made in the field of visual Simultaneous Localization and Mapping (vSLAM) systems. However, the localization accuracy of vSLAM can be significantly reduced in dynamic applications with mobile robots or passengers. In this paper, a novel semantic SLAM framework in dynamic environments is proposed to improve the localization accuracy. We incorporate a semantic segmentation model into the Oriented FAST and Rotated BRIEF-SLAM2 (ORB-SLAM2) system to filter out dynamic feature points, b…
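
The abstract describes filtering dynamic feature points out of the ORB-SLAM2 front end using a semantic segmentation result. A minimal sketch of that general idea follows, assuming a hypothetical per-pixel mask in which dynamic classes (e.g. people) are non-zero; it is an illustration, not the authors' implementation.

```python
# Sketch of mask-based dynamic-feature filtering (illustrative, not the paper's code).
# `dynamic_mask` is assumed to come from a semantic segmentation model and to mark
# pixels belonging to movable classes (e.g. people) with non-zero values.
import cv2
import numpy as np

def filter_dynamic_keypoints(gray_image, dynamic_mask):
    """Detect ORB keypoints and drop those that fall on dynamic-object pixels."""
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(gray_image, None)
    static_kps = [kp for kp in keypoints
                  if dynamic_mask[int(kp.pt[1]), int(kp.pt[0])] == 0]
    # Descriptors are computed only for the surviving (static) keypoints.
    static_kps, descriptors = orb.compute(gray_image, static_kps)
    return static_kps, descriptors

if __name__ == "__main__":
    frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in grayscale frame
    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[100:300, 200:400] = 1                                     # pretend a person occupies this region
    kps, desc = filter_dynamic_keypoints(frame, mask)
    print(f"{len(kps)} static keypoints kept")
```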

Cited by 8 publications (7 citation statements)
References 49 publications

“…Recently, three ways to develop deep learning-based VSLAM software components have been identified, with different degrees of implementation: auxiliary modules, original deep learning modules, and end-to-end deep neural networks. Most of the published studies develop auxiliary deep-learning-based modules, covering feature extraction [48][49][50], semantic segmentation [51][52][53][54][55][56][57][58][59][60], pose estimation [8,45,46,[61][62][63], map construction [3,[64][65][66], and loop closure [67][68][69][70]. It should be noted that deep neural networks extract low-level features from images and convert them, layer by layer, into high-level features.…”
Section: Discussion and Future Trends (mentioning)
Confidence: 99%
“…Jin et al. [57] proposed a semantic SLAM framework with Unsupervised Semantic Segmentation (USS-SLAM) for dynamic environments. The USS-SLAM framework runs four threads in parallel: tracking, local mapping, loop closing, and semantic map generation.…”
Section: Semantic Segmentation (mentioning)
Confidence: 99%
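
The four-thread layout described in that statement (tracking, local mapping, loop closing, and semantic map generation running in parallel) can be sketched as below; the worker bodies are placeholders and do not reproduce USS-SLAM's actual logic.

```python
# Illustrative four-thread layout (tracking, local mapping, loop closing,
# semantic map generation); the worker bodies are placeholders only.
import queue
import threading
import time

stop_event = threading.Event()
keyframe_queue = queue.Queue()  # tracking hands keyframes to local mapping

def tracking():
    while not stop_event.is_set():
        keyframe_queue.put("keyframe")       # placeholder: pose estimation, keyframe selection
        time.sleep(0.03)

def local_mapping():
    while not stop_event.is_set():
        try:
            keyframe_queue.get(timeout=0.1)  # placeholder: local bundle adjustment
        except queue.Empty:
            pass

def loop_closing():
    while not stop_event.is_set():
        time.sleep(0.1)                      # placeholder: loop detection and correction

def semantic_mapping():
    while not stop_event.is_set():
        time.sleep(0.1)                      # placeholder: fuse segmentation results into the map

threads = [threading.Thread(target=fn, daemon=True)
           for fn in (tracking, local_mapping, loop_closing, semantic_mapping)]
for t in threads:
    t.start()
time.sleep(0.5)                              # run briefly for demonstration
stop_event.set()
for t in threads:
    t.join()
```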
“…In the research field of intelligent robotics and autonomous navigation systems, Simultaneous Localization and Mapping (SLAM) technology plays a crucial role [1,2]. Notably, the work of M.W.…”
Section: Introduction (mentioning)
Confidence: 99%
“…However, real scenes inevitably contain moving objects or objects that are likely to move. With the development of deep learning (DL), semantic SLAM for dynamic scenes has been widely studied in the literature [4][5][6][7], mainly using DL methods to detect moving objects in highly dynamic environments. However, most existing semantic SLAM methods for dynamic scenes still suffer from real-time performance issues.…”
Section: Introduction (mentioning)
Confidence: 99%
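
As a concrete illustration of detecting movable objects with a DL model, the sketch below derives a per-pixel "dynamic" (person) mask from an off-the-shelf torchvision DeepLabV3 network; this model is a stand-in, not one of the models used by the cited systems.

```python
# Sketch: derive a dynamic-object (person) mask with an off-the-shelf segmentation
# model from torchvision, used here as a stand-in for the models in the cited systems.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

PERSON_CLASS = 15  # "person" index in the PASCAL VOC label set used by these weights

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

def dynamic_mask(rgb_tensor):
    """Return a boolean mask marking pixels predicted as a (movable) person."""
    batch = preprocess(rgb_tensor).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)["out"]          # shape [1, 21, H, W]
    labels = logits.argmax(dim=1).squeeze(0)  # per-pixel class labels
    return labels == PERSON_CLASS

frame = torch.rand(3, 480, 640)               # stand-in RGB frame with values in [0, 1]
mask = dynamic_mask(frame)
print(f"dynamic pixels: {int(mask.sum())}")
```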
“…However, most DL models used in existing semantic SLAM methods incur high memory consumption and computational cost; examples include the Mask R-CNN model [8] used in DynaSLAM [4] and OFM-SLAM [5], the DeepLab-V2 model [9] used in USS-SLAM [6], and the PSPNet-50 model [10] used in PSPNet-SLAM [7]. Some semantic SLAM methods [11][12][13] instead choose lightweight DL models, such as the SegNet model [14].…”
Section: Introduction (mentioning)
Confidence: 99%
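
To make the memory/computation point above concrete, one can compare parameter counts of a heavier and a lightweight segmentation network. The sketch below uses torchvision stand-ins, since DeepLab-V2, PSPNet-50, and SegNet themselves are not shipped with torchvision, so the numbers only illustrate the heavy-versus-lightweight gap rather than the exact footprints of the cited models.

```python
# Rough footprint comparison between a heavier and a lightweight segmentation model,
# using torchvision stand-ins for the models named in the cited systems.
from torchvision.models import segmentation as seg

def param_count_millions(model):
    return sum(p.numel() for p in model.parameters()) / 1e6

heavy = seg.deeplabv3_resnet50(weights=None)         # DeepLab-style, ResNet-50 backbone
light = seg.lraspp_mobilenet_v3_large(weights=None)  # lightweight MobileNetV3 alternative

print(f"DeepLabV3-ResNet50:        {param_count_millions(heavy):.1f} M parameters")
print(f"LR-ASPP MobileNetV3-Large: {param_count_millions(light):.1f} M parameters")
```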