2022
DOI: 10.1109/jsen.2022.3169340
WF-SLAM: A Robust VSLAM for Dynamic Scenarios via Weighted Features

Cited by 23 publications (13 citation statements)
References 27 publications
“…Jiao et al [28] combined semantic segmentation with the motion information of dynamic objects and associated the feature points extracted from images with a dynamic probability, achieving high adaptability in highly dynamic scenes. Zhong et al [29] developed WF-SLAM, which selects feature points and assigns weights to the retained feature points according to the semantic mask and the geometric mask.…”
Section: Methods Independent of the Camera Motion Model (mentioning)
confidence: 99%
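The weighting scheme described in this excerpt can be illustrated with a minimal Python sketch. This is not the WF-SLAM implementation: the function name weight_keypoints, the prior values, and the choice to combine the semantic prior and the geometric score by a simple product are assumptions made for illustration only.

```python
import numpy as np

def weight_keypoints(keypoints, semantic_mask, geometric_scores,
                     prior_static=1.0, prior_dynamic=0.1):
    """Hypothetical per-keypoint weighting combining a semantic prior
    with a geometric consistency score.

    keypoints        : (N, 2) array of pixel coordinates (x, y)
    semantic_mask    : H x W boolean array, True where the segmentation
                       labels the pixel as a potentially dynamic object
    geometric_scores : (N,) array in [0, 1], e.g. from an epipolar check
    """
    weights = np.empty(len(keypoints))
    for i, (x, y) in enumerate(np.asarray(keypoints, dtype=int)):
        # Pixels inside the semantic (dynamic-object) mask get a low prior.
        semantic_prior = prior_dynamic if semantic_mask[y, x] else prior_static
        # Combine the two cues; points that are both dynamic-labelled and
        # geometrically inconsistent end up with a weight close to zero.
        weights[i] = semantic_prior * geometric_scores[i]
    return weights
```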
“…However, the computation of such correlated geometric constraints is relatively complex and can lead to delays in scenarios that require real-time processing of large amounts of data. Zhong et al [28] introduced a robust visual SLAM system based on weighted features (WF-SLAM), which uses epipolar constraints to assign a different weight to each feature point and uses this weight information to initialize the camera pose and mitigate the effect of dynamic feature points on pose estimation. However, the epipolar constraint alone cannot filter out all dynamic feature points, and the method fails if a dynamic object moves along the epipolar line.…”
Section: Related Work (mentioning)
confidence: 99%
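A minimal sketch of the epipolar test referred to in this excerpt is shown below: for a matched pair and a fundamental matrix F, the distance of the second point to its epipolar line measures how well the match fits a static scene, and a weight can be derived from that distance. The Gaussian weighting and the sigma parameter are illustrative assumptions, not the formulation used in WF-SLAM. As the excerpt notes, a point moving along its own epipolar line keeps a small distance, so this test alone cannot detect such motion.

```python
import numpy as np

def epipolar_weights(pts1, pts2, F, sigma=1.0):
    """Weight matches by their agreement with the epipolar constraint.

    pts1, pts2 : (N, 2) matched pixel coordinates in frame 1 / frame 2
    F          : 3x3 fundamental matrix mapping frame-1 points to
                 epipolar lines in frame 2
    sigma      : soft threshold in pixels (assumed value)
    """
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([np.asarray(pts1, float), ones])   # homogeneous coords
    x2 = np.hstack([np.asarray(pts2, float), ones])

    lines = (F @ x1.T).T                    # epipolar lines l = F x1
    # Point-to-line distance |x2^T F x1| / sqrt(a^2 + b^2)
    num = np.abs(np.sum(x2 * lines, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2) + 1e-12
    dist = num / den

    # Gaussian-style weight: ~1 for consistent matches, -> 0 for outliers.
    return np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
```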
“…The matrices F and H are calculated from the fine matching result of the current frame using the algorithms provided by OpenCV4 (the findFundamentalMat function for F and the findHomography function for H). The implementation details can be found in the OpenCV4 manual. For matrix H, the remapping distance d_H between a pair of matching points is calculated by…”
Section: Scenario Classification (mentioning)
confidence: 99%
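Because the excerpt is cut off before the remapping-distance formula, the sketch below only shows a standard way to estimate F and H with the OpenCV functions named above and to compute a homography reprojection ("remapping") distance per matched pair; the exact distance defined in the cited paper may differ, and the RANSAC thresholds are assumed values.

```python
import cv2
import numpy as np

def estimate_F_and_H(pts1, pts2):
    """Estimate the fundamental matrix F and homography H with OpenCV.

    pts1, pts2 : matched pixel coordinates of the previous / current frame.
    The RANSAC reprojection threshold (3.0 px) is an assumed value.
    """
    pts1 = np.asarray(pts1, dtype=np.float32)
    pts2 = np.asarray(pts2, dtype=np.float32)
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    return F, H

def homography_remap_distance(pts1, pts2, H):
    """Distance between pts2 and pts1 remapped into the current frame by H."""
    src = np.asarray(pts1, dtype=np.float32).reshape(-1, 1, 2)
    remapped = cv2.perspectiveTransform(src, H).reshape(-1, 2)
    return np.linalg.norm(np.asarray(pts2, dtype=np.float32) - remapped, axis=1)
```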
“…To eliminate the negative impact of moving objects and obtain clean maps, a number of vSLAM systems based on moving-object segmentation have been proposed, such as geometry-based vSLAM [3,4] and learning-based vSLAM [5,6]. Although the above methods achieve accurate tracking by segmenting moving objects, the improvement in tracking accuracy is limited in some scenarios (e.g.…”
Section: Introduction (mentioning)
confidence: 99%