2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
DOI: 10.1109/ismar-adjunct54149.2021.00043

COVINS: Visual-Inertial SLAM for Centralized Collaboration

Cited by 57 publications (28 citation statements) | References 19 publications
“…In this work, we focus on collaborative BA (Example 1) in our experimental validation (Sec. VI), due to its fundamental role in multi-robot visual SLAM [1]-[4]. However, we note that our approach extends beyond the above examples to many other multi-agent estimation problems that can be described with a factor graph [32].…”
Section: Collaborative Geometric Estimation (mentioning)
confidence: 90%
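To make the factor-graph view of collaborative BA quoted above concrete, here is a minimal sketch (my own illustration, not code from the cited work), assuming GTSAM's Python bindings: two agents contribute reprojection factors on shared landmarks to a single bundle-adjustment graph, which is the structure a centralized back-end would optimize. All numbers (intrinsics, poses, noise sigmas) are made up for the example.

```python
# Minimal collaborative-BA factor graph: two agents observe shared landmarks.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()
K = gtsam.Cal3_S2(500.0, 500.0, 0.0, 320.0, 240.0)       # fx, fy, skew, cx, cy (illustrative)
pix_noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)      # ~1 px reprojection noise
anchor_noise = gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)

def pose_key(agent: int, kf: int) -> int:
    # One symbol namespace per agent ('a', 'b', ...), keyframe index within it.
    return gtsam.symbol(chr(ord('a') + agent), kf)

landmarks = [np.array([x, 0.0, 5.0]) for x in (-1.0, 0.0, 1.0)]   # shared 3D points
poses = {0: gtsam.Pose3(),                                        # agent 0, keyframe 0
         1: gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.5, 0.0, 0.0))}  # agent 1, keyframe 0

for agent, pose in poses.items():
    initial.insert(pose_key(agent, 0), pose)
    # Anchor each agent's first keyframe (stand-in for odometry/inertial priors).
    graph.add(gtsam.PriorFactorPose3(pose_key(agent, 0), pose, anchor_noise))
    cam = gtsam.PinholeCameraCal3_S2(pose, K)
    for j, p in enumerate(landmarks):
        z = cam.project(gtsam.Point3(*p))                 # synthetic pixel measurement
        graph.add(gtsam.GenericProjectionFactorCal3_S2(
            z, pix_noise, pose_key(agent, 0), gtsam.symbol('l', j), K))

for j, p in enumerate(landmarks):
    initial.insert(gtsam.symbol('l', j), gtsam.Point3(*(p + 0.05)))  # perturbed initial guess

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPoint3(gtsam.symbol('l', 0)))              # refined landmark position
```

The point of the sketch is only that per-agent measurements become ordinary factors in one graph, so the same machinery extends to other multi-agent estimation problems expressible as factor graphs.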
“…We generate noisy inputs for each dataset by perturbing the ORB-SLAM3 estimates by zero-mean Gaussian noise. We compare LARPG against two baseline methods that can be implemented under the communication architecture considered in this work. The first baseline is the method in [26] using distributed preconditioned conjugate gradient (PCG).…”
Section: B. Performance on Collaborative SLAM Datasets (mentioning)
confidence: 99%
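A common way to realize the noisy-input protocol described in the statement above is to add zero-mean Gaussian noise on the tangent space of each keyframe pose. The sketch below is my own illustration with made-up noise levels, not the cited paper's code; it assumes NumPy and SciPy.

```python
# Perturb SE(3) trajectory estimates with zero-mean Gaussian noise.
import numpy as np
from scipy.spatial.transform import Rotation as R

def perturb_pose(R_wc, t_wc, sigma_rot=0.01, sigma_trans=0.05, rng=None):
    """Right-perturb a world-from-camera pose (R, t): axis-angle rotation noise
    in radians, translation noise in meters. Magnitudes are illustrative only."""
    rng = rng or np.random.default_rng()
    dR = R.from_rotvec(rng.normal(0.0, sigma_rot, size=3)).as_matrix()
    dt = rng.normal(0.0, sigma_trans, size=3)
    return R_wc @ dR, t_wc + dt

# Usage on a toy trajectory (stand-in for per-agent ORB-SLAM3 keyframe poses).
rng = np.random.default_rng(0)
trajectory = [(np.eye(3), np.array([float(i), 0.0, 0.0])) for i in range(5)]
noisy = [perturb_pose(Rm, t, rng=rng) for Rm, t in trajectory]
```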
“…For example, some solutions deployed in large-scale environments during the DARPA Subterranean Challenge (Hudson et al., 2021; Agha et al., 2021) led to the development of new C-SLAM systems, such as the robust lidar-based approach of Ebadi et al. (2020). Alternatively, Schmuck et al. (2021) propose a vision-based centralized C-SLAM system incorporating inertial measurements, which has been tested with up to 12 robots in simulation. In another line of work, Lajoie et al. (2020) present a distributed and decentralized system robust to spurious measurements, along with online experiments on real robots, and a publicly available implementation.…”
Section: Complete C-SLAM Systems (mentioning)
confidence: 99%
“…proposed a centralized multi-intelligence collaborative monocular visual-inertial SLAM deployed on multiple iOS mobile devices. COVINS (Schmuck et al., 2021) is a novel collaborative SLAM system that supports multi-agent, scalable SLAM in large environments and for large teams of more than ten agents.…”
(mentioning)
confidence: 99%
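To illustrate the centralized architecture these last two statements refer to, here is a toy sketch with hypothetical types and names, not COVINS's actual interface: each agent runs visual-inertial odometry on board and ships keyframes to a server back-end, which holds the collaborative map and is where place recognition and global optimization would run.

```python
# Toy centralized C-SLAM message flow (hypothetical, for illustration only).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class KeyframeMsg:                      # what an agent sends to the server
    agent_id: int
    kf_id: int
    pose: np.ndarray                    # 4x4 world-from-body estimate
    descriptors: np.ndarray             # local feature descriptors (N x 32)

@dataclass
class CollaborativeServer:              # centralized back-end (sketch)
    keyframes: dict = field(default_factory=dict)

    def on_keyframe(self, msg: KeyframeMsg) -> None:
        """Store the keyframe; a real back-end would run place recognition
        across all agents' keyframes here, then global map optimization."""
        self.keyframes[(msg.agent_id, msg.kf_id)] = msg

server = CollaborativeServer()
server.on_keyframe(KeyframeMsg(0, 0, np.eye(4), np.zeros((500, 32), np.uint8)))
```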