2015
DOI: 10.1177/0278364915602544

Asynchronous adaptive conditioning for visual–inertial SLAM

Abstract: This paper is concerned with real-time monocular visual–inertial simultaneous localization and mapping (SLAM). In particular, a tightly coupled nonlinear-optimization-based solution that can match the globally optimal result in real time is proposed. The methodology is motivated by the requirement to produce a scale-correct visual map, in an optimization framework that is able to incorporate relocalization and loop-closure constraints. Special attention is paid to achieving robustness to many real-world difficultie…

Cited by 23 publications (23 citation statements)
References 21 publications
“…Moreover, Euler angles are known to have singularities. Our theoretical derivation in Section V also advances previous works [10,12,13,25] that used preintegrated measurements but did not develop the corresponding theory for uncertainty propagation and a-posteriori bias correction. Besides these improvements, our model still benefits from the pioneering insight of [26]: the integration is performed in a local frame, which does not require to repeat the integration when the linearization point changes.…”
Section: Introduction
confidence: 73%
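The "local frame" trick in the statement above — integrating IMU measurements relative to the first keyframe so the deltas need not be recomputed when the optimizer's linearization point changes — can be sketched as follows. This is an illustrative simplification, not the cited authors' implementation: gravity compensation, noise propagation, and the bias-correction Jacobians mentioned in the quote are omitted.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def expm_so3(phi):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    k = phi / theta
    K = skew(k)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(accel, gyro, dt, bias_a=np.zeros(3), bias_g=np.zeros(3)):
    """Accumulate IMU deltas in the local frame of the first sample.

    Returns (dR, dv, dp): preintegrated rotation, velocity, and position
    deltas. They depend only on the raw measurements and bias estimates,
    not on the global state, so the optimizer can relinearize the poses
    without repeating this integration.
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for a, w in zip(accel, gyro):
        a = a - bias_a
        w = w - bias_g
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ expm_so3(w * dt)
    return dR, dv, dp
```

In a full formulation the deltas also carry first-order Jacobians with respect to the biases, so an updated bias estimate can correct them a posteriori without re-integration.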
“…Batch non-linear optimization, which has become popular for visual-inertial fusion [3][4][5][6][7][8][9][10][11][12][13][14][15], allows… The 160 m-long trajectory starts at (0, 0, 0) (ground floor), goes up to the 3rd floor of a building, and returns to the initial point.…”
Section: Introduction
confidence: 99%
“…
| Paper | Back-end Approach | Camera Type | Fusion Type | Application |
| --- | --- | --- | --- | --- |
| OKVIS [43][44][45] | optimization-based | monocular | tightly coupled | |
| SR-ISWF [46] | filtering-based | monocular | tightly coupled | mobile phone |
| [47] | optimization-based | monocular | tightly coupled | |
| [48] | optimization-based | stereo | tightly coupled | MAV |
| [49] | optimization-based | RGB-D | loosely coupled | mobile devices |
| [50] | filtering-based | monocular | tightly coupled | |
| ROVIO [51] | filtering-based | monocular | tightly coupled | UAV |
| [52] | optimization-based | monocular | tightly coupled | autonomous vehicle |
| [53] | filtering-based | stereo | tightly coupled | |
| [54] | optimization-based | stereo | tightly coupled | |
| [55] | optimization-based | monocular | tightly coupled | |
| [56] | optimization-based | stereo | tightly coupled | |
| [57] | filtering-based | monocular | loosely coupled | robot |
| [58] | optimization-based | RGB-D | loosely coupled | |
| [59] | filtering-based | stereo | loosely coupled | |
| VIORB [60] | optimization-based | monocular | tightly coupled | MAV |
| [61] | optimization-based | RGB-D | tightly coupled | |
| [62] | filtering-based | monocular | loosely coupled | AR/VR |
| [63] | filtering-based | multi-camera | tightly coupled | MAV |
| [64] | filtering-based | monocular | tightly coupled | UAV |
| VINS-mono [16][17][18] | optimization-based | monocular | tightly coupled | MAV, AR |
| [65] | optimization-based | monocular | tightly coupled | AR |
| [66] | optimization-based | monocular | tightly coupled | |
| [67] | filtering-based | monocular | tightly coupled | MAV |
| VINet [68] | end-to-end | monocular | / | deep-learning |
| [69] | optimization-based | event camera | tightly coupled | |
| S-MSCKF [26] | filtering-based | stereo | tightly coupled | MAV |
| [70] | optimization-based | monocular | tightly coupled | MAV |
| [71] | optimization-based | stereo/monocular | tightly coupled | |
| PIRVS [72] | filtering-based | st… | | |
…”
Section: Year Paper Back-end Approach Camera Type
confidence: 99%
“…The visual inertial odometry (VIO) literature is vast, including approaches based on filtering [14][15][16][17][18][19], fixed-lag smoothing [20][21][22][23][24], full smoothing [25][26][27][28][29][30][31][32]. The algorithms considered here are related to IMU preintegration models [30][31][32][33].…”
Section: Introduction
confidence: 99%
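Of the three families named in the statement above, fixed-lag smoothing keeps the cost bounded by marginalizing states that leave the sliding window; the standard tool for that is the Schur complement on the Gauss–Newton system. A minimal sketch of that one step (the function name is hypothetical, not from any cited implementation):

```python
import numpy as np

def marginalize(H, b, k):
    """Eliminate the first k variables from the linear system H @ x = b
    via the Schur complement, preserving their influence on the rest.

    H is the (symmetric positive-definite) Gauss-Newton information
    matrix and b the corresponding right-hand side. Returns the reduced
    system over the remaining variables.
    """
    Haa, Hab = H[:k, :k], H[:k, k:]
    Hba, Hbb = H[k:, :k], H[k:, k:]
    ba, bb = b[:k], b[k:]
    Haa_inv = np.linalg.inv(Haa)
    H_marg = Hbb - Hba @ Haa_inv @ Hab
    b_marg = bb - Hba @ Haa_inv @ ba
    return H_marg, b_marg
```

Solving the reduced system yields exactly the same estimate for the kept variables as solving the full system, which is why marginalization (rather than simply dropping old states) keeps fixed-lag smoothing consistent with the full batch solution at the current linearization point.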