2017
DOI: 10.1007/s10846-017-0670-y
Realtime Edge Based Visual Inertial Odometry for MAV Teleoperation in Indoor Environments

Cited by 13 publications (11 citation statements)
References 34 publications
“…The authors extended it to utilize the IMU [14], although, currently, the extended system is not available open source. e) REBiVO: Realtime Edge Based Inertial Visual Odometry [24] is specifically designed for Micro Aerial Vehicles (MAV). In particular, it tracks the pose of a robot by fusing data from a monocular camera and an IMU.…”
Section: Related Work and Methods Evaluated
confidence: 99%
“…e) REBiVO: Realtime Edge Based Inertial Visual Odometry [24] is specifically designed for Micro Aerial Vehicles (MAV). In particular, it tracks the pose of a robot by fusing data from a monocular camera and an IMU.…”
Section: Related Work and Methods Evaluated
confidence: 99%
“…This factor can be computed if the depth of that point is known. Several approaches to estimating this scale factor have been proposed in the literature: in (Tarrio, 2017), the authors use a camera as the main sensor and an inertial measurement unit to determine the scale.…”
Section: Sin (unclassified)
“…This scale factor must be obtained by using any bootstrap method. In the literature, several approaches have been proposed to estimate this scale factor: in reference [18], the authors use a camera as the main sensor and an inertial measurement unit (IMU) to determine the scale. In [19], the depth is estimated by using a convolutional neural network, this estimation is refined, and the error is reduced by training the network with consecutive images.…”
Section: Introduction
confidence: 99%
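The citation statements above refer to the monocular scale ambiguity: a single camera recovers translation only up to an unknown scale factor, which an IMU (measuring metric acceleration) can resolve. As a rough illustration of the idea — not the method of the cited paper — the sketch below estimates the scale by least-squares alignment of up-to-scale visual-odometry translations against metric IMU-derived translations over the same time windows. All names and data here are hypothetical.

```python
import numpy as np

def estimate_scale(vo_translations, imu_translations):
    """Least-squares scale s minimizing sum ||s * t_vo - t_imu||^2,
    where t_vo are up-to-scale monocular VO translations and t_imu are
    metric translations integrated from IMU measurements.
    Closed form: s = <t_vo, t_imu> / <t_vo, t_vo>."""
    vo = np.asarray(vo_translations, dtype=float)
    imu = np.asarray(imu_translations, dtype=float)
    return float(np.sum(vo * imu) / np.sum(vo * vo))

# Toy data: VO directions are correct but magnitudes are 2.5x too small.
true_scale = 2.5
vo = np.array([[0.10, 0.00, 0.02],
               [0.12, 0.01, 0.00],
               [0.09, -0.01, 0.03]])
imu = true_scale * vo
print(estimate_scale(vo, imu))  # → 2.5
```

In practice the IMU translations would come from double-integrating bias-corrected accelerometer readings, so a real system estimates the scale jointly with IMU biases and gravity inside a filter or optimization, rather than with this one-shot closed form.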