2021
DOI: 10.1016/j.measurement.2021.109878
The YTU dataset and recurrent neural network based visual-inertial odometry


Cited by 18 publications (17 citation statements); references 24 publications.
“…
Method                Year  Sensor           Neural Network  Supervision
VINet [175]           2017  Monocular + IMU  CNN + LSTM      Supervised
VIOLearner [197]      2020  Monocular + IMU  CNN             Unsupervised
DeepVIO [187]         2019  Stereo + IMU     CNN + LSTM      Supervised
Chen et al. [188]     2019  Monocular + IMU  FlowNet + LSTM  Supervised
Kim et al. [198]      2021  Monocular + IMU  CNN + LSTM      Unsupervised
Gurturk et al. [199]  2021  Monocular + IMU  CNN + LSTM      Supervised
…”
Section: Methods (Year, Sensor, Neural Network, Supervision)
confidence: 99%
“…In these studies, methodological recommendations for SLAM were presented, and prediction errors in UAV poses were compared with previous studies. Indoor SLAM is an important requirement both in the robotics industry, for tasks such as search and rescue or defense in GPS-denied environments, and in the entertainment industry, for applications such as virtual reality [101,102]. However, to the best of our knowledge, there are no agricultural applications of such advanced SLAM methods in indoor environments such as greenhouses.…”
Section: Solution Proposal for UAV Applications in Greenhouses
confidence: 99%
“…Recent neural network-based visual-inertial fusion schemes [2]–[6] all adopt a similar "feature fusion" scheme, in which a neural network maps the raw visual and inertial measurements to a 6-DOF egomotion prediction in an end-to-end manner. Internally, the network extracts sensor-specific features and combines them, e.g., by concatenating the two feature vectors.…”
Section: B. Learning-Based Approaches to VIO
confidence: 99%
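The feature-fusion scheme described in this excerpt can be sketched as follows. This is a toy illustration in which randomly initialized linear maps stand in for the CNN and LSTM encoders; all function names, layer sizes, and feature dimensions are assumptions for illustration, not taken from any of the cited papers.

```python
# Minimal sketch of "feature fusion" visual-inertial odometry:
# encode each modality separately, concatenate the feature vectors,
# and regress a 6-DOF egomotion prediction end-to-end.
import numpy as np

rng = np.random.default_rng(0)

def visual_encoder(image_pair):
    # Stand-in for a CNN: project two stacked frames to a 128-d visual feature.
    W = rng.standard_normal((image_pair.size, 128)) * 0.01
    return image_pair.ravel() @ W

def inertial_encoder(imu_window):
    # Stand-in for an LSTM over a window of IMU samples (N x 6: gyro + accel),
    # reduced to a 64-d inertial feature.
    W = rng.standard_normal((imu_window.size, 64)) * 0.01
    return imu_window.ravel() @ W

def fuse_and_predict(visual_feat, inertial_feat):
    # Feature fusion by concatenation, then a linear head regressing a
    # 6-DOF egomotion vector (3 translation + 3 rotation components).
    fused = np.concatenate([visual_feat, inertial_feat])   # 192-d
    W_head = rng.standard_normal((fused.size, 6)) * 0.01
    return fused @ W_head

image_pair = rng.standard_normal((2, 32, 32))   # two consecutive grayscale frames
imu_window = rng.standard_normal((10, 6))       # IMU samples between the frames

pose = fuse_and_predict(visual_encoder(image_pair), inertial_encoder(imu_window))
print(pose.shape)  # (6,)
```

In a real system the encoders would be trained networks and the head a deeper MLP or LSTM, but the structural point from the excerpt is the same: fusion happens in feature space, not in measurement space.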
“…The resulting multimodal features pass into a final network component that predicts the egomotion. This scheme exists in both supervised [2]–[4] and self-supervised [5], [6] settings. In the self-supervised setting [16], a pixel-based reconstruction loss is minimized to jointly train a depth network and an egomotion network.…”
Section: B. Learning-Based Approaches to VIO
confidence: 99%
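The pixel-based reconstruction loss mentioned in this excerpt can be illustrated with a minimal inverse-warping sketch, assuming a pinhole camera with known intrinsics K. The function name, image sizes, and nearest-neighbour sampling are simplifying assumptions for illustration, not the exact loss of any specific cited paper.

```python
# Sketch of a pixel-based (photometric) reconstruction loss: synthesize the
# target view by warping the source view with predicted depth and egomotion,
# then penalize the per-pixel L1 difference.
import numpy as np

def photometric_loss(target, source, depth, T, K):
    """L1 loss between the target image and the source image inversely
    warped via per-pixel depth and a 4x4 rigid transform T."""
    H, W = target.shape
    K_inv = np.linalg.inv(K)
    # Pixel grid in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])
    # Backproject to 3-D points, then transform into the source camera frame.
    pts = depth.ravel() * (K_inv @ pix)
    pts = T[:3, :3] @ pts + T[:3, 3:4]
    # Project into the source image and sample (nearest neighbour).
    proj = K @ pts
    us = np.round(proj[0] / proj[2]).astype(int).clip(0, W - 1)
    vs = np.round(proj[1] / proj[2]).astype(int).clip(0, H - 1)
    reconstruction = source[vs, us].reshape(H, W)
    return np.abs(target - reconstruction).mean()

# Sanity check: identity egomotion on identical frames gives zero loss.
H, W = 8, 8
K = np.array([[10.0, 0, W / 2], [0, 10.0, H / 2], [0, 0, 1.0]])
img = np.random.default_rng(1).random((H, W))
depth = np.ones((H, W))
loss = photometric_loss(img, img, depth, np.eye(4), K)
print(loss)
```

The training signal comes from exactly this quantity: when the predicted depth and egomotion are accurate, the warped source reproduces the target and the loss approaches zero, which is what allows the depth and egomotion networks to be trained jointly without pose labels.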