2020 IEEE 29th International Symposium on Industrial Electronics (ISIE)
DOI: 10.1109/isie45063.2020.9152370
Evaluating a Visual Simultaneous Localization and Mapping Solution on Embedded Platforms

Cited by 9 publications (10 citation statements). References 13 publications.
“…Furthermore, it was observed that the Jetson Nano's power consumption was considerably higher when using the TensorFlow-GPU framework as opposed to TensorFlow-RT, despite similar inference times. In a related study, Silveira et al. [18] benchmarked the Nvidia Jetson Nano against the RPi 3B+ using a simultaneous localization and mapping (SLAM) algorithm. Their findings highlighted the Nano's superior performance, achieving 12.6 frames per second (FPS) and 12.1 FPS on two datasets, in stark contrast to the RPi 3B+'s 4.4 FPS and 3.6 FPS, respectively.…”
Section: IoT Edge Devices (mentioning)
confidence: 99%
“…Figure 16 illustrates the main differences between the ORB-SLAM algorithm (see Figure 9) and its visual-inertial version. The VIORB algorithm was the first visual-inertial method to employ map reuse, and it performs well in terms of accuracy [64,70,72] and memory usage [18]. Nonetheless, the IMU initialization takes 10 to 15 s [71], and no embedded implementations were found.…”
Section: Visual-Inertial ORB-SLAM (2017) (mentioning)
confidence: 99%
“…Partitioning a complete localization method with loop closures requires high computing resources. In [155], ORB-SLAM2 [82], which integrates a loop-closure module, was optimized using NEON instructions to exploit the advanced SIMD units in the ARM processors of the Raspberry Pi 3B+ and Jetson Nano. It achieves an average tracking rate of 6.11 FPS on the Raspberry Pi 3B+ and 9.64 FPS on the Jetson Nano with 752×480 input images.…”
Section: Embedded Platforms for Localization and 3D Reconstruction (mentioning)
confidence: 99%
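
The NEON optimization described in the excerpt above amounts to vectorizing per-pixel inner loops with ARM SIMD intrinsics. The sketch below is a hypothetical illustration only, not code from [155] or from the cited ORB-SLAM2 port: it vectorizes a simple 8-bit absolute-difference kernel, processing 16 pixels per instruction, which is representative of the kind of inner-loop rewrite behind the tracking-rate improvements reported above. The function names and the choice of kernel are assumptions.

/* Hypothetical NEON sketch: 8-bit per-pixel absolute difference,
 * representative of ARM SIMD inner-loop optimization (not the
 * actual ORB-SLAM2 code referenced in [155]). */
#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

/* Scalar reference: out[i] = |a[i] - b[i]| for each pixel. */
static void absdiff_scalar(const uint8_t *a, const uint8_t *b,
                           uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = (uint8_t)(a[i] > b[i] ? a[i] - b[i] : b[i] - a[i]);
}

/* NEON version: 16 pixels per iteration using vabdq_u8. */
static void absdiff_neon(const uint8_t *a, const uint8_t *b,
                         uint8_t *out, size_t n)
{
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        uint8x16_t va = vld1q_u8(a + i);      /* load 16 pixels of image A */
        uint8x16_t vb = vld1q_u8(b + i);      /* load 16 pixels of image B */
        vst1q_u8(out + i, vabdq_u8(va, vb));  /* vector |A - B|, store */
    }
    absdiff_scalar(a + i, b + i, out + i, n - i); /* scalar tail */
}

On a 32-bit ARM target such as the Raspberry Pi 3B+ this would typically be built with flags along the lines of -O2 -mfpu=neon; on AArch64 targets such as the Jetson Nano, NEON is available by default.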
[Flattened table excerpt from the citing survey; recoverable entries include ORB-SLAM2 [82] at 6 FPS on the Raspberry Pi 3B+ and 10 FPS on the Jetson Nano, per [155].]
(mentioning)
confidence: 99%