2014 IEEE 11th Intl Conf on Ubiquitous Intelligence and Computing and 2014 IEEE 11th Intl Conf on Autonomic and Trusted Computing (UIC-ATC-ScalCom), 2014
DOI: 10.1109/uic-atc-scalcom.2014.79

Mobiscan3D: A Low Cost Framework for Real Time Dense 3D Reconstruction on Mobile Devices

Cited by 10 publications (6 citation statements: 0 supporting, 6 mentioning, 0 contrasting).
References 16 publications.
Citing statements published in 2015 and 2021.
“…Mostly, all state-of-the-art SLAM systems [12][13][14][15] and reconstruction methods using IMUs [16,17] rely on pose-graph/factor-graph optimization [18,19] or bundle adjustment. In the following section we will review the related work on object-SLAM, discuss some of its limitations, and describe the keypoint-based approach that motivated the proposed approach.…”
Section: Related Work | Citation type: mentioning | Confidence: 99%
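
The statement above credits state-of-the-art SLAM systems to pose-graph/factor-graph optimization and bundle adjustment. As a minimal illustration of what a pose-graph optimizer does, the toy sketch below refines four noisy 2D poses against odometry edges and one loop-closure edge; the poses, edges, and the use of SciPy's generic least-squares solver in place of a dedicated library such as g2o or GTSAM are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def relative_pose(p_i, p_j):
    # Pose of node j expressed in the frame of node i (2D: x, y, theta).
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]
    c, s = np.cos(p_i[2]), np.sin(p_i[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, p_j[2] - p_i[2]])

# Edges (i, j, measured relative pose): three odometry steps around a unit square
# plus one loop-closure constraint from the first node to the last.
edges = [
    (0, 1, np.array([1.0, 0.0, np.pi / 2])),
    (1, 2, np.array([1.0, 0.0, np.pi / 2])),
    (2, 3, np.array([1.0, 0.0, np.pi / 2])),
    (0, 3, np.array([0.0, 1.0, 3 * np.pi / 2])),  # loop closure
]

def residuals(x):
    poses = x.reshape(-1, 3)
    res = [poses[0]]  # anchor the first pose at the origin (fixes gauge freedom)
    for i, j, meas in edges:
        res.append(relative_pose(poses[i], poses[j]) - meas)
    return np.concatenate(res)

# Noisy initial guess for the four poses; angle wrap-around is ignored for brevity.
x0 = np.array([0.0, 0.0, 0.0,
               1.1, 0.1, 1.5,
               0.9, 1.2, 3.0,
               -0.1, 0.9, 4.6])
result = least_squares(residuals, x0)
print(result.x.reshape(-1, 3))  # optimized poses, close to a unit square
```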
“…Recently, smartphones have been used for image acquisition due to their low cost and easy availability. Researchers have therefore used smartphone sensors such as the accelerometer and magnetometer for data collection and 3D reconstruction, which reduces computation [11,12]; a few works such as [13,14] have accomplished this, but the output is noisy due to a fast and coarse reconstruction. A system capable of dense 3D reconstruction of an unknown environment in real time through a mobile robot requires simultaneous localization and mapping (SLAM) [15].…”
Section: Machine Vision | Citation type: mentioning | Confidence: 99%
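
The statement above points to smartphone accelerometer and magnetometer data as a cheap way to support reconstruction. A minimal sketch of one common use of those sensors, recovering a coarse device orientation from gravity and the magnetic field in the spirit of Android's SensorManager.getRotationMatrix(), is shown below; the sensor readings are made-up values, not data from the cited work.

```python
import numpy as np

def rotation_from_imu(accel, mag):
    # Device-to-world rotation (world axes: x = East, y = magnetic North, z = Up),
    # built from the gravity and magnetic-field vectors measured in the device frame.
    up = accel / np.linalg.norm(accel)   # gravity reaction approximates world "up"
    east = np.cross(mag, up)
    east /= np.linalg.norm(east)
    north = np.cross(up, east)
    return np.vstack([east, north, up])  # rows are the world axes in device coords

# Hypothetical static reading: device lying flat, screen up, top edge pointing north
# (accelerometer in m/s^2, magnetometer in microtesla; illustrative values only).
accel = np.array([0.0, 0.0, 9.81])
mag = np.array([0.0, 22.0, -40.0])
R = rotation_from_imu(accel, mag)
# Heading about the vertical axis, as in SensorManager.getOrientation().
azimuth = np.arctan2(R[0, 1], R[1, 1])
print(R)
print(np.degrees(azimuth))  # ~0 degrees: the device's y-axis points north
```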
“…The captured information is pushed back to a backend server, from where the user controls the robot. Odometry and IMU sensor data are used for robot localization along with camera pose estimation [13]. Multi-view geometry [32] is used for creating a 3D map [11] of the environment.…”
Section: Proposed Integrated Sensing System | Citation type: mentioning | Confidence: 99%
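
The statement above relies on multi-view geometry to build a 3D map from estimated camera poses. The core step is triangulating matched image points from two known camera views; the sketch below uses OpenCV's cv2.triangulatePoints, with intrinsics, poses, and pixel matches that are illustrative assumptions rather than values from the cited system.

```python
import numpy as np
import cv2

# Hypothetical pinhole intrinsics (fx = fy = 500, principal point at 320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

# Two camera poses: the first at the origin, the second shifted 0.2 m along x
# (e.g. as estimated from odometry/IMU plus visual pose estimation).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
t = np.array([[-0.2], [0.0], [0.0]])            # world-to-camera translation of view 2
P2 = K @ np.hstack([np.eye(3), t])

# Matched pixel coordinates of the same scene points in the two views (2xN arrays).
pts1 = np.array([[320.0, 400.0],
                 [240.0, 260.0]])
pts2 = np.array([[270.0, 350.0],
                 [240.0, 260.0]])

# Triangulate to homogeneous 3D points and convert to Euclidean map points.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T
print(X)  # one (x, y, z) row per triangulated map point
```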
“…Having the depth of the scene computed from RGB [6] or from both RGB and IMU [7], [8] may help, but associating the depth and language is still a challenge. Firstly, communicating the exact turn angle towards the goal only through a natural language command is very difficult for any person, especially in cases where the robot may be oriented in any direction (towards the person in our case); hence predefined discrete actions may not be of much help in many indoor scenarios.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%