Computational Vision and Medical Image Processing V 2015
DOI: 10.1201/b19241-57
Navigation of robotics platform using advanced image processing navigation methods

Cited by 3 publications (3 citation statements).
References 0 publications.
“…The DCM correction will be implemented in the final version of the robotics platform: navigation of the robotics platform using a fusion of visual odometry, an Inertial Measurement Unit, and mechanical odometers at every wheel. A major advantage of our visual odometry is the precision of its position estimation [12]. The final device will fuse the navigation of the robotics platform with an optical measurement device to create a 3D map of dangerous spaces such as ruins, abandoned mines, etc.…”
Section: Results
confidence: 99%
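The passage above describes combining visual odometry, an IMU, and per-wheel odometers into one pose estimate. The sketch below is only an illustration of such a fusion step, assuming a simple weighted blend with a complementary-filter heading correction; the function name, the 2D pose layout (x, y, yaw), and all weights are assumptions, not the cited authors' implementation.

    import numpy as np

    def fuse_pose(vo_pose, wheel_pose, imu_yaw, w_vo=0.7, w_wheel=0.3, w_imu=0.5):
        """Blend 2D pose estimates (x, y, yaw) from visual odometry and wheel
        odometry, then correct the heading with the IMU yaw.
        All weights are illustrative assumptions."""
        fused = w_vo * np.asarray(vo_pose, dtype=float) + w_wheel * np.asarray(wheel_pose, dtype=float)
        # Heading correction: partially trust the IMU yaw (small-angle blend).
        fused[2] = (1.0 - w_imu) * fused[2] + w_imu * imu_yaw
        return fused

    # Example: slightly disagreeing odometry sources, IMU refines the yaw.
    print(fuse_pose([1.00, 0.50, 0.10], [0.95, 0.52, 0.12], imu_yaw=0.11))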
“…Equations (21) to (25) provide a recursive solution of the model described by equations (11) and (12).…”
Section: x(t) is the state vector at time t, y(t) is a vector with measurements
confidence: 99%
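Equations (11) and (12) define a state-space model with state x(t) and measurement y(t), and equations (21) to (25) give its recursive solution; this is the structure of a linear Kalman filter recursion. Since those equations are not reproduced here, the sketch below is a generic single recursion step under an assumed linear model x(t+1) = A x(t) + w, y(t) = C x(t) + v; the matrices and function name are assumptions.

    import numpy as np

    def kalman_step(x, P, y, A, C, Q, R):
        """One recursion of a linear Kalman filter (illustrative model)."""
        # Predict the state and its covariance.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update with the measurement y.
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (y - C @ x_pred)
        P_new = (np.eye(len(x)) - K @ C) @ P_pred
        return x_new, P_new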
“…Feature-based matching extracts distinguishing features from the range images and uses corresponding features to calculate the scan alignment. For most robots, the detection of closed loops is realized from camera data [14]. The most common feature detectors and descriptors are: SURF; SIFT; GLOH; Shape Context; PCA; moments; cross-correlation; and steerable filters [9].…”
Section: Feature Based Registration
confidence: 99%
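The passage above lists feature detectors (SURF, SIFT, GLOH, etc.) used to match corresponding features and compute a scan alignment. The sketch below is a minimal illustration of that idea with OpenCV, assuming ORB as a freely available stand-in for the listed detectors; the function name, match limit, and use of a RANSAC-estimated partial affine transform are assumptions, not the cited method.

    import cv2
    import numpy as np

    def align_by_features(img_a, img_b, max_matches=50):
        """Detect ORB keypoints in two grayscale images, match their
        descriptors, and estimate a rigid-like transform aligning them."""
        orb = cv2.ORB_create()
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
        src = np.float32([kp_a[m.queryIdx].pt for m in matches[:max_matches]])
        dst = np.float32([kp_b[m.trainIdx].pt for m in matches[:max_matches]])
        # The estimated transform gives the scan/image alignment; inliers mask
        # indicates which correspondences survived RANSAC.
        M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        return M

    # Usage: M = align_by_features(cv2.imread('a.png', 0), cv2.imread('b.png', 0))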