2010
DOI: 10.2478/v10006-010-0021-7
Visual simultaneous localisation and map-building supported by structured landmarks

Abstract: Visual simultaneous localisation and map-building systems that take advantage of landmarks other than point-wise environment features are not frequently reported. In the following paper, a method of using an operational map of the robot's surroundings, complemented with visible structured passive landmarks, is described. These landmarks are used to improve the self-localisation accuracy of the robot camera and to reduce the size of the Kalman-filter state vector with respect to the vector size involving …
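The abstract's point about reducing the Kalman-filter state vector can be illustrated with simple arithmetic: in EKF-SLAM the state holds the camera pose plus a block of parameters per mapped landmark, so replacing many point features with a few structured landmarks shrinks the state (and the quadratic-cost covariance matrix). A minimal sketch — the parameter counts below are illustrative assumptions, not figures from the paper:

```python
# Illustrative comparison of EKF-SLAM state-vector sizes when point
# features are replaced by fewer structured landmarks. All numbers
# here are assumed for the example, not taken from the paper.

def state_vector_size(pose_dim, n_landmarks, params_per_landmark):
    """Total EKF state dimension: camera pose plus map entries."""
    return pose_dim + n_landmarks * params_per_landmark

pose_dim = 6  # assumed 6-DoF camera pose

# e.g. 200 point features at 3 parameters each:
point_map = state_vector_size(pose_dim, 200, 3)

# e.g. 10 structured landmarks at 6 parameters each:
structured_map = state_vector_size(pose_dim, 10, 6)

print(point_map)       # 606
print(structured_map)  # 66
```

Since EKF covariance updates scale roughly quadratically with state dimension, even this rough reduction (606 vs. 66 here) translates into a much cheaper filter update.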

Cited by 11 publications (4 citation statements)
References 18 publications
“…The cyber layer makes the calculations according to the customer order and the sensor data to produce appropriate commands for machines in physical layer. After the production of product, it is shipped by autonomous vehicles, which is a study field of smart city [10,11]. Hereby, the production process from ordering to shipping can be done without any human interaction.…”
Section: Methods (mentioning)
confidence: 99%
“…Although this assumption was also removed in some earlier 2-D self-localization solutions (Skrzypczyński, 2009), in walking and humanoid robots or micro aerial vehicles (Engel et al, 2012) reliable odometry from proprioceptive sensing is not available at all or is extremely poor, making these robots dependent on self-localization with exteroceptive sensors. Passive vision has many practical limitations (Davison et al, 2007;Bączyk and Kasiński, 2010), whereas 3-D laser range finders with mechanical scanning are bulky, heavy, and often slow. Thus, compact, fast-frame-rate RGB-D cameras are the sensors of choice…”
Section: Introduction (mentioning)
confidence: 99%
“…The acquired data allows reliable benchmarking of the algorithms performing the task of simultaneous localization and mapping (SLAM) [21] [22] or visual odometry [23] [24]. As shown in [25], the presence of additional structured markers placed throughout the environment improves the accuracy of navigation in single robot scenarios. We believe that the introduction of additional unique artificial markers associated with individual robots, as presented in this article, will allow us to improve the accuracy in scenarios in which external cameras are used or in the case of collaborative SLAM.…”
Section: Introduction (mentioning)
confidence: 99%