2012
DOI: 10.1007/978-3-642-32060-6_44

A Low Cost Ground Truth Detection System for RoboCup Using the Kinect

Abstract: Ground truth detection systems can be a crucial step in evaluating and improving algorithms for self-localization on mobile robots. Selecting a ground truth system depends on its cost, as well as on the detail and accuracy of the information it provides. In this paper, we present a low cost, portable and real-time solution constructed using the Microsoft Kinect RGB-D Sensor. We use this system to find the location of robots and the orange ball in the Standard Platform League (SPL) environment in the …

Cited by 11 publications (6 citation statements)
References 6 publications
“…The work presented in this paper uses a different vision approach, based on a depth sensor instead of an intensity sensor. This work presents several similarities with the work of Khandelwal et al. [5], which uses a Kinect sensor as a low cost ground truth detection system. As a 3D sensor, a Kinect was chosen given its low price (making it possible to use it in an aggressive environment such as RoboCup), its ability to directly provide 3D depth information, and its refresh rate of 30 fps, similar to the RGB camera used in the omnidirectional vision system of the robots [6].…”
Section: Introduction (mentioning)
confidence: 77%
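For context on the statement above, grabbing synchronized RGB and depth frames from a Kinect at 30 fps is straightforward with open-source drivers. A minimal sketch, assuming the libfreenect Python bindings (import name `freenect`) are installed; this is illustrative, not the cited system's actual code:

```python
# Minimal sketch: grab one RGB frame and one depth frame from a Kinect.
# Assumes the libfreenect Python bindings; illustrative only, not the
# code of the cited ground truth system.
import freenect
import numpy as np

def grab_frames():
    """Return one (rgb, depth) pair from the first connected Kinect."""
    rgb, _ = freenect.sync_get_video()    # 640x480x3 uint8, ~30 fps
    depth, _ = freenect.sync_get_depth()  # 640x480 uint16 raw depth values
    return np.asarray(rgb), np.asarray(depth)

if __name__ == "__main__":
    rgb, depth = grab_frames()
    print(rgb.shape, depth.shape, depth.dtype)
```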
“…The position error is directly influenced by the method used to determine the robot's final position on the field. As pointed out in [9], the determined position is highly affected by the robot's orientation on the field. This holds for a position calculated as the arithmetic mean of the robot point cloud, because the mean is pulled toward where the point cloud is densest, i.e., the region of the robot closest to the camera along its viewing direction.…”
Section: Results (mentioning)
confidence: 99%
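The centroid bias described in this statement is easy to reproduce: the sensor only sees the camera-facing surface of the robot, so the point cloud is denser on the near side and the arithmetic mean drifts toward the camera. A small sketch on synthetic data (not data from [9]):

```python
# Illustrative sketch of the centroid bias: the camera only sees the
# near-facing half of a (roughly cylindrical) robot, so the arithmetic
# mean of the point cloud sits in front of the true axis position.
import numpy as np

rng = np.random.default_rng(0)
true_center = np.array([0.0, 3.0])   # robot axis in (x, depth); made-up numbers
radius = 0.15                        # hypothetical robot radius in meters

# Sample points only on the camera-facing half of the circle
# (camera at depth 0, looking toward +depth).
theta = rng.uniform(-np.pi / 2, np.pi / 2, 2000)
points = true_center + radius * np.column_stack([np.sin(theta), -np.cos(theta)])

centroid = points.mean(axis=0)
print("true center:", true_center)
print("point-cloud centroid:", centroid)  # depth < 3.0: biased toward the camera
```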
“…Due to the drawbacks of the aforementioned approaches, and thanks to low cost depth sensors such as the Microsoft Kinect and Asus Xtion™, [9] and [10] have suggested using point clouds for a ground-truth detection system. The calibration process in [9] requires the user to identify ground points and the field's landmarks in order to calculate the camera transformation to the field reference frame. Moreover, color calibration must be performed so that the system can tell the robots' team color.…”
Section: Related Work (mentioning)
confidence: 99%
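The camera-to-field transformation mentioned in this statement can be recovered from a handful of user-identified correspondences. One standard way to do this (a generic technique, not necessarily the exact procedure in [9]) is a least-squares rigid fit via SVD, the Kabsch algorithm:

```python
# Sketch: least-squares rigid transform (R, t) mapping camera-frame points
# onto known field landmarks, via the Kabsch/SVD method. Generic technique,
# not necessarily the calibration used in [9]; all coordinates are made up.
import numpy as np

def fit_rigid_transform(cam_pts, field_pts):
    """Find R, t minimizing ||R @ cam + t - field|| over correspondences."""
    cam_c = cam_pts.mean(axis=0)
    field_c = field_pts.mean(axis=0)
    H = (cam_pts - cam_c).T @ (field_pts - field_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = field_c - R @ cam_c
    return R, t

# Usage: landmark positions picked in the depth image (camera frame)
# paired with their known field coordinates.
cam = np.array([[0.1, 0.2, 2.0], [1.0, 0.2, 2.5], [0.5, 0.1, 3.0], [0.0, 0.3, 3.5]])
field = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.0, 1.5, 0.0]])
R, t = fit_rigid_transform(cam, field)
print(R, t)
```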
“…The use of such knowledge is especially interesting for places with artificial landmarks or previously defined colour-coded objects, such as the RoboCup competition [130]. There are several examples of the application of this specific knowledge in RoboCup (e.g., [87], [131]). …”
Section: Feature Extraction (mentioning)
confidence: 99%
“…In addition, the problem of actively searching for an object in a 3D environment is studied under the constraint of a maximum search time using a visually guided humanoid robot with 26 degrees of freedom [5]. Another study follows the standard pattern recognition approach based on four main steps [131]: (i) preprocessing to achieve colour constancy and stereo pair calibration; (ii) segmentation using depth-continuity information; (iii) feature extraction based on visual saliency; and (iv) classification using a neural network. The main novelty of the approach lies in the feature extraction step, where the authors propose novel features derived from a visual saliency mechanism.…”
Section: What Should a Vision System in GMRs Be Used For? (mentioning)
confidence: 99%
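The four-step pipeline summarized in the statement above maps naturally onto a simple processing skeleton. The sketch below uses placeholder function bodies (all names and heuristics hypothetical) only to make the data flow between the four stages concrete; it is not the implementation of [131]:

```python
# Skeleton of the four-step pattern recognition pipeline summarized above.
# Every function body is a placeholder, shown only to make the data flow
# between stages (i)-(iv) concrete.
import numpy as np

def preprocess(rgb, depth):
    """(i) Color constancy and stereo-pair/depth calibration (stub)."""
    return rgb.astype(np.float32) / 255.0, depth

def segment(rgb, depth):
    """(ii) Split the scene into regions using depth continuity (stub)."""
    return [(rgb, depth)]  # placeholder: one region covering the frame

def extract_features(region):
    """(iii) Saliency-derived feature vector for one region (stub)."""
    rgb, depth = region
    return np.array([rgb.mean(), rgb.std(), depth.mean()])

def classify(features):
    """(iv) Neural-network classifier, replaced by a constant stub here."""
    return "object" if features[0] > 0.5 else "background"

def run_pipeline(rgb, depth):
    rgb, depth = preprocess(rgb, depth)
    return [classify(extract_features(r)) for r in segment(rgb, depth)]

if __name__ == "__main__":
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)
    depth = np.zeros((480, 640), dtype=np.uint16)
    print(run_pipeline(rgb, depth))
```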