2015 European Conference on Mobile Robots (ECMR)
DOI: 10.1109/ecmr.2015.7324214
Abstract: In this work we present the Object Labeling Toolkit (OLT), a set of publicly available software components for managing and labeling sequential RGB-D observations collected by a mobile robot. Such a robot can be equipped with an arbitrary number of RGB-D devices, possibly integrating other sensors (e.g. odometry, 2D laser scanners, etc.). OLT first merges the robot observations to generate a 3D reconstruction of the scene, from which object segmentation and labeling is conveniently …
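The merging step the abstract describes can be pictured as transforming each observation's point cloud by the corresponding robot/sensor pose and accumulating the results into one global cloud. The sketch below is a minimal illustration of that idea, assuming 4x4 homogeneous pose matrices; the helper names are invented for this example and are not OLT's actual API.

```python
# Hedged sketch: merge per-observation point clouds into one reconstruction
# by applying each observation's pose (assumed here to be a 4x4 homogeneous
# transform) and stacking the transformed points.
import numpy as np

def transform_points(points: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous pose to an (N, 3) array of points."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (pose @ homo.T).T[:, :3]

def merge_observations(clouds, poses):
    """Concatenate per-observation clouds into one global point cloud."""
    return np.vstack([transform_points(c, p) for c, p in zip(clouds, poses)])

# Toy example: the same single-point cloud seen from two poses, the second
# translated 2 m along x.
cloud = np.array([[0.0, 0.0, 1.0]])
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[0, 3] = 2.0
merged = merge_observations([cloud, cloud], [pose_a, pose_b])
```

In practice the poses would come from the robot's localization or a SLAM backend; here they are hard-coded purely for illustration.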

Cited by 10 publications (9 citation statements)
References 25 publications (29 reference statements)
“…Therefore, a significant and growing body of current research aiming to overcome this issue is considering contextual information about scene objects in addition to their usually employed individual features, and a number of applications dealing with this source of information have come out, e.g. Wong et al (2015) or Ruiz-Sarmiento et al (2015b). Some works have attempted to exploit this information by providing ad-hoc or preliminary solutions, as in Mekhalfi et al (2015), where the co-occurrence of objects appearing in distinct types of rooms is implicitly modelled.…”
Section: Related Work
confidence: 99%
“…This tool is fed with both the recorded sequence and the labeled, reconstructed map (obtained as described in the previous section) in order to automatically propagate the ground truth information to the RGB-D observations. The outcome of this process is a per-pixel labeling of the intensity and depth images within each observation, as well as a per-point labeling of its point cloud data (please refer to Ruiz-Sarmiento et al (2015c) for further information). The last row of Figure 8 depicts depth images colored according to the propagated ground truth labels.…”
Section: Dataset Description
confidence: 99%
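The propagation step quoted above — carrying labels from the reconstructed map back to each RGB-D observation — can be sketched as back-projecting every depth pixel to 3D and assigning it the label of the nearest point in the labeled reconstruction. The intrinsics model and the brute-force nearest-neighbour search below are illustrative assumptions, not OLT's implementation.

```python
# Hedged sketch of ground-truth propagation: each depth pixel is
# back-projected with pinhole intrinsics (fx, fy, cx, cy), matched to its
# nearest labeled map point, and given that point's label.
import numpy as np

def backproject(depth: np.ndarray, fx, fy, cx, cy) -> np.ndarray:
    """Back-project a depth image (H, W) to an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def propagate_labels(depth, map_points, map_labels, fx, fy, cx, cy):
    """Per-pixel labels via the nearest labeled map point (brute force)."""
    pts = backproject(depth, fx, fy, cx, cy)
    # Pairwise squared distances (P, M); fine for small toy inputs only.
    d2 = ((pts[:, None, :] - map_points[None, :, :]) ** 2).sum(axis=2)
    return map_labels[d2.argmin(axis=1)].reshape(depth.shape)

# Toy example: a 1x2 depth image and two labeled map points placed exactly
# where the pixels back-project, so each pixel inherits a distinct label.
depth = np.array([[1.0, 1.0]])
map_points = np.array([[-1.0, -1.0, 1.0], [0.0, -1.0, 1.0]])
map_labels = np.array([7, 9])
labels = propagate_labels(depth, map_points, map_labels, 1.0, 1.0, 1.0, 1.0)
```

A real pipeline would use a spatial index (k-d tree or octree) instead of the dense distance matrix, and would also propagate labels to the point cloud data of each observation, as the statement above describes.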
“…Processed data have been produced employing two software tools, namely the aforementioned Mobile Robot Programming Toolkit (MRPT) and the Object Labeling Toolkit (OLT) (Ruiz-Sarmiento et al, 2015c). OLT comprises a set of public tools aimed at helping with the management and labeling of sequential RGB-D observations.…”
Section: Processed Data
confidence: 99%
“…Moreover, robust 3D recognition of everyday objects has been achieved by applying different machine learning techniques: e.g., depth kernel descriptors (Bo et al, 2011b), hierarchical kernel descriptors (Bo et al, 2011a), sparse distance learning (Lai et al, 2011a), scalable and hierarchical recognition (Lai et al, 2011b), or multi-scene analysis (Herbst et al, 2011b). Among other things, RGB-D cameras have been used to recognize human poses (Shotton et al, 2011) and to build and maintain semantic maps of scenes using probabilistic graphical models for recognizing objects and rooms (Ruiz-Sarmiento, 2016; Ruiz-Sarmiento et al, 2015), etc.…”
Section: Introduction
confidence: 99%