2010
DOI: 10.1007/978-3-642-15549-9_49

Learning to Recognize Objects from Unseen Modalities

Abstract: In this paper we investigate the problem of exploiting multiple sources of information for object recognition tasks when additional modalities that are not present in the labeled training set are available for inference. This scenario is common to many robotics sensing applications and is in contrast with the assumption made by existing approaches that require at least some labeled examples for each modality. To leverage the previously unseen features, we make use of the unlabeled data to learn a map…
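The truncated abstract outlines the core idea: a recognizer is trained on one modality, and unlabeled data observed in both modalities is used to learn a map so that a new, never-labeled modality can be exploited at inference. The sketch below illustrates that general recipe only; it is not the paper's algorithm (the abstract does not specify one), and all names, dimensions, and the ridge-regression map are illustrative assumptions.

```python
# Hedged sketch of "unseen modality" recognition via unlabeled paired data.
# NOT the paper's exact method; the ridge map and all shapes are assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Labeled data exists only for modality A (e.g., image features).
X_a_labeled = rng.normal(size=(100, 32))
y_labeled = rng.integers(0, 2, size=100)

# Unlabeled data is observed in BOTH modalities: A and the new modality B.
X_a_unlabeled = rng.normal(size=(500, 32))
X_b_unlabeled = X_a_unlabeled @ rng.normal(size=(32, 16)) \
    + 0.1 * rng.normal(size=(500, 16))

# 1) Train the recognizer on the labeled modality alone.
clf = SVC().fit(X_a_labeled, y_labeled)

# 2) Use the unlabeled paired data to learn a map from the unseen
#    modality B into the labeled feature space A.
b_to_a = Ridge(alpha=1.0).fit(X_b_unlabeled, X_a_unlabeled)

# 3) At inference, an example observed only in modality B is first
#    mapped into modality A, then classified with the existing model.
X_b_test = rng.normal(size=(10, 16))
print(clf.predict(b_to_a.predict(X_b_test)))
```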

Cited by 19 publications (21 citation statements)
References 22 publications
“…Although effective, it was limited to transferring information between RGB models. Other approaches have been proposed to transfer generic information across modalities [8], but have only been shown with weak detection models. b) Region Proposals: We note that many top-performing supervised object detection methods [15] and weakly supervised methods [41], [24] rely on a good set of bottom-up bounding box object candidates.…”
Section: Related Work
Mentioning confidence: 99%
“…12. We used the three types of precomputed pairwise distances provided on the web by Christoudias et al. (2010); for the details of the distances, refer to that paper. The classification accuracies are evaluated by three-fold cross-validation and are shown in Table 8.…”
Section: Fig. 11 Oxford Flower Dataset
Mentioning confidence: 99%
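The statement above evaluates classifiers built from precomputed pairwise distances with three-fold cross-validation. A minimal sketch of that protocol follows, assuming scikit-learn; the distance-to-kernel conversion exp(-D / mean(D)) is an illustrative choice, not necessarily the citing paper's.

```python
# Hedged sketch: three-fold cross-validation on precomputed pairwise
# distances. The kernel conversion and toy data are assumptions.
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))        # toy features standing in for images
y = rng.integers(0, 3, size=60)     # toy 3-class labels

D = pairwise_distances(X)           # precomputed pairwise distance matrix
K = np.exp(-D / D.mean())           # convert distances to a similarity kernel

clf = SVC(kernel="precomputed")     # the SVM consumes the kernel directly
scores = cross_val_score(clf, K, y, cv=3)  # three-fold cross-validation
print(scores.mean())
```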
“…The key conceptual difference is that domain adaptation seeks to apply the training examples in another modality, while image fusion seeks to create a new complementary modality. GP models have also featured in domain adaptation problems, for example to recognise objects in images that have a different resolution to the labelled training data [7].…”
Section: Related Work
Mentioning confidence: 99%
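As a rough illustration of the GP-based domain adaptation this statement refers to, the sketch below fits a Gaussian-process map from a low-resolution feature space into the labelled high-resolution one, then classifies the lifted test features. It is a toy analogue under stated assumptions, not the method of [7]; all names and dimensions are hypothetical.

```python
# Hedged sketch: GP regression lifts low-resolution features into the
# labelled high-resolution feature space before classification.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Paired unlabeled examples observed at both resolutions.
X_hi = rng.normal(size=(200, 8))                       # high-res features
X_lo = X_hi[:, :4] + 0.05 * rng.normal(size=(200, 4))  # degraded low-res view

# Multi-output GP mapping low-res features to the high-res space.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_lo, X_hi)

# Classifier trained on labelled high-res data only.
X_train, y_train = rng.normal(size=(100, 8)), rng.integers(0, 2, size=100)
clf = KNeighborsClassifier().fit(X_train, y_train)

# A low-res test example is lifted with the GP, then classified.
x_lo_test = rng.normal(size=(3, 4))
print(clf.predict(gp.predict(x_lo_test)))
```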