2017
DOI: 10.48550/arxiv.1705.09888
Preprint

Cross-modal Subspace Learning for Fine-grained Sketch-based Image Retrieval

Cited by 1 publication (1 citation statement)
References 65 publications
“…), however, when describing the appearance or motion of those objects, the idea of keyword-based semantic descriptors fails. Other aspects that limit the scope of searching for video content include the level of annotation at the clip level, which does not consider the frames or even the objects located within the sequence of frames (3,4) . If the spatial and temporal resolutions could be calculated effectively, it might help search for accurate video content.…”
Section: Introduction (mentioning)
confidence: 99%