2016
DOI: 10.1007/s11042-016-3659-9
Mobile multi-view object image search

Abstract: High user interaction capability of mobile devices can help improve the accuracy of mobile visual search systems. At query time, it is possible to capture multiple views of an object from different viewing angles and at different scales with the mobile device camera to obtain richer information about the object compared to a single view and hence return more accurate results. Motivated by this, we propose a new multi-view visual query model on multi-view object image databases for mobile visual search. Multi-v…
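The abstract does not spell out how the multiple query views are combined, so the sketch below is only a rough illustration of one common approach (late fusion of per-view similarity scores), not the paper's actual model. The names `multi_view_search`, `extract_descriptor`, and `db_descriptors` are assumptions introduced here for illustration.

import numpy as np

def multi_view_search(query_views, db_descriptors, extract_descriptor, top_k=10):
    """Toy multi-view retrieval: score every database image against each
    captured query view, then fuse the per-view cosine similarities by
    averaging (late fusion). `db_descriptors` is an (N, D) array of
    L2-normalized descriptors for N database images."""
    scores = np.zeros(db_descriptors.shape[0])
    for view in query_views:
        q = extract_descriptor(view)        # assumed L2-normalized (D,) vector
        scores += db_descriptors @ q        # cosine similarity to each image
    scores /= max(len(query_views), 1)      # average across the query views
    return np.argsort(-scores)[:top_k]      # indices of the best-matching images

Score averaging is just one plausible fusion rule; the paper's query model may weight or select views differently.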

Cited by 4 publications (2 citation statements)
References 29 publications
“…With the development of social networks in recent years, it has become easier to obtain text information related to an image, such as comments and tags, from users as they share pictures on the Internet [12]. Artificial annotation semantics technology allows the extraction of semantic concepts related to images from textual information.…”
Section: Cross-modal Information Retrieval and Convolutional Network Research
Confidence: 99%
“…To assess the capability of deep CNN features in retrieving color images in response to partially colored sketches, we collected more than 35,000 color images from various datasets, corresponding to the 250 categories of the TU Berlin sketches dataset. These images were gathered from the Corel-10k dataset [31], the Multi-view objects dataset [32], and Caltech256 [33].…”
Section: Experiments and Results
Confidence: 99%