2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM) 2019
DOI: 10.1109/bigmm.2019.00-12
ViTag: Automatic Video Tagging Using Segmentation and Conceptual Inference

Cited by 8 publications (2 citation statements) · References 13 publications
“…There are diverse methods used by researchers for image and video tagging. Most of the methods used a supervised approach, as in [4] [5] [7] [12] [14] [16] [19] [20], whereas few researchers adopted an unsupervised approach, as in [6] [8] [15] [25]. Both the supervised and unsupervised approaches used for image and video tagging leveraged different types of features.…”
Section: Discussion
Confidence: 99%
“…To the best of our knowledge, no state-of-the-art models suggest both tags and thumbnails relying on a sole algorithm; • (ii) as the tags are suggested considering the latest relevant popular cultural heritage topics, unlike most of the state-of-the-art tools (e.g., the proposal by Jin et al [11]), our proposal can constantly revitalize the associated tags; • (iii) the option, for the final user, of setting a trade-off between quality and quantity of suggested items. Unlike the current tools (e.g., the work of Patwardhan et al [19]), the final user can decide if it would be worth enriching a gem with many tags/thumbnails (with the risk of assigning less relevant items) or preferring to select only relevant tags/thumbnails having a confidence score higher than a certain threshold, assuming the risk of not retrieving any items (if there are no items with the confidence score higher than the fixed threshold).…”
Section: Authors and Reference Title Main Topic
Confidence: 99%