Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval 2009
DOI: 10.1145/1571941.1572009
Combining audio content and social context for semantic music discovery

Cited by 45 publications (31 citation statements)
References 20 publications
“…We also want to compare the performance of the proposed method with that of the conventional methods [9,11] and apply to various challenging real-world problems e.g., multi-modal event correlation anal- …”
Section: Discussion
confidence: 99%
“…To cope with this problem, most previous works have tried to automatically associate sounds with words for query-by-text retrieval or music annotation [4,5,6,7,8,9,10,11]. Recently, inference techniques based on topic models, such as probabilistic latent semantic analysis (pLSA) and latent Dirichlet allocation (LDA), have been exploited for automatic image annotation and retrieval [12,13].…”
Section: Introduction
confidence: 99%
“…"happy" or "rock") to refer to music. Semantic/tag-based or category-based retrieval systems such as the ones proposed by Knees et al [125] or Turnbull et al [278] rely on methods for the estimation of semantic labels from music. This retrieval scenario is characterized by a low specificity and long-term granularity.…”
Section: Music Retrieval
confidence: 99%
“…Few methods have been proposed that combine the information from both acoustic content and social context. In [13] timbre and harmonic features are used to represent acoustic content while social tags and web documents represent social context. Similar combining approaches are used in [14] for multi-label music style classification and in [15] where a track's significant musical content or musword is considered along with social tags.…”
Section: Related Work
confidence: 99%