2012
DOI: 10.1007/978-3-642-33247-0_5

MusiClef: Multimodal Music Tagging Task

Abstract: MusiClef is a multimodal music benchmarking initiative that will be running a MediaEval 2012 Brave New Task on Multimodal Music Tagging. This paper describes the setup of this task, showing how it complements existing benchmarking initiatives and fosters less explored methodological directions in Music Information Retrieval. MusiClef deals with a concrete use case, encourages multimodal approaches based on this use case, and strives for transparency of results as much as possible. Transparency is encouraged …

Cited by 4 publications (3 citation statements) · References 6 publications

Citation statements:
“…Therefore, we made a functional categorization of the tags (also released as part of the reference implementation, see [8] for further details) to allow a deeper result analysis, including categories related to affect, genre, sound quality, but also specific occasions or places for which the song would be appropriate.…”
Section: MusiClef 2012 Reference Code
confidence: 99%
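The functional tag categorization mentioned in the statement above can be pictured as a plain tag-to-category lookup that lets results be broken down per category. The sketch below is only an illustration under that reading: the tag names and category assignments are hypothetical, and the actual categorization is the one released with the reference implementation cited as [8].

# Purely illustrative sketch (not the released MusiClef code): a functional
# tag categorization as a simple lookup table. Tag names here are hypothetical
# examples; the real categories and assignments ship with the reference
# implementation cited as [8].
TAG_CATEGORIES = {
    "affect": {"happy", "melancholic", "energetic"},
    "genre": {"rock", "jazz", "electronic"},
    "sound quality": {"lo-fi", "clean", "distorted"},
    "occasion/place": {"party", "workout", "road trip"},
}

def categorize(tag: str) -> str:
    """Return the functional category of a tag, or 'other' if it is not listed."""
    for category, tags in TAG_CATEGORIES.items():
        if tag in tags:
            return category
    return "other"

# Example: map predicted tags to categories for a per-category result analysis.
predicted_tags = ["jazz", "happy", "road trip", "saxophone"]
for tag in predicted_tags:
    print(f"{tag!r} -> {categorize(tag)}")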
“…In 2012, the auto-tagging task was further refined, and run as a "MusiClef Multimodal Music Tagging" Brave New Task in the MediaEval multimedia evaluation campaign [8,7], which formed the basis for the data set described in this paper. The auto-tagging task will be described in the following subsection.…”
Section: Motivation
confidence: 99%
“…the aggregation and integration of information in multiple languages, media, and coming from different domains, such as: semantic annotation and question answering in the biomedical domain; selecting success criteria in an academic library catalogue; finding similar content in different scenarios on the Web; interactive information retrieval and formative evaluation for medical professionals; microblog summarization and disambiguation; multimodal music tagging; multi-faceted IR in multimodal domains; ranking in faceted search [33,56,109,110,127,152,183,184,235,241,244,245,249]; results of a search system but also for improving interaction with and exploration of experimental outcomes such as exploiting visual analytics for failure analysis; comparing the relative performances of IR systems; and visualization for sentiment analysis [19,68,143,263];…”
Section: The Conference
confidence: 99%