2021
DOI: 10.1007/978-3-030-72240-1_60
LEMONS: Listenable Explanations for Music recOmmeNder Systems

Cited by 6 publications (10 citation statements)
References 20 publications
“…One line of work proposes listenable explanations [10], inspired by radio shows in which hosts provide information about played tracks to create transitions. Alternatively, item parts such as track snippets focusing on a particular audio source (e.g., instrument or voice [72]) can be emphasized as reasons for recommendation.…”
Section: Overview of Explanation Methods for MRSs
Citation type: mentioning (confidence: 99%)
“…Other than textual modalities, explanations in MRSs include displaying album covers, which may convey information about the style or even allow listeners to recognize record labels (e.g., Deutsche Grammophon, Blue Note). Short audio thumbnails are also a promising way to provide explanations that cannot otherwise be expressed in words [72].…”
Section: Example-based Explanations
Citation type: mentioning (confidence: 99%)
“…SLIME was demonstrated on the task of singing voice detection [18] and used for analysing a replay spoofing detection system [6]. Haunschmid et al. used other types of interpretable features (super pixels [7], source separation estimates [8,9]) for explaining the predictions of a variety of models, including music taggers [8,9] and a content-based music recommender system [10]. Mishra et al. [11] proposed different content types for replacing the "grayed out" segments (e.g.…”
Section: Local Interpretable Model-agnostic Explanations
Citation type: mentioning (confidence: 99%)
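The excerpt above describes LIME-style explainers that use source-separation estimates as interpretable components, which is the idea behind audioLIME. As a rough illustration of that mechanism only, and not the authors' actual implementation, the sketch below perturbs a mix by switching pre-separated stems on and off, queries a black-box scoring function on each remix, and fits a weighted linear surrogate whose coefficients act as per-stem importances. The stem names, the `model_predict` callable, the number of perturbation samples, and the kernel width are all assumptions made for illustration.

```python
# Minimal LIME-style sketch with source-separated stems as interpretable
# components (audioLIME-like idea). `model_predict` and the stem dict are
# hypothetical placeholders, not the paper's actual API.
import numpy as np
from sklearn.linear_model import Ridge

def explain_with_stems(stems, model_predict, n_samples=256, seed=0):
    """Estimate a per-stem importance weight for a black-box audio model.

    stems: dict mapping a stem name (e.g. 'vocals', 'drums') to a 1-D
           waveform; all stems have equal length and sum to the full mix.
    model_predict: callable taking a waveform and returning a scalar score
                   (e.g. a tag probability or a recommender affinity score).
    """
    rng = np.random.default_rng(seed)
    names = list(stems)
    k = len(names)

    # Binary masks decide which stems are kept in each perturbed remix.
    masks = rng.integers(0, 2, size=(n_samples, k))
    masks[0] = 1  # keep the unperturbed mix as the first sample

    scores = np.empty(n_samples)
    for i, mask in enumerate(masks):
        kept = [stems[name] for name, keep in zip(names, mask) if keep]
        remix = np.sum(kept, axis=0) if kept else np.zeros_like(stems[names[0]])
        scores[i] = model_predict(remix)

    # Weight remixes by proximity to the full mix (fewer removed stems).
    removed_fraction = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(removed_fraction ** 2) / 0.25)

    # Interpretable linear surrogate over the binary stem-presence space.
    surrogate = Ridge(alpha=1.0).fit(masks, scores, sample_weight=weights)
    return dict(zip(names, surrogate.coef_))
```

In a hypothetical usage, the stems would come from an off-the-shelf separator and the returned weights could be used to pick the most influential stem, whose snippet would then be played back as a listenable explanation in the spirit of the work discussed in these excerpts.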
“…A plethora of explanation methods ("explainers") were originally developed for text or image data and adapted to the audio domain [1,2], or specifically introduced for MIR systems [3]. Most notably, different versions of Local Interpretable Model-agnostic Explanations (LIME), a post-hoc explainer [4], have been used to explain models in a variety of MIR tasks [5][6][7][8][9][10][11].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Previous work on interpretability in MIR has dealt with tasks such as music tagging using self-attention [4] and transcription using invertible neural networks [5], and post-hoc explanations for music content analysis have been used to understand what a genre classifier [6] or a singing voice detector [7][8][9][10][11] has learnt. More recently, audioLIME has been proposed [12,13] and has shown promise in explaining tagging models [14] as well as recommendation models [15].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)