2023
DOI: 10.1109/tmm.2022.3161851

Theme Transformer: Symbolic Music Generation With Theme-Conditioned Transformer

Cited by 42 publications (24 citation statements)
References 39 publications
“…MELONS [37] uses a multi-step generation method with Transformer-based models and a graph representation for music structure generation and structure-conditioned melody generation. Theme Transformer [30] achieves theme-based conditioning by producing the theme multiple times in the generation result, so that the output music follows the thematic material. These models were constructed based on the structure of music at multiple levels (e.g., the motif level and the phrase level).…”
Section: Related Work (mentioning)
confidence: 99%
“…There is no significant difference between the music generated by the proposed model and the original human-composed music (p=0.351), while there is a significant difference when comparing results generated by the other models with the original human-composed music (p<0.05), indicating better music quality for the proposed model.

Model          Mean  SD    t-test vs. human-composed
(truncated)    3.47  1.15  t=-6.01, p<0.001
CP-NC [13]     3.64  0.96  t=-4.74, p<0.001
PopMNet [33]   3.77  0.95  t=-4.41, p<0.001
Theme [30]     3.46  0.91  t=-7.37, p<0.001
MELONS [37]    3.89  …”
Section: Subjective Evaluation (mentioning)
confidence: 99%
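The comparison above rests on t-tests between listener ratings for generated and human-composed music: a non-significant p-value (p=0.351) indicates the proposed model's ratings are statistically indistinguishable from human ratings, while p<0.05 marks a real gap. A minimal sketch of such a test using `scipy.stats`; the ratings below are synthetic, not from the paper, and since the quote does not state whether paired or independent tests were used, a paired test is assumed:

```python
# Sketch of the significance test described above, on hypothetical
# 5-point listener ratings (same 8 listeners rate every system).
from scipy import stats

human = [4, 5, 4, 4, 5, 3, 4, 5]       # ratings of human-composed pieces
model = [4, 4, 4, 5, 5, 3, 4, 4]       # close to the human ratings
baseline = [3, 3, 2, 4, 3, 2, 3, 3]    # clearly lower ratings

# Paired t-test: is each system's rating distribution different
# from the human-composed reference?
t1, p1 = stats.ttest_rel(model, human)
t2, p2 = stats.ttest_rel(baseline, human)

print(f"model vs human:    t={t1:.2f}, p={p1:.3f}")   # expect p > 0.05
print(f"baseline vs human: t={t2:.2f}, p={p2:.3f}")   # expect p < 0.05
```

With data like this, the first comparison fails to reject the null hypothesis (no significant difference, mirroring p=0.351 in the quote), while the second rejects it with a negative t-statistic, matching the pattern of the t-values reported in the table.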
“…Originating from machine translation tasks, the Transformer [80] has received great attention from research communities well beyond natural language processing [24,30,31,74,85]. ViT [16] marked the first successful attempt at applying the Transformer model to computer vision tasks: it simply splits the whole image into patches and feeds them into Transformer encoders with multi-head attention mechanisms.…”
Section: Vision Transformer (mentioning)
confidence: 99%
“…From a technological view of the AI domain, it can be said that intelligent symbolic music generation systems have evolved in a rich and diverse way alongside the main computational analysis methods across the periods of this field. Initially, these systems were grounded in graph- and probability-based AI methods (Hiller and Baker, 1964; Ames, 1989), which remain in use (Catak et al., 2021; Glines, 2022) and which are today extended and combined with contemporary neural-network techniques (Briot et al., 2020) such as LSTMs (Eck and Schmidhuber, 2002; Privato et al., 2022) or Transformers (Shih et al., 2022).…”
Section: Introduction (unclassified)