2018
DOI: 10.1007/s00521-018-3708-6

Conditional neural sequence learners for generating drums’ rhythms

Abstract: Considering music as a sequence of events with multiple complex dependencies, the Long Short-Term Memory (LSTM) architecture has proven very efficient in learning and reproducing musical styles. However, the generation of rhythms requires additional information regarding musical structure and accompanying instruments. In this paper we present DeepDrum, an adaptive Neural Network capable of generating drum rhythms under constraints imposed by Feed-Forward (Conditional) Layers which contain musical parameters al…
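The conditional architecture the abstract describes can be sketched as follows. This is a minimal illustrative model assuming PyTorch; the layer sizes, condition dimensionality, and feature choices (e.g. bar position, bass-line activity) are invented for the sketch and are not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class ConditionalDrumLSTM(nn.Module):
    """Illustrative sketch: an LSTM over drum-event tokens whose input is
    concatenated with a feed-forward embedding of musical conditions
    (e.g. bar position, accompanying bass line), loosely following the
    conditional-layer idea in the abstract. All sizes are hypothetical."""

    def __init__(self, vocab_size=128, cond_dim=16, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Feed-forward (conditional) layer projecting musical parameters
        self.cond_ff = nn.Sequential(nn.Linear(cond_dim, 32), nn.ReLU())
        self.lstm = nn.LSTM(embed_dim + 32, hidden_dim,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, conditions):
        # tokens: (batch, seq) int indices; conditions: (batch, seq, cond_dim)
        x = torch.cat([self.embed(tokens), self.cond_ff(conditions)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # logits over the next drum event at each step
```

At generation time one would sample from the output logits step by step, feeding the sampled event back in together with the condition vector for the next position.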

Cited by 15 publications (15 citation statements) | References 21 publications
“…We have five distinct symbols: two indicating the first two bars of a phrase, two for the last two bars and one more descriptor for the rest bars. These indicators allow the model to learn the initial and final events of a phrase, an approach that has been successfully used in the task of melodic harmonisation [33] and drums generation [20].…”
Section: A. Encoder Representation - Controllable by the User
confidence: 99%
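The five-symbol bar-indicator scheme quoted above can be sketched as a simple lookup. The symbol names below are illustrative placeholders, not taken from the cited papers; the scheme assumes bars indexed from zero within a phrase.

```python
def bar_position_symbol(bar_index, phrase_length):
    """Map a bar's position within a phrase to one of five symbols:
    two for the first two bars, two for the last two bars, and one
    for all remaining bars. Names are hypothetical."""
    if bar_index == 0:
        return "FIRST"
    if bar_index == 1:
        return "SECOND"
    if bar_index == phrase_length - 1:
        return "LAST"
    if bar_index == phrase_length - 2:
        return "SECOND_TO_LAST"
    return "MIDDLE"

# For an 8-bar phrase this yields FIRST, SECOND, four MIDDLE bars,
# SECOND_TO_LAST, LAST, giving the model explicit markers for
# phrase boundaries.
```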
“…We use the calculated valence as a high-level conditioning feature, along with others (e.g., time signature and grouping indicators inspired by [20]), in our proposed generative lead sheet system based on sequence-to-sequence architectures (LSTM [21] and Transformer [22]). Another novel aspect of our approach is a unique strategy to include the high-level user conditions as the encoder input, whereas the musical events of the lead sheet are predicted in the decoder.…”
Section: Introduction
confidence: 99%
“…In sequential systems (e.g. as the one presented by Makris et al (2019) ), the decision for each note depends only on previous notes, with additional potential constraints. In non-sequential systems (e.g., as Deep Bach; Hadjeres et al (2017) ), new notes are inserted by sampling, forming “dynamic” constraints for notes that are inserted later on, regardless of time precedence – i.e.…”
Section: Motivation, Research Questions and Contribution
confidence: 99%
“…In [9], Makris et al. proposed an architecture with stacked LSTM layers conditioned with a Feedforward layer to generate rhythm patterns. The Feedforward layer takes information on accompanying bass lines and the position within a bar.…”
Section: Related Work 2.1 Rhythm Generation
confidence: 99%
“…Gillick et al. also provide an Ableton Live plugin based on the method, which allows users to generate new rhythms within the Ableton Live DAW environment. However, users cannot train their own models, and users' control over generated patterns is limited.…”
Section: Related Work 2.1 Rhythm Generation
confidence: 99%