2021
DOI: 10.48550/arxiv.2107.09142
Preprint

Sequence-to-Sequence Piano Transcription with Transformers

Abstract: Automatic Music Transcription has seen significant progress in recent years by training custom deep neural networks on large datasets. However, these models have required extensive domain-specific design of network architectures, input/output representations, and complex decoding schemes. In this work, we show that equivalent performance can be achieved using a generic encoder-decoder Transformer with standard decoding methods. We demonstrate that the model can learn to translate spectrogram inputs directly to …

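The abstract's core claim is that an off-the-shelf encoder-decoder Transformer with standard decoding suffices for transcription. Below is a minimal sketch of that setup in PyTorch; it is not the authors' implementation. The model dimensions, vocabulary size, token ids, and the names Seq2SeqTranscriber and greedy_decode are illustrative assumptions, and positional encodings are omitted for brevity.

```python
# Minimal sketch (assumed, not the paper's code): a generic encoder-decoder
# Transformer mapping spectrogram frames to a sequence of MIDI-like event
# tokens, decoded greedily.
import torch
import torch.nn as nn

N_MELS = 512       # assumed spectrogram bins per frame
VOCAB_SIZE = 1000  # assumed size of the MIDI-like event vocabulary
D_MODEL = 512

class Seq2SeqTranscriber(nn.Module):
    def __init__(self):
        super().__init__()
        self.input_proj = nn.Linear(N_MELS, D_MODEL)      # frames -> model dim
        self.token_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=8,
            num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True,
        )
        self.out_proj = nn.Linear(D_MODEL, VOCAB_SIZE)    # logits over tokens
        # NOTE: positional encodings are omitted here for brevity; a real
        # model needs them on both the frame and token inputs.

    def forward(self, spec, tokens):
        # spec: (batch, frames, N_MELS); tokens: (batch, seq) of event ids
        src = self.input_proj(spec)
        tgt = self.token_emb(tokens)
        # Causal mask so each output token attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=mask)
        return self.out_proj(hidden)

# "Standard decoding methods" as the abstract puts it: plain greedy decoding,
# feeding back the argmax token until an (assumed) end-of-sequence id appears.
@torch.no_grad()
def greedy_decode(model, spec, sos_id=0, eos_id=1, max_len=1024):
    tokens = torch.full((spec.size(0), 1), sos_id, dtype=torch.long)
    for _ in range(max_len):
        logits = model(spec, tokens)
        next_tok = logits[:, -1].argmax(-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
        if (next_tok == eos_id).all():
            break
    return tokens
```
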
Cited by 5 publications (12 citation statements)
References 18 publications

“…Our model outperforms other baselines specifically optimized for low-resource datasets, such as Cheuk et al (2021), while also remaining competitive with or outperforming models tuned for large single-instrument datasets, i.e. Hawthorne et al (2021).…”
Section: Results (mentioning)
confidence: 71%
“…The work most closely related to ours is Hawthorne et al (2021), which uses an encoder-decoder Transformer architecture to transcribe solo piano recordings. Here, we extend their approach to transcribe polyphonic music with an arbitrary number of instruments.…”
Section: Music Transcription (mentioning)
confidence: 99%