2017
DOI: 10.1109/taslp.2017.2662479

Rhythm Transcription of Polyphonic Piano Music Based on Merged-Output HMM for Multiple Voices

Abstract: In a recent conference paper, we have reported a rhythm transcription method based on a merged-output hidden Markov model (HMM) that explicitly describes the multiple-voice structure of polyphonic music. This model solves a major problem of conventional methods that could not properly describe the nature of multiple voices as in polyrhythmic scores or in the phenomenon of loose synchrony between voices. In this paper we present a complete description of the proposed model and develop an inference tech…
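To make the merged-output idea concrete, the following is a minimal generative sketch, in Python, of two voices that each follow their own Markov chain over note values and are merged into a single observed onset stream; loose synchrony is mimicked by independent per-voice timing noise. All note values, transition weights, and noise parameters here are illustrative assumptions, not the parameterization of the paper's model.

import random

# Illustrative toy of the merged-output idea; parameters are assumptions, not the paper's.
NOTE_VALUES = [0.25, 0.5, 1.0]            # score-time increments in beats
TRANS = {                                 # per-voice Markov chain over note values
    0.25: [0.6, 0.3, 0.1],
    0.5:  [0.3, 0.5, 0.2],
    1.0:  [0.2, 0.3, 0.5],
}

def sample_voice(n_notes, sec_per_beat=0.5, timing_std=0.02, seed=0):
    """Sample one voice as a list of (score_beat, onset_seconds) pairs."""
    rng = random.Random(seed)
    beat, value = 0.0, rng.choice(NOTE_VALUES)
    notes = []
    for _ in range(n_notes):
        # loose synchrony: each voice gets its own independent timing noise
        notes.append((beat, beat * sec_per_beat + rng.gauss(0.0, timing_std)))
        value = rng.choices(NOTE_VALUES, weights=TRANS[value])[0]
        beat += value
    return notes

def merged_output(voices):
    """Time-ordered merge of all voices' onsets; a transcriber observes only this."""
    stream = [(onset, v, beat) for v, notes in enumerate(voices) for beat, onset in notes]
    return sorted(stream)

for onset, v, beat in merged_output([sample_voice(8, seed=1), sample_voice(8, seed=2)]):
    print(f"t={onset:6.3f}s  voice={v}  score_beat={beat:5.2f}")

Rhythm transcription runs this direction in reverse: given only the merged onset times, recover each note's score time (and, in the merged-output HMM, its voice).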

Cited by 34 publications (63 citation statements); references 24 publications.
“…The average (first, third quantiles) was 53.7% (43.8%, 67.3%) for the polyrhythmic data and 38.9% (28.2%, 47.3%) for the standard polyphony data. From the results we can find that methods using statistical learning tend to have better accuracies, and an HMM-based model with multiple-voice structure [9] had the best accuracy for the polyrhythmic data. Example results and discussions will be given in the poster.…”
Section: Results (mentioning; confidence: 95%)
“…There have been many studies on converting a music audio signal into a piano-roll representation based on acoustic modelling of musical sound (see [1,2] for reviews). To obtain a music score, we must recognise quantised note lengths (or note values) of the musical notes in piano rolls, which is called rhythm transcription [3][4][5][6][7][8][9].…”
Section: Results (mentioning; confidence: 99%)
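As a point of reference for what "recognising quantised note values" involves, the simplest possible baseline snaps each inter-onset interval to the nearest grid position of an assumed, fixed beat period; the statistical methods surveyed in these citations exist precisely because this baseline breaks down under tempo variation, expressive timing, and polyrhythm. The beat period and grid resolution below are assumptions for the sketch, not values from any cited system.

from fractions import Fraction

def quantize_iois(onsets_sec, sec_per_beat, resolution=4):
    """Naive baseline (not one of the cited methods): round each inter-onset
    interval to the nearest 1/resolution of a beat. Assumes the beat period is
    known and constant, which real performances violate."""
    note_values = []
    for prev, curr in zip(onsets_sec, onsets_sec[1:]):
        ioi_beats = (curr - prev) / sec_per_beat
        note_values.append(Fraction(round(ioi_beats * resolution), resolution))
    return note_values

# a slightly sloppy performance at 120 BPM (0.5 s per beat)
print(quantize_iois([0.00, 0.26, 0.49, 1.02, 1.52], sec_per_beat=0.5))
# -> [Fraction(1, 2), Fraction(1, 2), Fraction(1, 1), Fraction(1, 1)]

Even this toy only works because the beat period is given exactly; with an unknown or drifting tempo the rounding errors compound, which is what motivates the model-based approaches discussed below.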
“…Rhythm quantization methods receive note-track data or performed MIDI data (human performance recorded by a MIDI device) and output quantized MIDI data in which notes are associated with quantized onset and offset score times (in beats). Onset score times are usually estimated by removing temporal deviations in the input data, and approaches based on hand-crafted rules [10,11], statistical models [12][13][14][15][16][17][18], and a connectionist approach [19] have been studied. A recent study [18] has shown that methods based on hidden Markov models (HMMs) are currently state of the art.…”
Section: Introduction (mentioning; confidence: 99%)
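Since the quoted passage singles out HMM-based methods as the current state of the art, here is a hedged, self-contained toy of the general recipe: hidden states are candidate note values, each observed inter-onset interval is modelled as its note value times a (here fixed and known) tempo plus Gaussian timing noise, and Viterbi decoding picks the jointly most probable sequence. The transition and noise parameters are hand-set assumptions; the cited systems [12]-[18] learn richer models and also handle tempo.

import math

NOTE_VALUES = [0.25, 0.5, 1.0]   # hidden states: note values in beats (assumption)
SEC_PER_BEAT = 0.5               # tempo assumed known and fixed (120 BPM)
SIGMA = 0.03                     # timing-noise standard deviation in seconds (assumption)

def log_gauss(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def viterbi_quantize(iois_sec):
    """Toy HMM quantizer: assign one note value per inter-onset interval.
    Observation model: ioi ~ Normal(note_value * SEC_PER_BEAT, SIGMA)."""
    n = len(NOTE_VALUES)
    # mild preference for repeating the previous note value (rows sum to 1)
    log_trans = [[math.log(0.5 if i == j else 0.25) for j in range(n)] for i in range(n)]
    delta = [log_gauss(iois_sec[0], v * SEC_PER_BEAT, SIGMA) for v in NOTE_VALUES]
    backptrs = []
    for ioi in iois_sec[1:]:
        new_delta, ptrs = [], []
        for j, v in enumerate(NOTE_VALUES):
            best = max(range(n), key=lambda i: delta[i] + log_trans[i][j])
            ptrs.append(best)
            new_delta.append(delta[best] + log_trans[best][j]
                             + log_gauss(ioi, v * SEC_PER_BEAT, SIGMA))
        delta, backptrs = new_delta, backptrs + [ptrs]
    # backtrace the most probable state sequence
    state = max(range(n), key=lambda i: delta[i])
    path = [state]
    for ptrs in reversed(backptrs):
        state = ptrs[state]
        path.append(state)
    return [NOTE_VALUES[s] for s in reversed(path)]

# the same sloppy 120 BPM performance as above: eighth, eighth, quarter, quarter
print(viterbi_quantize([0.26, 0.23, 0.53, 0.50]))   # -> [0.5, 0.5, 1.0, 1.0]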
“…Onset score times are usually estimated by removing temporal deviations in the input data, and approaches based on hand-crafted rules [10,11], statistical models [12][13][14][15][16][17][18], and a connectionist approach [19] have been studied. A recent study [18] has shown that methods based on hidden Markov models (HMMs) are currently state of the art. Especially, the metrical HMM [13,14] has the advantage of being able to estimate the metre and bar lines and avoid grammatically incorrect score representations (e.g.…”
Section: Introduction (mentioning; confidence: 99%)
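The property credited to the metrical HMM in the quote, estimating metre and bar lines while ruling out ill-formed rhythms, comes from its choice of hidden state: the metrical position within a bar on a fixed grid. Note values are then differences of successive positions, and a bar line is implied wherever the position wraps around. The sketch below shows only that state-space bookkeeping for an assumed 4/4 metre and 16th-note grid; in an actual metrical HMM [13,14] these positions would be decoded from performed onset times with trained transition and timing models.

BEATS_PER_BAR = 4                    # assumed 4/4 metre
RESOLUTION = 4                       # grid positions per beat (16th-note grid)
N_POS = BEATS_PER_BAR * RESOLUTION   # hidden states: metrical positions 0..15

def note_value(pos, next_pos):
    """Note value in beats implied by moving from one metrical position to the next.
    A wrap past position 0 means a bar line was crossed."""
    step = (next_pos - pos) % N_POS
    if step == 0:
        step = N_POS                 # same position again = a full bar later
    return step / RESOLUTION

def bar_lines(positions):
    """Indices of notes that begin a new bar, i.e. where the position wraps around."""
    return [i for i in range(1, len(positions)) if positions[i] <= positions[i - 1]]

# e.g. a decoded position sequence for eight quarter notes in 4/4
positions = [0, 4, 8, 12, 0, 4, 8, 12]
print([note_value(p, q) for p, q in zip(positions, positions[1:])])   # seven 1.0-beat steps
print(bar_lines(positions))                                           # [4]: the fifth note opens bar 2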