“…Yu et al. [10] proposed constructing a network with an auxiliary classifier based on onset groups and instrument families to generate valuable training data. Another study addressed predominant instrument recognition in polyphonic music using convolutional recurrent neural networks (CRNNs) [9]. Hung et al. (2019) introduced multi-task learning for instrument recognition.…”
Section: Instrument Recognition
“…In their work, they proposed a method to recognize both pitches and instruments [16]. To augment the data, they employed a Wave Generative Adversarial Network (WaveGAN) architecture to generate audio files [7][8][9]. These approaches demonstrate the use of various techniques, including feature extraction, deep learning, image processing, and data augmentation, to improve instrument recognition accuracy and to handle challenges such as low-quality recordings and polyphonic music.…”
Section: Instrument Recognition
“…In the first stage, data augmentation techniques such as Generative Adversarial Networks (GANs) are used to generate additional training data. In the second stage, deep neural networks such as Convolutional Neural Networks (CNNs), Gated Recurrent Units (GRUs), Convolutional Gated Recurrent Units (ConvGRUs), and Transformers are employed to map the augmented data to instrument labels [7][8][9]. Additionally, researchers have explored label augmentation, which constructs auxiliary classifiers from additional labels introduced within a multi-task learning framework.…”
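As a rough, hypothetical illustration of this two-stage scheme, the sketch below generates label-conditioned synthetic samples (a simple noise perturbation standing in for a GAN generator) and then fits a trivial nearest-centroid classifier standing in for the CNN/GRU/Transformer stage; all names and shapes are illustrative, not from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: label-conditioned augmentation (stand-in for a GAN generator).
# Each class has a fixed "prototype" spectrogram; augmented samples are
# noisy copies of the prototype for the requested label.
def augment(prototypes, label, n_samples, noise_scale=0.1):
    base = prototypes[label]
    return base + noise_scale * rng.standard_normal((n_samples,) + base.shape)

# Stage 2: a trivial nearest-centroid classifier mapping (augmented)
# spectrograms to instrument labels.
def fit_centroids(samples_by_label):
    return {lbl: np.mean(x, axis=0) for lbl, x in samples_by_label.items()}

def predict(centroids, x):
    return min(centroids, key=lambda lbl: np.linalg.norm(x - centroids[lbl]))

# Toy run: two instrument classes with distinct 4x4 "spectrograms".
prototypes = {"piano": np.zeros((4, 4)), "violin": np.ones((4, 4))}
train = {lbl: augment(prototypes, lbl, 16) for lbl in prototypes}
centroids = fit_centroids(train)
print(predict(centroids, augment(prototypes, "violin", 1)[0]))  # prints "violin"
```

The key point the abstract criticizes is visible here: stage 1 runs before and independently of stage 2, so nothing forces the generator to cover the classes the classifier most needs.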
Instrument recognition is a critical task in music information retrieval, and deep neural networks have become the dominant models for it due to their effectiveness. Recently, incorporating data augmentation methods into deep neural networks has been a popular way to improve instrument recognition performance. However, existing data augmentation processes are usually based on simple instrument spectrogram representations and are typically independent of the predominant instrument recognition process. This can leave certain required instrument types under-covered, leading to inconsistencies between the augmented data and the specific requirements of the recognition model. To build a more expressive instrument representation and address this inconsistency, this paper constructs a combined two-channel representation that further captures the unique rhythm patterns of different types of instruments, and proposes a new predominant instrument recognition strategy called the Augmentation Embedded Deep Convolutional Neural Network (AEDCN). AEDCN adds two fully connected layers to the backbone neural network and integrates data augmentation directly into the recognition process by introducing a proposed Adversarial Embedded Conditional Variational AutoEncoder (ACEVAE) between the added fully connected layers. This embedded module generates augmented data for designated labels, ensuring its compatibility with the predominant instrument recognition model. The effectiveness of the combined representation and AEDCN is validated through comparative experiments against other commonly used deep neural networks and data augmentation-based predominant instrument recognition methods on a polyphonic music recognition dataset. The results demonstrate the superior performance of AEDCN in predominant instrument recognition tasks.
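The core mechanism behind label-conditioned generation of this kind is the conditional VAE's reparameterization step: a latent code is sampled conditioned on a designated label and decoded back into a feature vector. The sketch below is a generic CVAE forward pass with randomly initialized (untrained) weights, not the paper's ACEVAE; all layer sizes and variable names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
FEAT, LABELS, LATENT = 8, 3, 2  # feature dim, number of classes, latent dim

# Randomly initialized encoder/decoder weights (illustrative only).
W_enc = rng.standard_normal((FEAT + LABELS, 2 * LATENT)) * 0.1
W_dec = rng.standard_normal((LATENT + LABELS, FEAT)) * 0.1

def one_hot(label):
    v = np.zeros(LABELS)
    v[label] = 1.0
    return v

def cvae_generate(x, label):
    """Encode a feature vector together with its label, sample a latent code
    via the reparameterization trick, and decode a label-conditioned output."""
    h = np.concatenate([x, one_hot(label)]) @ W_enc
    mu, log_var = h[:LATENT], h[LATENT:]
    eps = rng.standard_normal(LATENT)
    z = mu + np.exp(0.5 * log_var) * eps          # reparameterization trick
    return np.concatenate([z, one_hot(label)]) @ W_dec

x = rng.standard_normal(FEAT)
x_aug = cvae_generate(x, label=1)  # augmented feature for class 1
print(x_aug.shape)                 # prints (8,)
```

Because the label enters both the encoder and the decoder, the module can be asked for augmented samples of a specific instrument class, which is what lets such a generator sit inside the recognition network rather than run as a separate preprocessing step.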