Abstract: Making sense of the surrounding context and ongoing events through not only the visual inputs but also acoustic cues is critical for various AI applications. This paper presents an attempt to learn a neural network model that recognizes more than 500 different sound events from the audio part of user generated videos (UGV). Aside from the large number of categories and the diverse recording conditions found in UGV, the task is challenging because a sound event may occur only for a short period of tim…
“…Our system closely matches the system of Yu et al. [23], and outperforms all other systems by a large margin.

System       Training clips   mAP     AUC     d-prime
[27,9]       1M               0.314   0.959   2.452
Kumar [28]   22k              0.213   0.927   –
Shah [18]    22k              0.229   0.927   –
Wu [29]      22k              –       0.927   –
Kong [22]    2M               0.327   0.965   2.558
Yu [23]      2M               0.360   0.970   2.660
Chen [30]    600k             0.316   –       –
Chou [31]    1M               0.327   0.951   –

[23] uses multi-level attention: attention layers are built upon multiple hidden layers, whose outputs are concatenated and further processed by a fully connected layer to yield a recording-level prediction. No frame-level predictions at all are made in this process.…”
Section: TALNet: Joint Tagging and Localization on Audio Set
Sound event detection (SED) entails two subtasks: recognizing what types of sound events are present in an audio stream (audio tagging), and pinpointing their onset and offset times (localization). In the popular multiple instance learning (MIL) framework for SED with weak labeling, an important component is the pooling function. This paper compares five types of pooling functions both theoretically and experimentally, with special focus on their performance of localization. Although the attention pooling function is currently receiving the most attention, we find the linear softmax pooling function to perform the best among the five. Using this pooling function, we build a neural network called TALNet. It is the first system to reach state-of-the-art audio tagging performance on Audio Set, while exhibiting strong localization performance on the DCASE 2017 challenge at the same time.
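To make the pooling-function comparison concrete, here is a minimal PyTorch sketch (not the authors' code; the tensor shapes are illustrative, and the linear softmax form y_clip = Σ_i p_i² / Σ_i p_i is the commonly cited definition and should be checked against the paper) contrasting max pooling, linear softmax pooling, and attention pooling for aggregating frame-level probabilities into a clip-level prediction under weak labeling.

```python
import torch

def max_pool(frame_probs):
    # frame_probs: (batch, time, classes), frame-level event probabilities in [0, 1].
    # Max pooling: the clip-level probability is the largest frame-level probability.
    return frame_probs.max(dim=1).values

def linear_softmax_pool(frame_probs, eps=1e-7):
    # Linear softmax pooling: each frame is weighted by its own probability,
    # y_clip = sum_i p_i^2 / sum_i p_i (assumed form of the definition).
    return (frame_probs ** 2).sum(dim=1) / (frame_probs.sum(dim=1) + eps)

def attention_pool(frame_probs, attn_logits):
    # Attention pooling (one common form): a separate attention branch produces
    # per-frame weights, normalized over time, that gate the frame probabilities.
    weights = torch.softmax(attn_logits, dim=1)            # (batch, time, classes)
    return (weights * frame_probs).sum(dim=1)

# Toy usage with hypothetical shapes: 4 clips, 240 frames, 527 classes.
probs = torch.rand(4, 240, 527)
attn_logits = torch.randn(4, 240, 527)
print(max_pool(probs).shape, linear_softmax_pool(probs).shape,
      attention_pool(probs, attn_logits).shape)
```

The pooling choice matters for localization because it determines how the clip-level gradient is distributed back to individual frames during training.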
“…Attention neural networks have been proposed for AudioSet tagging in [15,16]. Later, a clip-level and segment-level model with attention supervision was proposed in [36].…”
Section: Audio Tagging With Weakly Labelled Data
Audio tagging is the task of predicting the presence or absence of sound classes within an audio clip. Previous work in audio tagging focused on relatively small datasets limited to recognising a small number of sound classes. We investigate audio tagging on AudioSet, which is a dataset consisting of over 2 million audio clips and 527 classes. AudioSet is weakly labelled, in that only the presence or absence of sound classes is known for each clip, while the onset and offset times are unknown. To address the weakly-labelled audio tagging problem, we propose attention neural networks as a way to attend to the most salient parts of an audio clip. We bridge the connection between attention neural networks and multiple instance learning (MIL) methods, and propose decision-level and feature-level attention neural networks for audio tagging. We investigate attention neural networks modelled by different functions, depths and widths. Experiments on AudioSet show that the feature-level attention neural network achieves a state-of-the-art mean average precision (mAP) of 0.369, outperforming the best MIL method of 0.317 and Google's deep neural network baseline of 0.314. In addition, we discover that the audio tagging performance on AudioSet embedding features has a weak correlation with the number of training samples and the quality of labels of each sound class.
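As a rough illustration of what a decision-level attention network for weakly labelled tagging can look like (PyTorch; the single-layer trunk and layer sizes are illustrative assumptions, not the authors' architecture): frame embeddings feed two heads, one producing frame-level class probabilities and the other producing attention weights that are normalized over time before pooling, and training uses only clip-level labels.

```python
import torch
import torch.nn as nn

class DecisionLevelAttentionTagger(nn.Module):
    """Sketch of decision-level attention pooling for weakly labelled audio tagging.
    The trunk and layer sizes are illustrative assumptions."""
    def __init__(self, feat_dim=128, hidden=512, n_classes=527):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)   # frame-level class scores
        self.attention = nn.Linear(hidden, n_classes)    # frame-level attention scores

    def forward(self, x):
        # x: (batch, time, feat_dim) frame features (e.g. log-mel or embedding frames)
        h = self.trunk(x)
        frame_probs = torch.sigmoid(self.classifier(h))   # (batch, time, classes)
        attn = torch.softmax(self.attention(h), dim=1)    # normalize over time
        clip_probs = (attn * frame_probs).sum(dim=1)      # (batch, classes)
        return clip_probs, frame_probs

# Weakly labelled training: only clip-level targets are available.
model = DecisionLevelAttentionTagger()
x = torch.rand(8, 240, 128)
clip_targets = torch.randint(0, 2, (8, 527)).float()
clip_probs, _ = model(x)
loss = nn.functional.binary_cross_entropy(clip_probs, clip_targets)
loss.backward()
```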
“…Strongly supervised attention loss: Existing attention models for weakly labeled AEC are usually trained by minimizing a loss between clip-level labels, y ∈ {0, 1}^C, and the clip-level predictions [12], [13]. The attention matrix learned in this process will focus on the most relevant and discriminative parts of the audio clip for prediction.…”
“…Recently, attention schemes have been applied to weakly labeled AEC. The attention mechanism helps a model focus on subsections of audio which contribute to the classification while ignoring irrelevant instances such as background noises [11], [12].…”
We describe a novel weakly labeled Audio Event Classification approach based on a self-supervised attention model. The weakly labeled framework is used to eliminate the need for an expensive data labeling procedure, and self-supervised attention is deployed to help the model distinguish between relevant and irrelevant parts of a weakly labeled audio clip more effectively than prior attention models. We also propose a highly effective strongly supervised attention model for when strong labels are available; this model also serves as an upper bound for the self-supervised model. The performance of the model trained with self-supervised attention is comparable to that of the strongly supervised one, which is trained using strong labels. We show that our self-supervised attention method is especially beneficial for short audio events. We achieve 8.8% and 17.6% relative mean average precision improvements over the current state-of-the-art systems for SL-DCASE-17 and balanced AudioSet.
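The quoted passages distinguish the usual clip-level (weak) loss from a strongly supervised attention loss. One plausible way to write the combined objective, sketched below with a hypothetical helper and mixing weight lam (the exact formulation in the cited work may differ), is clip-level binary cross-entropy plus a term that pulls the attention weights toward time-normalized frame-level labels whenever strong labels are available.

```python
import torch.nn.functional as F

def clip_and_attention_loss(clip_probs, clip_labels, attn_weights,
                            frame_labels=None, lam=1.0):
    # Hypothetical combined objective (assumed form, not the paper's exact loss):
    # always apply clip-level BCE; when strong (frame-level) labels are available,
    # additionally supervise the attention weights.
    loss = F.binary_cross_entropy(clip_probs, clip_labels)
    if frame_labels is not None:
        # Normalize frame labels over time so they lie on the same simplex as the
        # attention weights, then penalize the mismatch.
        target = frame_labels / frame_labels.sum(dim=1, keepdim=True).clamp(min=1e-7)
        loss = loss + lam * F.mse_loss(attn_weights, target)
    return loss

# Shapes (hypothetical): clip_probs / clip_labels are (batch, classes);
# attn_weights / frame_labels are (batch, time, classes).
```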