Interspeech 2021
DOI: 10.21437/interspeech.2021-1837

An Evaluation of Data Augmentation Methods for Sound Scene Geotagging

Abstract: Sound scene geotagging is a new topic of research which has evolved from acoustic scene classification. It is motivated by the idea of audio surveillance. Beyond describing the scene in a recording, a machine that can locate where the recording was captured would be of use to many. In this paper we explore a series of common audio data augmentation methods to evaluate which best improves the accuracy of audio geotagging classifiers. Our work improves on the state-of-the-art city geotagging method …

Cited by 4 publications (3 citation statements)
References 9 publications (5 reference statements)

“…Optimizing the parameters used for data augmentation and the transforms involved within is one such proposal. Most likely, as backed up by the exposure gained through this project, it would be beneficial to try grid search for this task [5], [6].…”
Section: Discussion (mentioning)
confidence: 99%
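
A minimal sketch of how such a grid search over augmentation parameters could be set up (the parameter names, value ranges, and the evaluate callable below are illustrative assumptions, not taken from the cited work):

import itertools

def grid_search(evaluate, param_grid):
    # Exhaustively try every combination of augmentation settings and keep
    # the one whose validation score (returned by `evaluate`) is highest.
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical search space: dropout segment length (ms) and time-stretch factor.
param_grid = {
    "dropout_ms": [50, 100, 200],
    "stretch_factor": [0.9, 1.0, 1.1],
}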
“…3. Dropout : Short time segments from the audio sample are randomly replaced with 0 values (Bear et al., 2021).…”
Section: Methods (mentioning)
confidence: 99%
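
A minimal sketch of that kind of time-segment dropout on a raw waveform (the segment length and segment count below are assumed values, not parameters reported by Bear et al., 2021):

import numpy as np

def time_dropout(audio, sample_rate, segment_ms=100, n_segments=3, rng=None):
    # Randomly replace short time segments of a 1-D audio signal with zeros.
    rng = rng or np.random.default_rng()
    out = audio.copy()
    seg_len = int(sample_rate * segment_ms / 1000)
    for _ in range(n_segments):
        start = int(rng.integers(0, max(1, len(out) - seg_len)))
        out[start:start + seg_len] = 0.0
    return out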
“…4. Stretch : Time stretching is applied to the samples by resizing a random number of columns at a random position and then by resizing the image using a bilinear interpolation (Bear et al., 2021).…”
Section: Methods (mentioning)
confidence: 99%
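
Read literally, that operation works on the spectrogram image; a rough sketch under that assumption (the block width and stretch range are made up, and OpenCV's bilinear resize stands in for whatever interpolation routine the cited work actually used):

import numpy as np
import cv2

def spectrogram_time_stretch(spec, max_cols=20, rng=None):
    # Pick a random block of time columns, resize it to a random new width,
    # then resize the whole image back to its original width with bilinear
    # interpolation.
    rng = rng or np.random.default_rng()
    spec = np.asarray(spec, dtype=np.float32)
    n_bins, n_frames = spec.shape
    n_cols = int(rng.integers(1, max_cols + 1))
    start = int(rng.integers(0, max(1, n_frames - n_cols)))
    block = spec[:, start:start + n_cols]
    new_width = int(rng.integers(1, 2 * n_cols + 1))
    stretched = cv2.resize(block, (new_width, n_bins), interpolation=cv2.INTER_LINEAR)
    out = np.concatenate([spec[:, :start], stretched, spec[:, start + n_cols:]], axis=1)
    return cv2.resize(out, (n_frames, n_bins), interpolation=cv2.INTER_LINEAR)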