2021 | Preprint | DOI: 10.31219/osf.io/hrpxy
Attentive Max Feature Map for Acoustic Scene Classification with Joint Learning considering the Abstraction of Classes

Abstract: The attention mechanism has been widely adopted in acoustic scene classification. However, we find that while attention exclusively emphasizes certain information, it tends to excessively discard other information, even as it improves performance. We propose a mechanism referred to as the attentive max feature map, which combines two effective techniques, attention and the max feature map, to further elaborate the attention mechanism and mitigate the abovementioned phenomenon. Furthermore, we explore various j…
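The abstract is truncated, but both named components are established techniques: the max feature map (MFM) keeps the element-wise maximum of two channel halves, while attention re-weights features rather than hard-selecting them. Below is a minimal PyTorch sketch of one plausible combination; the squeeze-and-excitation-style channel attention, the order of operations, and the reduction ratio are assumptions for illustration, not the paper's confirmed design.

```python
import torch
import torch.nn as nn

class AttentiveMaxFeatureMap(nn.Module):
    """Illustrative sketch combining channel attention with a max feature
    map (MFM). The attention design (squeeze-and-excitation style) and its
    placement before the MFM are assumptions; the paper may differ."""

    def __init__(self, in_channels: int, reduction: int = 8):
        super().__init__()
        assert in_channels % 2 == 0, "MFM halves the channel dimension"
        self.out_channels = in_channels // 2
        # Squeeze-and-excitation-style channel attention (assumed design).
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // reduction, in_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft re-weighting instead of hard selection, so less information
        # is discarded outright by the attention stage.
        x = x * self.attention(x)
        # Max feature map: split channels into two halves and keep the
        # element-wise maximum (competitive feature selection).
        a, b = torch.chunk(x, 2, dim=1)
        return torch.max(a, b)


# Usage: a (batch, 64, time, freq) spectrogram feature map -> 32 channels.
feats = torch.randn(4, 64, 100, 40)
amfm = AttentiveMaxFeatureMap(in_channels=64)
out = amfm(feats)  # shape: (4, 32, 100, 40)
```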

Cited by 2 publications (4 citation statements) | References 24 publications
“…Ma et al (2018) and Zannone et al (2019) use a partial variational autoencoder to predict the remaining features given the acquired ones, modeling feature importance and uncertainty, and combine it with an acquisition policy to maximize information gain. Shim, Hwang, and Yang (2018) treat this as a joint learning problem and train the classifier and the RL agent together to learn when and which feature to acquire, increasing classification accuracy while maintaining cost-efficiency. Li and Oliva (2021) reformulate the Markov Decision Process (MDP) and learn a generative surrogate model that captures inter-feature dependencies to aid the RL agent with intermediate rewards and auxiliary information.…”
Section: Related Work
confidence: 99%
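The joint learning of a classifier and an RL acquisition agent described in the citation above can be illustrated with a toy training loop. The REINFORCE-style policy gradient, the network sizes, the fixed acquisition budget, and the per-feature cost below are all assumptions for illustration; Shim, Hwang, and Yang (2018) may formulate the objective differently.

```python
import torch
import torch.nn as nn

# Toy joint-learning sketch: a classifier consumes partially observed
# features, and a policy decides which feature to acquire next. All sizes,
# the acquisition budget, and the cost term are illustrative assumptions.
N_FEATURES, N_CLASSES, COST = 8, 3, 0.05

classifier = nn.Sequential(nn.Linear(2 * N_FEATURES, 32), nn.ReLU(),
                           nn.Linear(32, N_CLASSES))
policy = nn.Sequential(nn.Linear(2 * N_FEATURES, 32), nn.ReLU(),
                       nn.Linear(32, N_FEATURES))
opt = torch.optim.Adam(list(classifier.parameters()) +
                       list(policy.parameters()), lr=1e-3)

def state(x, mask):
    # Observed values (zeros where unacquired) concatenated with the mask.
    return torch.cat([x * mask, mask], dim=-1)

for step in range(200):
    x = torch.randn(16, N_FEATURES)          # toy features
    y = (x[:, 0] > 0).long()                 # toy labels (classes 0 and 1)
    mask = torch.zeros_like(x)
    log_probs = []
    for _ in range(4):                       # fixed budget: acquire 4
        logits = policy(state(x, mask))
        logits = logits.masked_fill(mask.bool(), -1e9)  # no re-acquisition
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        mask = mask.scatter(1, action.unsqueeze(1), 1.0)
    out = classifier(state(x, mask))
    cls_loss = nn.functional.cross_entropy(out, y, reduction="none")
    # Reward: negative classification loss minus a per-feature cost.
    reward = (-cls_loss - COST * mask.sum(1)).detach()
    pg_loss = -(torch.stack(log_probs).sum(0) * reward).mean()
    loss = cls_loss.mean() + pg_loss         # joint objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```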
“…This is an oracular skyline, since it uses the test label for optimization, and is an approximate ceiling on the performance achievable under any interactive policy. Active Feature Acquisition (AFA): We also compare against the active feature acquisition policy based on the work of Shim, Hwang, and Yang (2018). We use image embeddings from a ResNet18 pre-trained on ImageNet as auxiliary information, and ground-truth concepts as the features that can be acquired; we train an RL policy to actively acquire features/concepts as described in Shim, Hwang, and Yang (2018).…”
Section: Baselines For Comparison
confidence: 99%