Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), 2005.
DOI: 10.1109/icassp.2005.1416352
Classifying User Environment for Mobile Applications using Linear Autoencoding of Ambient Audio

Abstract: Many mobile devices and applications can act in context-sensitive ways, but rely on explicit human action for context awareness. It would be preferable if our devices were able to attain context awareness without human intervention. One important aspect of user context is environment. We present a novel method for classifying environment types based on acoustic signals. This method makes use of linear autoencoding neural networks, and is motivated by the observation that biological coding systems seem to be hea…
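The abstract stops short of the classification rule, so the following is only a minimal sketch of how per-class linear autoencoders can be used for environment classification: one autoencoder per environment, trained on frame-level audio features, with a test frame assigned to the class whose model reconstructs it with the lowest squared error. The 65-dimensional feature size, the bottleneck width, and the decision rule are assumptions for illustration, not details taken from the paper.

```python
"""Hypothetical sketch: per-class linear autoencoders for environment classification."""
import numpy as np


class LinearAutoencoder:
    def __init__(self, n_components: int):
        self.k = n_components

    def fit(self, X: np.ndarray) -> "LinearAutoencoder":
        # For squared-error loss, the optimal linear autoencoder spans the
        # top-k principal subspace of the centered data, so it can be fit
        # in closed form with an SVD.
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.k]
        return self

    def reconstruction_error(self, X: np.ndarray) -> np.ndarray:
        Z = (X - self.mean_) @ self.components_.T        # encode
        X_hat = Z @ self.components_ + self.mean_        # decode
        return ((X - X_hat) ** 2).sum(axis=1)            # per-frame squared error


def classify(frame_features: np.ndarray, models: dict) -> str:
    """Return the environment label whose autoencoder reconstructs the frame best."""
    errors = {label: m.reconstruction_error(frame_features[None, :])[0]
              for label, m in models.items()}
    return min(errors, key=errors.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for per-class feature matrices (frames x dims).
    train = {"street": rng.normal(0.0, 1.0, (500, 65)),
             "office": rng.normal(0.5, 0.8, (500, 65))}
    models = {label: LinearAutoencoder(n_components=16).fit(X)
              for label, X in train.items()}
    test_frame = rng.normal(0.5, 0.8, 65)
    print(classify(test_frame, models))
```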

Cited by 32 publications (15 citation statements); references 14 publications (16 reference statements).
“…The autoencoder and GMM achieved 77.9 % and 77.57 % accuracy, respectively, in the experiments reported in [10], while a hybrid system between them provided 80.05 % accuracy. MFCC and 11-state HMMs gave 91.5 % average accuracy for 14 classes in [11].…”
Section: Results
confidence: 95%
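The excerpt above mentions a hybrid of the autoencoder and GMM but reports only its accuracy; how the two models' scores were combined is not described here. Below is a minimal, hypothetical fusion sketch that z-normalizes each model's per-class scores and interpolates them with a tunable weight; the function name, the normalization, and the weight are all assumptions.

```python
"""Hypothetical sketch of score-level fusion between an autoencoder and a GMM."""
import numpy as np


def fuse_scores(ae_scores: dict, gmm_scores: dict, alpha: float = 0.5) -> str:
    """Pick the class maximizing a weighted sum of normalized scores.

    ae_scores  : class -> negative reconstruction error (higher is better)
    gmm_scores : class -> log-likelihood (higher is better)
    alpha      : weight on the autoencoder score (assumed tunable on held-out data)
    """
    labels = sorted(ae_scores)

    def znorm(scores):
        v = np.array([scores[c] for c in labels], dtype=float)
        return (v - v.mean()) / (v.std() + 1e-12)

    combined = alpha * znorm(ae_scores) + (1.0 - alpha) * znorm(gmm_scores)
    return labels[int(np.argmax(combined))]


if __name__ == "__main__":
    ae = {"street": -4.2, "office": -3.1, "car": -6.0}       # toy scores
    gmm = {"street": -120.0, "office": -95.0, "car": -140.0}
    print(fuse_scores(ae, gmm, alpha=0.4))                    # -> "office"
```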
“…Malkin and Waibel [10] introduced linear autoencoding neural networks for classifying environment. The autoencoder is a standard feed-forward neural network with a linear transform function.…”
Section: Classifier
confidence: 99%
“…Sixty-four dimensional MFCC, plus the spectral centroid were used as features in (Malkin and Waibel, 2005). They used forensic-application-like audio files, where both ambient, i.e., environmental sound and human speech were present.…”
Section: Introduction
confidence: 99%
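The statement above describes the feature set as 64 MFCCs plus the spectral centroid per frame. As a rough illustration, a hedged sketch of extracting such 65-dimensional frame features with librosa follows; the sampling rate, frame length, and hop size are librosa defaults and assumptions, not parameters taken from the paper.

```python
"""Hypothetical sketch: 64 MFCCs + spectral centroid per frame, via librosa."""
import librosa
import numpy as np


def extract_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Return a (frames, 65) matrix: 64 MFCCs and the spectral centroid per frame."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=64)          # (64, T)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)    # (1, T)
    return np.vstack([mfcc, centroid]).T                        # (T, 65)
```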