2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
DOI: 10.1109/asru.2017.8268915
Tackling unseen acoustic conditions in query-by-example search using time and frequency convolution for multilingual deep bottleneck features

Cited by 3 publications (2 citation statements); References 12 publications
“…Regarding the features used for query/utterance representation, Gaussian posteriorgrams are employed in [22, 29, 40, 41]; an i-vector-based approach to feature extraction is proposed in [42]; phone log-likelihood ratio-based features are used in [43]; posteriorgrams derived from various unsupervised, supervised, and semi-supervised tokenizers are employed in [44]; and posteriorgrams derived from a Gaussian mixture model (GMM) tokenizer, phoneme recognition, and acoustic segment modeling are used in [45]. Phoneme posteriorgrams have been widely used [34, 41, 46–54], as have bottleneck features [34, 55–60].…”
Section: Methods Based on Template Matching
confidence: 99%
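The template-matching methods referenced in this statement typically compare a spoken query against search utterances by aligning their per-frame feature sequences (posteriorgrams or bottleneck features) with dynamic time warping (DTW). As a rough illustration only, the following is a minimal DTW sketch over such sequences; the function name, the cosine local cost, and the step pattern are my own assumptions, not details taken from the cited papers:

```python
import numpy as np

def dtw_distance(query, utterance):
    """DTW distance between two frame-level feature sequences
    (e.g. posteriorgrams), shape (frames, dims), using cosine
    distance as the local frame cost. Illustrative sketch only."""
    n, m = len(query), len(utterance)
    # Local cost matrix: cosine distance between every frame pair.
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    u = utterance / np.linalg.norm(utterance, axis=1, keepdims=True)
    cost = 1.0 - q @ u.T
    # Accumulate along the cheapest warping path.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # insertion
                acc[i, j - 1],      # deletion
                acc[i - 1, j - 1],  # match
            )
    return acc[n, m]
```

In query-by-example search this distance (or a subsequence variant of it) would be computed between the query and sliding windows of each utterance; lower scores indicate likely matches.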
“…Finally, there are also works exploring other types of posteriorgrams [107–110]. In recent years, the use of bottleneck features extracted from DNNs has become popular [87, 92, 93, 111–115].…”
Section: Query-by-Example Spoken Term Detection
confidence: 99%