Published: 2013
DOI: 10.1186/1687-4722-2013-23

Query-by-Example Spoken Term Detection ALBAYZIN 2012 evaluation: overview, systems, results, and discussion

Abstract: Query-by-Example Spoken Term Detection (QbE STD) aims at retrieving data from a speech data repository given an acoustic query containing the term of interest as input. It has been receiving increasing interest due to the high volume of information stored in audio and audiovisual formats. QbE STD differs from automatic speech recognition (ASR) and keyword spotting (KWS)/spoken term detection (STD), since ASR is interested in all the terms/words that appear in the speech signal and KWS/STD relies on a textual …

Cited by 14 publications (12 citation statements); References: 28 publications.
“…These evaluation campaigns provide an objective mechanism to compare different systems and are a powerful way to promote research on different speech technologies [56][57][58][59][60][61][62][63].…”
Section: Motivation and Organization of This Paper (mentioning, confidence: 99%)
“…In the first edition (held in 2012), we organized three different tasks, named query-by-example spoken term detection, keyword spotting, and spoken term detection. In the first evaluation, however, most participants only submitted systems to the query-by-example STD evaluation [37] and only one participant submitted a system for STD and keyword spotting tasks. In this second edition (held in 2014), there was an additional task, named query-by-example spoken document retrieval.…”
Section: Lessons Learned (mentioning, confidence: 99%)
“…This campaign is an internationally open set of evaluations supported by the Spanish Network of Speech Technologies (RTTH [32]) and the ISCA Special Interest Group on Iberian Languages (SIG-IL [33]), which have been held every 2 years since 2006. The evaluation campaigns provide an objective mechanism to compare different systems and are a powerful way to promote research on different speech technologies (e.g., speech segmentation [34], speaker diarization [35], language recognition [36], query-by-example spoken term detection [37], and speech synthesis [38] in the ALBAYZIN 2010 and 2012 evaluation campaigns). This year, this campaign has been held during the IberSPEECH 2014 conference [39].…”
Section: Introduction (mentioning, confidence: 99%)
“…In 2012, the first QbE STD evaluation dealing with Spanish was organized in the context of the ALBAYZIN 2012 Evaluation campaign. The success of this first evaluation [44] encouraged us to organize a new QbE STD evaluation for the ALBAYZIN 2014 Evaluation campaign aiming at evaluating the progress in this technology for Spanish. The second ALBAYZIN QbE STD evaluation incorporates new and more difficult queries (i.e., multi-word and foreign queries).…”
Section: Introduction (mentioning, confidence: 99%)
“…These campaigns are internationally open sets of evaluations supported by the Spanish Network of Speech Technologies (RTTH) 1 and the ISCA Special Interest Group on Iberian Languages (SIG-IL) 2 held every 2 years from 2006. The evaluation campaigns provide an objective mechanism to compare different systems and promote research in different speech technologies such as audio segmentation [41], speaker diarization [42], language recognition [43], query-by-example spoken term detection [44], and speech synthesis [45].…”
Section: Introduction (mentioning, confidence: 99%)