2000
DOI: 10.1109/5.880087

Speech and language technologies for audio indexing and retrieval

Cited by 125 publications (64 citation statements)
References 21 publications
“…al., 1996b) to more recent systems based on large-scale broadcast news transcription systems working in conjunction with information retrieval systems (Thong et al., 2000; Makhoul et al., 2000).…”
Section: Spoken Document Retrieval
confidence: 99%
“…al., 1999), SpeechBot of Compaq (Thong et al., 2000 and Compaq, 2000), Rough'n Ready of BBN (Makhoul et al., 2000), and the Multimedia Document Retrieval project of Cambridge University (Tuerk et al.…”
Section: Spoken Document Retrieval
confidence: 99%
“…Current approaches have significant shortcomings. Most methods are either rule-based [5], or require significant amounts of manually labeled training data to achieve a reasonable level of performance [4]. The methods may identify a name, company, or location, but this is only a small part of the information that should be extracted; we would like to know further, for example, that a particular person is a politician and that a location is a vacation resort.…”
Section: Named Entity Extraction
confidence: 99%
“…While rule-based systems suffer significant degradations in going from mixed case to this style of text, hidden Markov model (HMM) approaches have proven to be more robust, suffering only a degradation of 4% to 5% in F-measure [4]. We will implement a variation of the HMM approach in which the output distributions are exponential models that weight various features of the words (numbers, titles, etc.…”
Section: Named Entity Extraction
confidence: 99%
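
The HMM-with-exponential-output-distributions approach described in the statement above can be illustrated with a small sketch. This is a hypothetical, minimal example, not the cited authors' implementation: the tag set, feature names, and weights are invented for illustration. Emission scores come from a log-linear model over surface word features (case, numbers, title words), and decoding uses standard Viterbi.

```python
import math

# Hypothetical tag set for illustration only.
TAGS = ["O", "PERSON", "LOCATION", "ORGANIZATION"]

def word_features(word):
    # Surface features of the kind mentioned in the quote (numbers, titles, case).
    return {
        "is_capitalized": word[:1].isupper(),
        "is_number": word.isdigit(),
        "is_title_word": word.lower() in {"mr.", "mrs.", "dr.", "president"},
        "is_all_caps": word.isupper() and len(word) > 1,
    }

def emission_logprob(word, tag, weights):
    # Exponential (log-linear) model: each (tag, feature) pair has a weight;
    # a tag's score is the sum of weights of the active features, normalized
    # over all tags with a softmax.
    feats = [f for f, on in word_features(word).items() if on]
    def score(t):
        return sum(weights.get((t, f), 0.0) for f in feats)
    log_z = math.log(sum(math.exp(score(t)) for t in TAGS))
    return score(tag) - log_z

def viterbi(words, trans_logprob, weights):
    # Standard Viterbi decoding over the tag sequence; trans_logprob maps
    # (previous_tag, current_tag) to a log transition score.
    scores = [{t: emission_logprob(words[0], t, weights) for t in TAGS}]
    backptr = []
    for word in words[1:]:
        row, ptr = {}, {}
        for t in TAGS:
            prev = max(TAGS, key=lambda p: scores[-1][p] + trans_logprob[(p, t)])
            row[t] = (scores[-1][prev] + trans_logprob[(prev, t)]
                      + emission_logprob(word, t, weights))
            ptr[t] = prev
        scores.append(row)
        backptr.append(ptr)
    best = [max(TAGS, key=lambda t: scores[-1][t])]
    for ptr in reversed(backptr):
        best.append(ptr[best[-1]])
    return list(reversed(best))
```

In the approach the quote describes, the weights would be learned from labeled training data and the feature set would be far richer; the sketch only shows how an exponential emission model can slot into Viterbi decoding.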
“…The real-time audio processing in the eViTAP system is performed by the BBN AudioIndexer system, described in detail in (Makhoul et al., 2000). The AudioIndexer system provides a wide range of real-time audio processing components, including automatic speech recognition, speaker segmentation and identification, topic classification, and named entity detection.…”
Section: Real-time Spoken Language Processing
confidence: 99%
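
As a rough illustration of how components like those listed in the statement above (speaker segmentation, speaker identification, speech recognition, topic classification, named entity detection) could be chained into an indexing pipeline, here is a minimal sketch. All class and function names are hypothetical placeholders and do not reflect the BBN AudioIndexer API.

```python
from dataclasses import dataclass, field

@dataclass
class IndexedSegment:
    start: float                 # segment start time in seconds
    end: float                   # segment end time in seconds
    speaker: str = "unknown"
    transcript: str = ""
    topics: list = field(default_factory=list)
    named_entities: list = field(default_factory=list)

class AudioIndexingPipeline:
    """Chains per-segment components; each stage enriches the segment record.
    The injected callables are stand-ins for real ASR, speaker-ID, topic, and
    named-entity components."""

    def __init__(self, segmenter, recognizer, speaker_id, topic_classifier, ne_detector):
        self.segmenter = segmenter
        self.recognizer = recognizer
        self.speaker_id = speaker_id
        self.topic_classifier = topic_classifier
        self.ne_detector = ne_detector

    def index(self, audio):
        segments = []
        for start, end, chunk in self.segmenter(audio):            # speaker-change segmentation
            seg = IndexedSegment(start=start, end=end)
            seg.speaker = self.speaker_id(chunk)                    # who is speaking
            seg.transcript = self.recognizer(chunk)                 # ASR transcript
            seg.topics = self.topic_classifier(seg.transcript)      # topic labels
            seg.named_entities = self.ne_detector(seg.transcript)   # names, places, organizations
            segments.append(seg)
        return segments                                             # records for a retrieval index
```

The design point the quote highlights is that all of these annotations are produced segment by segment in real time, so the per-segment record is what a downstream retrieval system indexes.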