1996
DOI: 10.1017/s1351324997001617

Text and speech translation by means of subsequential transducers

Abstract: The full paper explores the possibility of using Subsequential Transducers (SSTs), a finite-state model, in limited-domain translation tasks, both for text and speech input. A distinctive advantage of SSTs is that they can be efficiently learned from sets of input-output examples by means of OSTIA, the Onward Subsequential Transducer Inference Algorithm (Oncina et al. 1993). In this work a technique is proposed to increase the performance of OSTIA by reducing the asynchrony between the input and output…
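The abstract describes SSTs as deterministic machines that emit an output string with every input symbol and a final string when the input ends. The following Python sketch is only an illustration of that idea, not the paper's implementation; the class name, toy states and the small Spanish-English fragment are invented for the example, and the "delayed" output on the second transition hints at the input-output asynchrony the paper addresses.

# Minimal sketch of a subsequential transducer (SST); purely illustrative,
# not taken from the paper or its corpora.
class SubsequentialTransducer:
    def __init__(self, initial, transitions, state_output):
        self.initial = initial
        # transitions: (state, input_symbol) -> (next_state, output_string)
        self.transitions = transitions
        # state_output: state -> string emitted when the input ends in that state
        self.state_output = state_output

    def translate(self, input_symbols):
        state, pieces = self.initial, []
        for symbol in input_symbols:
            if (state, symbol) not in self.transitions:
                raise ValueError(f"no transition for {symbol!r} in state {state!r}")
            state, out = self.transitions[(state, symbol)]
            pieces.append(out)
        pieces.append(self.state_output.get(state, ""))
        return " ".join(p for p in pieces if p)

# Toy lexicon: deterministic, single left-to-right pass; the empty output on
# 'habitacion' delays the translation until 'doble' has been read (asynchrony).
sst = SubsequentialTransducer(
    initial="q0",
    transitions={
        ("q0", "una"): ("q1", "a"),
        ("q1", "habitacion"): ("q2", ""),
        ("q2", "doble"): ("q3", "double room"),
    },
    state_output={"q3": ""},
)
print(sst.translate(["una", "habitacion", "doble"]))  # -> a double room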

Cited by 12 publications (9 citation statements). References 7 publications.
“…It allowed, for instance, to understand the need for source and/or target language models in order to cope with insufficient training data and/or imperfect input text. All in all, very good results (TWER below 1%) were achieved for the MLT corpus using OSTIA-DR along with error-correcting smoothing (Vilar, Vidal, & Amengual, 1996; Vilar et al., 1996a). The application of OSTIA and OSTIA-DR to the more realistic EUTRANS task was first described in Vidal (1997).…”
Section: Text-input Translation Results
confidence: 99%
“…For the experiments reviewed in the next section, 8K Spanish-English sentence pairs (about 120K running words per language) were used for training. Testing was performed on another 10K sentences different from those used in training (Vilar et al., 1996a).…”
Section: TT2-EU
confidence: 99%
“…An important step in the theory of transducers was the development of the algorithm Ostia. Introduced in [29], Ostia was designed for language comprehension tasks [38]. A number of elaborations on the original algorithm have since arisen, many of them aimed at trying to circumvent the restriction to total functions that limited Ostia.…”
Section: Introduction
confidence: 99%
“…Finite-state models have been extensively applied to many aspects of language processing including speech recognition (Pereira and Riley, 1997), phonology (Kaplan and Kay, 1994), morphology (Koskenniemi, 1984), chunking (Abney, 1991; Bangalore and Joshi, 1999), parsing (Roche, 1999; Oflazer, 1999) and machine translation (Vilar et al., 1999; Bangalore and Riccardi, 2000). Finite-state models are attractive mechanisms for language processing since they (a) provide an efficient data structure for representing weighted ambiguous hypotheses, (b) are generally effective for decoding, and (c) are associated with a calculus for composing models which allows for straightforward integration of constraints from various levels of speech and language processing.…”
Section: Introduction
confidence: 99%
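To make the "calculus for composing models" mentioned in the statement above concrete, here is a hedged Python sketch of composing two unweighted transducers stored as plain dictionaries; the function name compose, the states, and the toy word-to-tag and tag-to-chunk mappings are assumptions for this example, and real finite-state toolkits additionally handle weights and epsilon transitions.

from collections import defaultdict

def compose(t1, t2):
    """Compose transducers given as dicts mapping (state, in_sym) to a set of
    (next_state, out_sym); the result reads t1's input and emits t2's output."""
    composed = defaultdict(set)
    for (s1, a), arcs1 in t1.items():
        for (q1, b) in arcs1:                  # t1 rewrites a as b
            for (s2, b_in), arcs2 in t2.items():
                if b_in != b:
                    continue
                for (q2, c) in arcs2:          # t2 rewrites b as c
                    composed[((s1, s2), a)].add(((q1, q2), c))
    return dict(composed)

# Toy usage: words -> part-of-speech tags, then tags -> chunk labels.
t1 = {("p0", "the"): {("p0", "DET")}, ("p0", "room"): {("p0", "NOUN")}}
t2 = {("q0", "DET"): {("q0", "B-NP")}, ("q0", "NOUN"): {("q0", "I-NP")}}
print(compose(t1, t2))  # the composed machine maps each word directly to a chunk label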