Useful automatized translation must be considered in a problem-solving setting, composed of a linguistic environment and a computer environment. We examine the facets of the problem which we believe to be essential, and try to give some paradigms along each of them. Those facets are the linguistic strategy, the programming tools, the treatment of semantics, the computer environment and the types of implementation.

Introduction

Machine Translation has been a recurring theme in applied linguistics and computer science since the early fifties. Having not yet attained the enviable status of a science, it is best considered as an art, in the same way as Knuth considers computer programming. Failure to recognize that MT must be treated in a problem-solving setting, that is, as a class of problems to be solved in various environments and according to various quality and cost criteria, has led and still leads to impassioned, antiscientific attitudes, ranging polemically between dreamy optimism and somber pessimism. Using the fairly large body of experience gained since the beginning of MT research, we try in this paper to extract the most essential facets of the problem and to propose some paradigms, along each of those facets, for usable computer systems which should appear in the near- or middle-term future.

As a matter of fact, the phrase "Machine Translation" is nowadays misleading and inadequate. We shall replace it by the more appropriate term "Automatized Translation" (of natural languages) and abbreviate it to AT. Part I tries to outline the problem situations in which AT can be considered. The following parts examine the different facets in turn: Part II is concerned with the linguistic strategy, Part III with the programming tools, Part IV with semantics, Part V with the computer environment and Part VI with possible types of implementation.

I - Applicability, quality and cost: a problem situation
The past

Automatized translation systems were first envisaged and developed for information-gathering purposes. The output was used by specialists to scan through a mass of documents, and, as the RADC user report shows [49], the users were quite satisfied. This is no longer the case with the growing need for the diffusion of information. Here, the final output must be a good translation. Second-generation systems were designed with this goal in mind, and with the assumption that good enough translations cannot now be obtained automatically on a large scale, except for very restricted domains (see METEO). Hence, a realistic strategy is to try to automate as much as possible of the translation process. This is the approach taken by GETA, TAUM, LOGOS, PROVOH and many others. Here, the problem is to answer existing needs by letting man and machine work together. Another approach comes from AI and is best exemplified in [9]. Here, the goal is more theoretical: how to simulate a human producing competent translations? We will argue that the methods developed in this framework are not yet candidates for immediate applicability...