2023
DOI: 10.48550/arxiv.2301.13294
Preprint
Adaptive Machine Translation with Large Language Models

Abstract: Consistency is a key requirement of high-quality translation. It is especially important to adhere to pre-approved terminology and corrected translations in domain-specific projects. Machine translation (MT) has achieved significant progress in the area of domain adaptation. However, real-time adaptation remains challenging. Large-scale language models (LLMs) have recently shown interesting capabilities of in-context learning, where they learn to replicate certain input-output text generation patterns, without f…

Cited by 14 publications (23 citation statements)
References 10 publications
“…In all the research works discussed above, the performance of GLLMs is merely satisfactory, not on par with or beyond that of commercial machine translation systems. However, some works [198]-[201], [204], [205], [207], [208] showed that it is possible to outperform commercial machine translation systems using GLLMs. For example, Jiao et al. [198] investigated the translation capabilities of GLLMs like ChatGPT and GPT-4 and compared their performance with commercial systems such as Google Translate, DeepL Translate and Tencent TranSmart.…”
Section: Machine Translation (mentioning, confidence: 99%)
“…These capabilities have led to LLMs exhibiting a certain degree of human-level intelligence, particularly in the areas of language understanding and generation (Bubeck et al., 2023; Wu et al., 2023; Moghaddam and Honey, 2023). Among numerous tasks, translation has emerged as a prominent area where LLMs have shown impressive capacity and competence (Jiao et al., 2023b; Agrawal et al., 2023; Zhang et al., 2023a; Vilar et al., 2022; Moslem et al., 2023; Pilault et al., 2023; Hendy et al., 2023; Zhu et al., 2023b; Jiao et al., 2023a; Karpinska and Iyyer, 2023; Peng et al., 2023; Bawden and Yvon, 2023). This progress harkens back to the long-term aspirations and dreams of earlier machine translation research in the 1960s (Bar-Hillel, 1960; Macklovitch, 1995): can LLMs employ a translation process similar to human translators?…”
Section: Introduction (mentioning, confidence: 99%)
“…Large-scale language models (LLMs) (Brown et al., 2020; Smith et al., 2022; Du et al., 2022; Rae et al., 2021; Thoppilan et al., 2022; Hoffmann et al., 2022; Chowdhery et al., 2022; Touvron et al., 2023) have shown an impressive ability for in-context learning: with only a few task-specific examples as demonstrations, LLMs are able to generate results for a new test input. Under the framework of in-context learning, LLMs have achieved promising results in a variety of NLP tasks, including machine translation (MT) (Vilar et al., 2022; Vidal et al., 2022; Moslem et al., 2023), question answering (QA) (Robinson et al., 2022; Lazaridou et al., 2022) and named entity extraction (NEE) (Chowdhery et al., 2022; Brown et al., 2020).…”
Section: Introduction (mentioning, confidence: 99%)
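The in-context learning setup described in the excerpt above can be sketched as a simple prompt-assembly step: a few source/target demonstration pairs are prepended to the new source sentence, and the model completes the final target line. This is a minimal illustration only, assuming a generic text-completion LLM; the function name, language labels, and example pairs are hypothetical, not taken from the paper.

```python
def build_fewshot_mt_prompt(examples, source_sentence,
                            src_lang="English", tgt_lang="French"):
    """Assemble a few-shot translation prompt: each demonstration is a
    (source, target) pair, followed by the new sentence to translate."""
    lines = []
    for src, tgt in examples:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    # The model is expected to complete the final, empty target line.
    lines.append(f"{src_lang}: {source_sentence}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)

# Hypothetical domain-specific demonstrations (e.g. sentences using
# pre-approved terminology from a translation memory).
demos = [
    ("The patient shows no adverse reaction.",
     "Le patient ne présente aucune réaction indésirable."),
]
prompt = build_fewshot_mt_prompt(demos, "The patient is stable.")
```

In a real-time adaptation setting, the demonstrations would typically be fuzzy matches retrieved from a translation memory for each new input, so the prompt changes per sentence without any model fine-tuning.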