Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
DOI: 10.18653/v1/2022.iwslt-1.4
Locality-Sensitive Hashing for Long Context Neural Machine Translation

Cited by 2 publications (3 citation statements); references 0 publications.
“…In general, the papers discussed so far have looked at evaluating or improving performance of LLMs as translation systems. Petrick et al (2023) explore ways of fusing document-level language models with NMT systems, including LLMs. This provides an example of ways we might see knowledge from LLMs incorporated into dedicated MT systems, an avenue that could be explored in parallel to that of treating LLMs as MT systems.…”
Section: LLMs for Translation
Confidence: 99%
“…• LSH-trans (Petrick et al, 2022) is based on Reformer and uses locality-sensitive hashing to group tokens into clusters whose members attend to each other.…”
Section: Comparison Work
Confidence: 99%
“…On the other hand, the studies that target efficient sequence-to-sequence generation only verify their methods on normal sentence-level translation benchmarks like the WMT EN-DE test sets (Peng et al, 2021; Petrick et al, 2022; Ma et al, 2021). In our preliminary experiments, we find that almost all of this work drops severely in BLEU when dealing with real document translation tasks.…”
Section: Introduction
Confidence: 99%