2020
DOI: 10.48550/arxiv.2004.14675
Preprint

End-to-End Neural Word Alignment Outperforms GIZA++

Cited by 1 publication (1 citation statement) · References: 0 publications

“…GIZA++ [17], [18] is an implementation of the IBM models. We used five iterations each for Model 1, the HMM model, Model 3, and Model 4 to train GIZA++, following the previous work of [19]. 3) AWE-SoME [20] is a neural word aligner based on multilingual BERT that can extract word alignments from contextualized word embeddings with and without fine-tuning on parallel data.…”
Section: Word Alignment Models (citation type: mentioning; confidence: 99%)
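The excerpt only names the tools, so as a rough illustration of how an embedding-based aligner of the kind described can extract word alignments from multilingual BERT, here is a minimal Python sketch. The layer choice (8), the 0.001 probability threshold, and the bidirectional-softmax intersection follow values reported for awesome-align; treat them as assumptions for illustration, not the cited system's actual code.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Minimal sketch: similarity-based word alignment from multilingual BERT
# embeddings. Layer 8 and the 0.001 threshold mirror values reported for
# awesome-align, but this is an illustrative approximation, not its code.
MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def word_vectors(words):
    """One vector per word: mean-pool its subword states from layer 8."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states[8][0]
    word_ids = enc.word_ids()
    pooled = []
    for w in range(len(words)):
        rows = [i for i, wid in enumerate(word_ids) if wid == w]
        pooled.append(hidden[rows].mean(dim=0))
    return torch.stack(pooled)

src = "das Haus ist klein".split()
tgt = "the house is small".split()
S, T = word_vectors(src), word_vectors(tgt)

# Similarity matrix, softmax-normalized in both directions; keep pairs
# that clear the threshold both ways (the intersection heuristic).
sim = S @ T.T
keep = (torch.softmax(sim, dim=1) > 0.001) & (torch.softmax(sim, dim=0) > 0.001)
alignment = [(i, j) for i in range(len(src)) for j in range(len(tgt)) if keep[i, j]]
print(alignment)  # e.g. [(0, 0), (1, 1), (2, 2), (3, 3)]
```

Intersecting the two softmax directions plays the same role as the symmetrization heuristics used with GIZA++'s asymmetric alignments: it keeps only links that both directions consider probable.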