Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 2015
DOI: 10.3115/v1/p15-2090

UNRAVEL—A Decipherment Toolkit

Abstract: In this paper we present the UNRAVEL toolkit. It implements many of the recently published decipherment methods, including decipherment of deterministic ciphers such as the ZODIAC-408 cipher and Part two of the BEALE ciphers, as well as decipherment of probabilistic ciphers and unsupervised training for machine translation. It also includes data and example configuration files, so that the previously published experiments are easy to reproduce.
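
To make the deterministic case concrete, the sketch below (plain Python for illustration, not UNRAVEL's own code) shows a 1:1 substitution cipher; decipherment reverses the encoding without knowing the key, by searching for the key whose decoding scores highest under a character language model.

    # Minimal sketch of a deterministic 1:1 substitution cipher
    # (illustration only, not UNRAVEL's own code).
    import string

    key = dict(zip(string.ascii_lowercase, "qwertyuiopasdfghjklzxcvbnm"))
    inverse_key = {c: p for p, c in key.items()}

    plaintext = "attack at dawn"
    ciphertext = "".join(key.get(ch, ch) for ch in plaintext)
    decoded = "".join(inverse_key.get(ch, ch) for ch in ciphertext)

    print(ciphertext)  # qzzqea qz rqvf
    print(decoded)     # attack at dawn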

Cited by 5 publications (8 citation statements)
References 7 publications
“…UNRAVEL (Nuhn et al., 2015) searches for a mapping of letters that maximizes the probability of the decipherment under an n-gram character language model. Partial key mappings are structured into a search tree, and a beam search is used to traverse the tree and find the most promising candidates.…”
Section: Computational Decipherment (mentioning, confidence: 99%)
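
The search described in this statement can be sketched as follows (a simplified illustration with a toy bigram LM standing in for the n-gram model; UNRAVEL's actual implementation differs in many details, e.g. rest cost estimation):

    # Simplified sketch of beam search over partial key mappings
    # (not the actual UNRAVEL code).
    import math
    from collections import Counter

    def bigram_lm(train_text):
        """Toy character-bigram LM with add-one smoothing."""
        vocab = sorted(set(train_text))
        counts = Counter(zip(train_text, train_text[1:]))
        totals = Counter(train_text[:-1])
        def logprob(a, b):
            return math.log((counts[(a, b)] + 1) / (totals[a] + len(vocab)))
        return logprob

    def partial_score(key, ciphertext, lm):
        """LM score over bigrams the partial key can already decode."""
        dec = [key.get(c) for c in ciphertext]
        return sum(lm(a, b) for a, b in zip(dec, dec[1:])
                   if a is not None and b is not None)

    def beam_search(ciphertext, plain_symbols, lm, beam_size=100):
        """Extend 1:1 partial keys one cipher symbol at a time."""
        order = [s for s, _ in Counter(ciphertext).most_common()]
        beam = [{}]
        for sym in order:
            extensions = [{**k, sym: p} for k in beam
                          for p in plain_symbols if p not in k.values()]
            extensions.sort(key=lambda k: partial_score(k, ciphertext, lm),
                            reverse=True)
            beam = extensions[:beam_size]  # histogram pruning per level
        return beam[0]  # most promising complete key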
“…Nuhn and colleagues (Nuhn, Schamper, and Ney 2013; Nuhn and Ney 2014; Nuhn, Schamper, and Ney 2015) showed that beam search can significantly improve the speed of EM-based decipherment, while providing comparable or even slightly better accuracy. Beam search prunes less-promising latent states by maintaining two constant-sized beams, one for the translation probabilities p(f|e) and one for the target bigram probabilities p(e1 e2), reducing the computational complexity to O(N·F).…”
Section: Beam Search (mentioning, confidence: 99%)
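
The two beams can be illustrated as simple pre-filters on the hypothesis space (a sketch of the idea only; the cited systems apply the pruning inside the full EM lattice, and the function names below are hypothetical):

    # Sketch of the two constant-sized beams used to prune EM-based
    # decipherment (hypothetical helper names).
    import heapq

    def lexical_beam(f, e_vocab, p_lex, size=5):
        """Keep the target words e with the highest p(f|e)."""
        return heapq.nlargest(size, e_vocab, key=lambda e: p_lex(f, e))

    def lm_beam(e1, e_vocab, p_bigram, size=50):
        """Keep the successor words e2 with the highest p(e1 e2)."""
        return heapq.nlargest(size, e_vocab, key=lambda e2: p_bigram(e1, e2))

In the E-step, expected counts are then accumulated only over hypotheses that survive both beams, which keeps the per-position work bounded by the constant beam sizes rather than the square of the target vocabulary.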
“…For further speedup, we applied per-position pruning with histogram size 50 and the preselection method of Nuhn and Ney (2014) with lexical beam size 5 and LM beam size 50. All our experiments were carried out with the UNRAVEL toolkit (Nuhn et al., 2015). Table 4 summarizes the results.…”
Section: Large Vocabulary Experiments (mentioning, confidence: 99%)
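
For illustration, the per-position histogram pruning mentioned in this statement amounts to the following (a sketch with hypothetical data structures; UNRAVEL's real internals and configuration syntax may differ):

    # Sketch of per-position histogram pruning with histogram size 50
    # (hypothetical data structures).
    def histogram_prune(hyps_by_position, histogram_size=50):
        """Keep the `histogram_size` best (score, key) pairs per position."""
        return {pos: sorted(hyps, reverse=True)[:histogram_size]
                for pos, hyps in hyps_by_position.items()}

    # Example: hypotheses are (log-probability, key) pairs; only the best
    # `histogram_size` per cipher position survive.
    pruned = histogram_prune({0: [(-12.3, "keyA"), (-15.9, "keyB")]})

The lexical beam size 5 and LM beam size 50 correspond to the two beams of the preselection method of Nuhn and Ney (2014) sketched above.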