2022
DOI: 10.1007/978-981-19-7960-6_8

Dynamic Mask Curriculum Learning for Non-Autoregressive Neural Machine Translation

Cited by 5 publications (7 citation statements)
References 9 publications
“…Depending on the representation of molecules, the sequence can be a series of SMILES tokens or molecular edit actions on the molecule graph. By adopting the SMILES representation, a popular paradigm of existing retrosynthesis methods is to formulate retrosynthesis prediction as a sequence-to-sequence translation problem [73]. This translation process is usually token-by-token and autoregressive: each next reactant token is predicted from the input product SMILES and the reactant tokens already decoded.…”
Section: Single-step Retrosynthesis Methods
confidence: 99%
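The autoregressive decoding described in this citation statement can be made concrete with a short sketch. The following is a minimal greedy-decoding loop for SMILES-to-SMILES translation; `model`, `greedy_decode`, and the special tokens are illustrative assumptions, not names from the cited paper or any specific library.

```python
# Minimal sketch of greedy, token-by-token autoregressive decoding for
# SMILES-to-SMILES retrosynthesis. `model` is a hypothetical seq2seq
# network returning a probability distribution over the reactant
# vocabulary given the product and the partial output.

def greedy_decode(model, product_tokens, vocab, max_len=200):
    """Decode a reactant SMILES one token at a time."""
    decoded = [vocab["<bos>"]]                    # start-of-sequence marker
    for _ in range(max_len):
        # Condition on the product SMILES and everything decoded so far.
        probs = model(product_tokens, decoded)    # hypothetical call
        next_token = max(range(len(probs)), key=probs.__getitem__)  # argmax
        decoded.append(next_token)
        if next_token == vocab["<eos>"]:          # stop at end-of-sequence
            break
    return decoded[1:]                            # reactant token ids, no <bos>
```

Beam search is the usual refinement of this loop; greedy decoding is shown only because it makes the "next possible reactant token" step easiest to see.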
“…By adopting the SMILES representation, a popular paradigm of existing retrosynthesis methods is to formulate retrosynthesis prediction as a sequence-to-sequence translation problem [73]. This translation process is usually token-by-token and autoregressive: each next reactant token is predicted from the input product SMILES and the reactant tokens already decoded. Researchers first create a token dictionary with the molecules in the training set.…”
Section: Template-free Generation
confidence: 99%
“…The conversion of a complex sentence into a simpler one can be modelled as a translation task, making Neural Machine Translation (NMT) [20] a good fit for the problem. NMT with attention mechanisms selectively focuses on different parts of the input sentence during the translation process [12]. By doing so, the model can give appropriate emphasis to crucial elements, resulting in simplified sentences that preserve the original meaning.…”
Section: ATS Based on Neural Machine Translation (NMT) with Attention...
confidence: 99%
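The "selective focus" this statement refers to is the attention computation. Below is a minimal sketch of scaled dot-product attention over encoder states; the shapes and function name are illustrative assumptions (attention in NMT also comes in additive/Bahdanau form), not the cited work's exact formulation.

```python
import numpy as np

# Minimal sketch of attention in NMT: a softmax over source positions
# yields weights, and the context vector is the weighted sum of
# encoder states. Scaled dot-product form shown.

def attention(query, keys, values):
    """query: (d,); keys, values: (src_len, d). Returns a context vector (d,)."""
    scores = keys @ query / np.sqrt(query.shape[-1])  # one score per source token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over source positions
    return weights @ values                           # attention-weighted context
```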
“…The datasets comprise 200 summarized documents (100 for each court), addressing topics from different legal domains. The evaluated methods were two unsupervised models, i.e., MUSS (English and Portuguese versions), and two supervised approaches, namely Transformers and NMT + Attention [10,11,12]. Readability metrics, such as Flesch Reading Ease (FRE), were employed to assess the quality of the produced texts.…”
Section: Introduction
confidence: 99%
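Since FRE is named as the readability metric, a hedged sketch of its computation follows. It uses the standard English coefficients; the cited study works on Portuguese legal texts, where adapted coefficients are common, so both the constants and the syllable heuristic here are illustrative only.

```python
# Sketch of Flesch Reading Ease (FRE): higher scores mean easier text.
# FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    vowels = "aeiouy"
    groups, prev = 0, False
    for ch in word.lower():
        is_vowel = ch in vowels
        if is_vowel and not prev:
            groups += 1
        prev = is_vowel
    return max(groups, 1)

def flesch_reading_ease(num_sentences, words):
    num_words = len(words)
    num_syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (num_words / num_sentences)
            - 84.6 * (num_syllables / num_words))
```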
“…The purpose of validation data is to evaluate the model during the training process. It acts like a test set, but it is used within training [5].…”
Section: Data Preparation
confidence: 99%
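The role of validation data described here can be sketched in a few lines: the validation split is held out like a test set but consulted after each epoch, for example to drive early stopping. `train_epoch` and `evaluate` below are hypothetical callables standing in for a real training loop, not any specific library API.

```python
import random

# Minimal sketch of a held-out validation split used during training.

def split(data, val_fraction=0.1, seed=0):
    data = list(data)
    random.Random(seed).shuffle(data)
    k = int(len(data) * val_fraction)
    return data[k:], data[:k]                 # training split, validation split

def train(model, data, train_epoch, evaluate, epochs=10, patience=3):
    train_set, val_set = split(data)
    best, bad_epochs = float("inf"), 0
    for _ in range(epochs):
        train_epoch(model, train_set)         # fit on the training split only
        val_loss = evaluate(model, val_set)   # validation acts like a test set
        if val_loss < best:
            best, bad_epochs = val_loss, 0    # improvement: reset patience
        else:
            bad_epochs += 1
            if bad_epochs >= patience:        # early stopping on validation loss
                break
    return model
```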