2020
DOI: 10.48550/arxiv.2005.01107
Preprint
Simplifying Paragraph-level Question Generation via Transformer Language Models

Cited by 1 publication (1 citation statement)
References 11 publications
“…During generation, we limit all outputs to a maximum sequence length of 100, preemptively terminating generation if it begins to exceed this maximum length. We do not use sampling during translation, nor increase the temperature parameter as this induces randomness (Lopez et al, 2020).…”
Section: Translation
Mentioning confidence: 99%
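The citation statement describes deterministic, length-capped decoding: outputs are limited to 100 tokens, with no sampling and no raised temperature. As a minimal illustrative sketch, assuming a Hugging Face Transformers seq2seq model (the citing paper does not name its toolkit, and "t5-small" below is only a placeholder), such a setup might look like:

# Illustrative sketch only; model name and toolkit are assumptions, not taken from the cited work.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # placeholder model for demonstration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer(
    "translate English to German: The cat sat on the mat.",
    return_tensors="pt",
)

# Greedy decoding: no sampling and no temperature change, so the output is deterministic.
# max_length=100 caps generation, mirroring the 100-token limit described above.
outputs = model.generate(
    **inputs,
    max_length=100,
    do_sample=False,  # disables sampling
    num_beams=1,      # plain greedy decoding
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because sampling is disabled, repeated runs on the same input yield the same translation, which matches the determinism the citing authors aim for.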