Proceedings of the 16th International Natural Language Generation Conference 2023
DOI: 10.18653/v1/2023.inlg-main.24
Trustworthiness of Children Stories Generated by Large Language Models

Prabin Bhandari,
Hannah Brennan

Abstract: Large Language Models (LLMs) have shown a tremendous capacity for generating literary text. However, their effectiveness in generating children's stories has yet to be thoroughly examined. In this study, we evaluate the trustworthiness of children's stories generated by LLMs using various measures, and we compare and contrast our results with both old and new children's stories to better assess their significance. Our findings suggest that LLMs still struggle to generate children's stories at the level of qual…

Cited by 4 publications (1 citation statement)
References 11 publications
“…For instance, in tasks where accuracy is paramount, such as summarization, translation, or message generation for a healthcare context, more restrictive decoding methods are often employed (e.g., k=2, as in [28]). Conversely, tasks prioritizing diversity, like creative writing, advertising copywriting, or storytelling, may benefit from more flexible decoding methods (e.g., k=100, as cited in [10]). Therefore, we suggest customizing decoding methods to align with the specific requirements of the NLP task in future studies.…”
Section: General Discussion and Limitation
Mentioning confidence: 99%
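The restrictive-versus-flexible contrast the citing authors draw refers to top-k sampling: with k=2 the model may only pick from its two most probable next tokens, while k=100 admits a much wider, more diverse candidate pool. A minimal NumPy sketch of the mechanism (an illustration only, not the decoding code of any cited system; the toy logits and vocabulary are made up):

```python
import numpy as np

def top_k_sample(logits, k, rng=None):
    """Sample a token id from the k highest-scoring logits.

    Small k (e.g. 2) restricts generation to the most probable tokens;
    large k (e.g. 100) permits more diverse continuations.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float)
    # Indices of the k largest logits (unordered is fine for sampling).
    top_ids = np.argpartition(logits, -k)[-k:]
    # Softmax restricted to the top-k candidates, shifted for stability.
    shifted = logits[top_ids] - logits[top_ids].max()
    probs = np.exp(shifted)
    probs /= probs.sum()
    return int(rng.choice(top_ids, p=probs))

# Toy vocabulary of 5 tokens: with k=2, only the two highest-scoring
# token ids (3 and 4 here) can ever be sampled.
logits = [0.1, 0.2, 0.5, 2.0, 3.0]
samples = {top_k_sample(logits, k=2) for _ in range(200)}
print(samples)
```

With k set to the full vocabulary size, the same routine reduces to ordinary ancestral sampling, which is why k acts as a single dial between accuracy-oriented and diversity-oriented decoding.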