Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.374

A Corpus for Understanding and Generating Moral Stories

Abstract: Teaching morals is one of the most important purposes of storytelling. An essential ability for understanding and writing moral stories is bridging story plots and implied morals. Its challenges mainly lie in: (1) grasping knowledge about abstract concepts in morals, (2) capturing inter-event discourse relations in stories, and (3) aligning value preferences of stories and morals concerning good or bad behavior. In this paper, we propose two understanding tasks and two generation tasks to assess these abilitie…

Cited by 2 publications (20 citation statements)
References 15 publications (21 reference statements)
“…In this paper, we transform the traditional two-stage paradigm of "pre-training + fine-tuning" into a three-stage one by adding an intermediate step of continual pre-training, which is evaluated on two downstream tasks about moral understanding. We use STORAL-ZH [18], the Chinese part of STORAL [6], as the dataset for the target tasks. Furthermore, LongLM-base [21], which has been pre-trained on 120G of Chinese long novels, is selected as our model.…”
Section: Story
confidence: 99%
“…A range of tasks have been proposed for story understanding and generation, including story ending prediction [2], commonsense story generation [22], and story ending generation with fine-grained sentiment [23]. A variety of attributes are considered for better story understanding, such as storylines [3], emotions [4], styles [5], and morals [6]. Unlike storylines, which guide the writing of the story, emotions, which describe characters' states, and styles, which set the story's tone, moral understanding aims to uncover the implied, abstract theme behind concrete events, which makes it a more challenging task.…”
Section: Story Understanding
confidence: 99%