Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing 2014
DOI: 10.3115/v1/w14-6808
Problematic Situation Analysis and Automatic Recognition for Chinese Online Conversational System

Abstract: Automatic problematic situation recognition (PSR) is important for an online conversational system to constantly improve its performance. A PSR module is responsible for automatically identifying user dissatisfaction and then sending feedback to conversation managers. In this paper, we collect dialogues from a Chinese online chatbot, annotate the problematic situations and propose a framework to predict utterance-level problematic situations by integrating intent and sentiment factors. Different from previo…
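The abstract describes predicting utterance-level problematic situations by integrating intent and sentiment factors. A minimal Python sketch of that idea, assuming a lexicon-based sentiment count and a one-hot intent feature fed to a logistic regression, is given below; the lexicon, intent inventory, toy turns, and the featurize helper are hypothetical placeholders for illustration, not the paper's actual features, data, or model.

from sklearn.linear_model import LogisticRegression

# Hypothetical sentiment lexicon and intent inventory (placeholders).
NEGATIVE_WORDS = {"wrong", "useless", "not", "bad", "stupid"}
INTENTS = ["chitchat", "question", "complaint", "repeat_request"]

def featurize(utterance, intent):
    """Encode one user turn as [negative-word count] + one-hot intent."""
    tokens = utterance.lower().split()
    neg_count = sum(1.0 for t in tokens if t in NEGATIVE_WORDS)
    intent_onehot = [1.0 if intent == i else 0.0 for i in INTENTS]
    return [neg_count] + intent_onehot

# Toy annotated turns: (user utterance, predicted intent, problematic label).
train = [
    ("that answer is wrong and useless", "complaint", 1),
    ("you already said that, stupid bot", "repeat_request", 1),
    ("what's the weather like today", "question", 0),
    ("haha that's funny", "chitchat", 0),
]
X = [featurize(u, i) for u, i, _ in train]
y = [label for _, _, label in train]

# Train the binary problematic-situation classifier.
clf = LogisticRegression().fit(X, y)

# Flag a new user turn as problematic (1) or not (0).
print(clf.predict([featurize("this is not what I asked for", "complaint")]))

In the paper's setting, the intent and sentiment signals would come from Chinese-language components trained on the annotated chatbot dialogues rather than from this English toy lexicon.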

Cited by 13 publications (5 citation statements); references 10 publications.
“…Without using references, in [131], turn-level coherence is annotated for individual bot utterances and a binary classification model is built using the coherence label and a set of hand-crafted features including dialog acts, question types, predicate-argument structure, named entities, and dependency parsing structure. A model is developed in [132] by training on human annotations of problematic turns to automatically recognize such turns using intent and sentiment features. Using the annotations in the WOCHAT datasets [117], several binary classification models are compared in [133] for estimating whether a bot utterance is valid.…”
Section: Dialog System Evaluation Approaches
Citation type: mentioning, confidence: 99%
“…(Gandhe and Traum, 2016) propose a semi-automatic evaluation metric for dialogue coherence, similar to BLEU and ROUGE, based on 'wizard of Oz' type data. (Xiang et al., 2014) propose a framework to predict utterance-level problematic situations in a dataset of Chinese dialogues using intent and sentiment factors. Finally, (Higashinaka et al., 2014) train a classifier to distinguish user utterances from system-generated utterances using various dialogue features, such as dialogue acts, question types, and predicate-argument structures.…”
Section: Related Work
Citation type: mentioning, confidence: 99%
“…To this end, we have released the data to the public so that researchers in the field can test their ideas for detecting breakdowns. Although there have been approaches to detecting errors in open-domain conversation, the reported accuracies are not that high (Xiang et al., 2014; Higashinaka et al., 2014b). We believe our taxonomy will be helpful for conceptualizing the errors, and the forthcoming challenge will encourage a more detailed analysis of errors in chat-oriented dialogue systems.…”
Section: Summary and Future Work
Citation type: mentioning, confidence: 99%