Making chatbots behave like real people is important for believability. Errors in general chatbots and in chatbots that follow a rough persona have been studied, but errors in chatbots that imitate real people have not been thoroughly investigated. We collected a large number of user interactions with a generation-based chatbot trained on large-scale dialogue data of a specific character, i.e., the "target person," and analyzed the errors related to that person. We found that person-specific errors can be divided into two types, errors in attributes and errors in relations, each of which can be further divided into two levels: self and other. We also investigated the correspondence with an existing taxonomy of errors and clarified the person-specific errors that should be addressed in future work.
Understanding the various kinds of information contained in user utterances is important for chat-oriented dialogue systems. However, no study has yet clarified the types of information that such systems should understand. With this purpose in mind, we first collected the information that humans perceive from each utterance (perceived information) in chat-oriented dialogue. We then categorized the types of perceived information. The types were evaluated on the basis of inter-annotator agreement, which was substantial and demonstrated the validity of our categorization. To the best of our knowledge, this study is the first attempt to clarify the types of information that a chat-oriented dialogue system should understand from varied user utterances.
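The inter-annotator agreement mentioned above is typically quantified with a chance-corrected statistic such as Cohen's kappa, where values between 0.61 and 0.80 are conventionally read as "substantial agreement" (Landis and Koch, 1977). The following is a minimal illustrative sketch, not the paper's actual evaluation code; the label names and annotation data are hypothetical.

```python
# Illustrative sketch: Cohen's kappa between two annotators labeling
# perceived-information types. Labels and data are hypothetical.
from collections import Counter


def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items the annotators label identically.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under chance, from each annotator's label distribution.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[lbl] * cb[lbl] for lbl in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)


# Hypothetical perceived-information labels from two annotators.
ann1 = ["fact", "desire", "fact", "emotion", "fact", "desire"]
ann2 = ["fact", "desire", "emotion", "emotion", "fact", "fact"]
kappa = cohens_kappa(ann1, ann2)  # observed 4/6, chance-corrected to ~0.478
```

In practice, libraries such as scikit-learn (`sklearn.metrics.cohen_kappa_score`) provide the same computation; the hand-rolled version above just makes the observed-versus-expected correction explicit.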