2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9206897
How to Keep an Online Learning Chatbot From Being Corrupted

Cited by 5 publications (3 citation statements)
References 17 publications
“…One-shot learning, on the other hand, focuses on learning from a single example [126]. This method is especially beneficial for personalizing interactions or quickly incorporating user-specific preferences and contexts into the chatbot's response framework.…”
Section: Few-shot, Zero-shot and One-shot Learning
confidence: 99%
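The one-shot idea quoted above — adapting from a single example — can be illustrated with a minimal nearest-example matcher: each preference "class" is defined by exactly one stored vector, and a new input is assigned to whichever lone example it is most similar to. The vectors and labels below are invented toy data, not taken from the cited work.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def one_shot_classify(query, support):
    # `support` maps each label to its single stored example vector;
    # the query is assigned the label of the closest example.
    return max(support, key=lambda label: cosine(query, support[label]))

# Hypothetical user-preference embeddings, one example per class.
support = {
    "formal": [0.9, 0.1, 0.0],
    "casual": [0.1, 0.8, 0.3],
}
print(one_shot_classify([0.2, 0.7, 0.4], support))  # closest to "casual"
```

A real system would obtain the embeddings from a trained encoder; the single-example-per-class lookup itself is what makes the scheme "one-shot".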
“…The sentences are categorized into four classes: normal sentences, insulting sentences, negative sentences about a different person, or sentences that may indicate a dangerous situation. Chai et al [241] developed an offensive-response dataset, which consists of 110K input-response chat records in which the response is either appropriate or offensive. These databases can assist in training CAs, allowing the CAs to identify different sensitive situations to respond accordingly.…”
Section: Datasets For Social Assistance
confidence: 99%
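The filtering step described above — checking a candidate reply before the agent emits it — can be sketched with a toy gate. A real deployment would use a classifier trained on labelled appropriate/offensive response pairs such as the dataset discussed in the quote; the blocklist and function names here are placeholders for illustration only.

```python
# Placeholder blocklist standing in for a trained offensiveness classifier.
OFFENSIVE_TERMS = {"idiot", "stupid", "hate you"}

def is_offensive(response: str) -> bool:
    # Flag a response if it contains any blocklisted term (case-insensitive).
    text = response.lower()
    return any(term in text for term in OFFENSIVE_TERMS)

def safe_reply(candidate: str,
               fallback: str = "Sorry, let's talk about something else.") -> str:
    # Emit the candidate only if it passes the filter, else a safe fallback.
    return candidate if not is_offensive(candidate) else fallback

print(safe_reply("Hello there!"))   # passes the filter unchanged
print(safe_reply("you idiot"))      # replaced by the fallback
```

Gating candidate responses this way is one simple defence against the corruption problem the paper addresses: the learned model may drift, but offensive outputs are intercepted before reaching the user.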
“…With the widespread application of AI products, the abuse of AI technology without human values consideration has drawn wide concern in the last decade. Negative examples such as offensive remarks by conversational assistants [77] and DeepFake fraud cause losses to society and even harm the development of the AI industry and research. To alleviate this concern and build AI trustworthiness, it is necessary to incorporate alignment with human values over the whole lifecycle of an AI system, which requires efforts on not only product design but also AI technology development.…”
Section: Value Alignment
confidence: 99%