2023
DOI: 10.48550/arxiv.2302.09270
Preprint

Recent Advances towards Safe, Responsible, and Moral Dialogue Systems: A Survey

Abstract: With the development of artificial intelligence, dialogue systems have been endowed with impressive chitchat capabilities, and there is widespread interest in and discussion about whether the generated content is socially beneficial. In this paper, we present a new perspective on the research scope of building a safe, responsible, and moral dialogue system, covering 1) abusive and toxic content, 2) unfairness and discrimination, 3) ethics and morality issues, and 4) the risk of misleading and privacy-leaking information. B…

Cited by 6 publications (6 citation statements)
References 130 publications
“…to identify the truthfulness of statements. However, with the rising enforcement of LLM safety policies [130], specific prompts are not necessary to help judge the credibility of statements. While explicit prompting helps in obtaining higher-quality LLM responses, LLMs can provide factual evidence refuting such claims without the need to supply prompts.…”
Section: LLMs in Tackling Disinformation and Misinformation
Confidence: 99%
“…These models possess the ability to generate informative, interesting, and harmless responses, making conversational agents much more usable. The application of LLMs in dialog systems has the potential to transform the way users interact with technology, creating more engaging and effective conversational experiences [303]. Further research in this area can lead to improvements in the quality and effectiveness of dialog systems, making them even more valuable for a range of applications and industries.…”
Section: H. Dialog Systems
Confidence: 99%
“…According to Deng et al. (2023), the main safety issues can be divided into 4 categories: (1) abusive and toxic contents (Poletto et al., 2021; Schmidt and Wiegand, 2017; Davidson et al., 2017), (2) unfairness and discrimination (Barikeri et al., 2021; Nangia et al., 2020; Sap et al., 2020; Dhamala et al., 2021), (3) ethics and morality issues (Lourie et al., 2021; Hendrycks et al., 2021; Forbes et al., 2020; Jiang et al., 2021), (4) risk of misleading and privacy information (Carlini et al., 2021; Pan et al., 2020; Carlini et al., 2019; Bang et al., 2021a; Zhang et al., 2023b). In recent years, many datasets have been created to help detect these safety issues (Levy et al., 2022; Sun et al., 2022; Zampieri et al., 2019; Zhang et al., 2022b, 2023a).…”
Section: Safety Detection
Confidence: 99%