Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.297

Case Study: Deontological Ethics in NLP

Abstract: Recent work in natural language processing (NLP) has focused on ethical challenges such as understanding and mitigating bias in data and algorithms; identifying objectionable content like hate speech, stereotypes and offensive language; and building frameworks for better system design and data handling practices. However, there has been little discussion about the ethical foundations that underlie these efforts. In this work, we study one ethical theory, namely deontological ethics, from the perspective of NLP…

Cited by 16 publications (11 citation statements)
References 79 publications
“…And since morally informed NLP systems perpetuate patterns that are present in their training data (i.e., data on huge troves of moral judgments made by people), they represent a descriptive approach to ethics. These morally informed AI systems do not derive their reports from a particular ethical theory's framework or moral axioms in a prescriptive manner [13], but instead reflect empirically observed patterns of judgments. Whenever this approach fails to encode the "right" patterns (as assessed by the AI system developers), prescriptive approaches are harnessed to correct crowdsourced data.…”
Section: Morally Informed AI Systems
confidence: 99%
“…There is a long-standing interest in the moral responsibility of AI (Dehghani et al., 2008; Alaieri and Vellino, 2016; Stephanidis et al., 2019; Zoshak and Dew, 2021; Prabhumoye et al., 2021; Schramowski et al., 2021). Work in Human-Computer Interaction (HCI) reveals that, before users feel they can trust a Conversational Agent, they will often probe it to identify the limitations which bound its abilities, competence (Luger and Sellen, 2016), and apparent integrity (Mayer et al., 1995; McKnight et al., 2002; Wang and Benbasat, 2016).…”
Section: Related Work
confidence: 99%
“…Hendrycks et al. (2021) argue that works on fairness, safety, prosocial behavior, and utility of machine learning systems in fact address parts of broader theories in normative ethics, such as the concept of justice, deontological ethics, virtue ethics, and utilitarianism. Card and Smith (2020) and Prabhumoye et al. (2021) show how NLP research and applications can be grounded in established ethical theories. Ziems et al. (2022) present a corpus annotated for moral "rules-of-thumb" to help explain why a chatbot's reply may be considered problematic under various moral assumptions.…”
Section: Ethics in Machine Learning and NLP
confidence: 99%