2022
DOI: 10.1609/aaai.v36i11.21494
Consent as a Foundation for Responsible Autonomy

Abstract: This paper focuses on a dynamic aspect of responsible autonomy, namely, to make intelligent agents be responsible at run time. That is, it considers settings where decision making by agents impinges upon the outcomes perceived by other agents. For an agent to act responsibly, it must accommodate the desires and other attitudes of its users and, through other agents, of their users. The contribution of this paper is twofold. First, it provides a conceptual analysis of consent, its benefits and misuses, and how…

Cited by 5 publications (2 citation statements)
References 28 publications
“…Thus, the quality of the data employed in the value inference steps must be curated to guarantee that the process is fair and free of bias [26,38]. (5) Designing autonomous agents that align with their human users' values is an important step toward trustworthy AI [36,37]. To this end, the value inference processes must be legitimate [14], providing adequate channels for eliciting stakeholders' consent [37] and dissent [10].…”
Section: Research Challenges
confidence: 99%