2019
DOI: 10.1007/978-3-030-17294-7_3

Accountability for Practical Reasoning Agents

Abstract: Artificial intelligence has been increasing the autonomy of man-made artefacts such as software agents, self-driving vehicles and military drones. This increase in autonomy together with the ubiquity and impact of such artefacts in our daily lives have raised many concerns in society. Initiatives such as transparent and ethical AI aim to allay fears of a "free for all" future where amoral technology (or technology amorally designed) will replace humans with terrible consequences. We discuss the notion of accou…

Cited by 12 publications (8 citation statements)
References 37 publications
“…Last but not least, explainability (EX), i.e., the capability of a model to provide an explanation of its processes [9], is desirable in any autonomous system for reasons of trustworthiness [18], accountability [19], and responsibility [20]; this is particularly crucial in the MPC context for allowing users to know why a solution is suggested and its effects [8], and to align the differences between uploaders and co-owners [5].…”
Section: Requirements for MPC Solutions
confidence: 99%
“…In [17], Cranefield et al. discuss the notion of accountable autonomy within the context of practical reasoning agents. The authors start by listing some requirements that an agent should satisfy to be capable of performing practical reasoning on accountability.…”
Section: Related Work
confidence: 99%
“…An experience questionnaire (extracted and modified from GEQ [14]) was given to each participant at the end of each round. We clustered questions in three main groups (GEQ indices in brackets): Competence (10,15,17,21); Affect (9,22,24); and Challenge (23,26,33). We included four custom questions to evaluate game-specific criteria, such as how often they consulted the text rules and if they anticipated/agreed with agents' actions.…”
Section: User Study
confidence: 99%
“…Their approach is formally rigorous but is specialised for a specific set of traffic rules only and does not generalise beyond. Cranefield et al [9] propose that ideal accountable agents must: i) understand what is expected from them (from rules/obligations); ii) answer queries about their decision-making (being explainable); iii) carry out argumentative dialogues in which beliefs and plans are challenged and justified; iv) adapt their reasoning apparatuses or update their plans as a result of accountability dialogues; and v) take human values into account when reasoning.…”
Section: Introduction
confidence: 99%