2017
DOI: 10.1007/s10462-017-9560-8

Adjustable autonomy: a systematic literature review

Cited by 48 publications (24 citation statements)
References 56 publications
“…Research on AI autonomy is diverse and involves, for example, the autonomy of robots (Noorman and Johnson 2014), human-robot interactions (Goodrich and Schultz 2007), or the coordination of several autonomous agents (Yan et al 2013). Of particular concern in relation to this principle is research on trust in autonomous systems such as autonomous vehicles (Schaefer et al 2016; Stormont 2008), as well as research on adjustable autonomy, which refers to agents dynamically changing their autonomy and transferring it to other entities (Mostafa et al 2019). For organizations, this principle implies that they should, for example, consider implementing proper oversight mechanisms (e.g., keeping the human-in-the-loop) to ensure autonomy when embedding AI into their electronic services and products.…”
Section: Autonomy (mentioning)
confidence: 99%
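The adjustable autonomy described in the statement above — an agent dynamically changing its autonomy level and transferring control to another entity such as a human operator — can be sketched minimally. All names, levels, and thresholds below are illustrative assumptions, not taken from Mostafa et al. (2019):

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative autonomy levels, from full human control to full autonomy."""
    TELEOPERATED = 0   # human issues every command
    SUPERVISED = 1     # agent acts, but a human can veto (human-in-the-loop)
    AUTONOMOUS = 2     # agent acts without oversight

class AdjustableAgent:
    """Toy agent that lowers its own autonomy when its confidence drops,
    transferring control back toward the human operator."""

    def __init__(self, threshold: float = 0.7):
        self.level = AutonomyLevel.AUTONOMOUS
        self.threshold = threshold

    def adjust(self, confidence: float) -> AutonomyLevel:
        # Simple hypothetical policy: high confidence -> act autonomously,
        # mid confidence -> act under supervision, low -> hand over control.
        if confidence >= self.threshold:
            self.level = AutonomyLevel.AUTONOMOUS
        elif confidence >= self.threshold / 2:
            self.level = AutonomyLevel.SUPERVISED
        else:
            self.level = AutonomyLevel.TELEOPERATED
        return self.level

agent = AdjustableAgent(threshold=0.7)
print(agent.adjust(0.9))  # AutonomyLevel.AUTONOMOUS
print(agent.adjust(0.5))  # AutonomyLevel.SUPERVISED
print(agent.adjust(0.1))  # AutonomyLevel.TELEOPERATED
```

The point of the sketch is only that the autonomy level is a runtime variable the agent itself adjusts, rather than a fixed design-time property.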
“…[104]. This method can be used to model human behaviors and their effects on others [105]. Social scientists have begun to convert social theories to computer programs [106].…”
Section: B Development Of a Configuration Framework To Generate An E… (mentioning)
confidence: 99%
“…whereas an extensive survey is presented by Mostafa et al (2019). Bruemmer et al (2005), Leeper et al (2012), Chen et al (2013), and Muszynski et al (2012) propose teleoperation systems with different LOA for control of ground-based robots from classical egocentric and exocentric views.…”
Section: Levels Of Automation and Approaches For Control (mentioning)
confidence: 99%