2015
DOI: 10.1177/0018720815587803

Influencing Trust for Human–Automation Collaborative Scheduling of Multiple Unmanned Vehicles

Abstract: These results have important implications for personnel selection and training for futuristic multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be potentially prone to automation bias. Priming during training and regular priming throughout missions may be one potential method for overcoming this propensity to overtrust automation.

Cited by 23 publications (12 citation statements)
References 54 publications

“…For example, a study by Dzindolet, Pierce, Beck, and Dawe (2002) showed that an automated system for visual detection that was endorsed by a university was seen as more trustworthy than humans for the same task. Similar endorsement by experts has also been shown to enhance trust in automated decision systems in studies of unmanned vehicles (Clare, Cummings, & Repenning, 2015) and route planners (DeVries & Midden, 2008).…”
Section: Findings Suggesting the Implicit Association of Endorsed/Leg…
confidence: 84%
“…Relatedly, research on the technology acceptance model (Davis, Bagozzi, & Warshaw, 1989) has shown that perceived value of new information technology is enhanced when legitimate others endorse that technology (Agarwal & Prasad, 1997; Clare et al., 2015; DeVries & Midden, 2008; Macedo, 2017; Ranjan & Athalye, 2009; Venkatesh & Davis, 2000; Wang & Benbasat, 2005). For example, in a field study involving adoption of a new automated scheduling system, Venkatesh and Davis (2000, p. 201) found that those who perceived that "others who are important to me think that I should use the system" were more likely to perceive the new information technology as valuable and accept it.…”
Section: Findings Suggesting the Implicit Association of Endorsed/Leg…
confidence: 99%
“…Furthermore, the challenge of incorporating automation in one vehicle is replaced by the need to keep the human “in the loop” of the activities for all vehicles (Ruff et al., 2002). Careful system design can mitigate performance costs and can be achieved by: allowing flexibility in the design of function allocation (i.e., which tasks will be performed by the human and which will be performed by the system), the level of automation to be implemented within each function (Parasuraman et al., 2000; Chen et al., 2013; Gu et al., 2014), and the operators' level of trust in the automation (Clare et al., 2015). Eventually, when flight control becomes fully automated, operators will manipulate the payloads rather than fly the vehicles (e.g., Cooper and Goodrich, 2008).…”
Section: Introduction
confidence: 99%
“…The ability to characterize a human's spatial mental model, or an IA's underlying reasoning process, can be used to communicate that information among team members to better understand and predict teammates' behaviors. To address these current limitations, our current research investigates human and IA approaches to solving route planning problems to (1) quantify the similarity among human and IA planned routes, (2) test the dynamics of replanning in a mixed-initiative system, (3) understand the role of trust in those dynamics, and (4) develop a data-driven methodology for inferring the characteristics of spatial mental models from the routes.…”
confidence: 99%