REPORT DOCUMENTATION PAGE
Form Approved OMB No. 0704-0188

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information.

1. AGENCY USE ONLY (Leave blank)
2. REPORT DATE: February 1998
3. REPORT TYPE AND DATES COVERED: Interim Report, April 1996 to February 1997
4. TITLE AND SUBTITLE: Foundations for an Empirically Determined Scale of Trust in Automated Systems
AUTHOR(S): Jiun-Yin Jian, Ann M. Bisantz, Colin G. Drury, James Llinas
PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Center
REPORT NUMBER: AFRL-HE-WP-TR-2000-0102
SUPPLEMENTARY NOTES
12a. DISTRIBUTION AVAILABILITY STATEMENT: Approved for public release; distribution is unlimited.
12b. DISTRIBUTION CODE
ABSTRACT (Maximum 200 words): One component in the successful use of automated systems is the extent to which people trust the automation to perform effectively. In order to understand the relationship between trust in computerized systems and the use of those systems, we need to be able to effectively measure trust. Although questionnaires regarding trust have been used in prior studies, these questionnaires were theoretically rather than empirically generated and did not distinguish between three potentially different types of trust: human-human trust, human-machine trust, and trust in general. A three-phased experiment, comprising a word elicitation study, a questionnaire study, and a paired comparison study, was performed in order to better understand similarities and differences in the concepts of trust and distrust, and between the different types of trust. Results indicated that trust and distrust can be considered opposites, rather than comprising different concepts. Components of trust, in terms of words related to trust, were similar across the three types of trust. Results obtained from a cluster analysis were used to identify 12 potential factors of trust between people and automated systems. These 12 factors were then used to develop a proposed scale to measure trust in automation.
Understanding how to display uncertain information effectively has become increasingly important as decision aids can provide operators with situational estimates and their associated uncertainty. This paper describes two studies in which degraded or blended icons were used to convey uncertainty regarding the identity of a radar contact as hostile or friendly. A classification study first showed that participants could sort, order, and rank icons from five sets intended to represent different levels of uncertainty. Three icon sets were selected for further study in an experiment in which participants had to identify the status of contacts as either hostile or friendly. Contacts and probabilistic estimates of their identities were depicted on a simulated radar screen in one of three ways: with degraded icons and probabilities, with non-degraded icons and probabilities, and with degraded icons only. Results showed that participants using displays with only degraded icons performed better on some measures, and as well on the other measures, than participants in the other tested conditions. These results are significant because they indicate that people can understand uncertainty conveyed in this manner, and thus that the use of distorted or degraded images may be a viable alternative for conveying situational uncertainty.