Abstract: Adjustable autonomy refers to entities dynamically varying their own autonomy, transferring decision-making control to other entities (typically agents transferring control to human users) in key situations. Determining whether and when such transfers of control should occur is arguably the fundamental research problem in adjustable autonomy. Previous work has investigated various approaches to this problem but has often focused on individual agent-human interactions. Unfortunately, …
“…Indeed, the increased interest in personal software assistants and other software agents [1,13,22] to automate routine tasks in offices, in auctions and e-commerce, at home, and in other spheres of daily activity has led to increased concern about privacy. While such software agents need to use private user information to conduct business on behalf of users, this wealth of private information in the possession of software agents is a significant concern for users.…”
It is critical that agents deployed in real-world settings, such as businesses, offices, universities and research laboratories, protect their individual users' privacy when interacting with other entities. Indeed, privacy is recognized as a key motivating factor in the design of several multiagent algorithms, such as in distributed constraint reasoning (including algorithms for both distributed constraint optimization (DCOP) and distributed constraint satisfaction (DisCSP)), and researchers have begun to propose metrics for analyzing privacy loss in such multiagent algorithms. Unfortunately, a general quantitative framework to compare these existing metrics for privacy loss, or to identify dimensions along which to construct new metrics, is currently lacking. This paper presents three key contributions to address this shortcoming. First, the paper presents VPS (Valuations of Possible States), a general quantitative framework to express, analyze and compare existing metrics of privacy loss. Based on a state-space model, VPS is shown to capture various existing measures of privacy created for specific DisCSP domains. The utility of VPS is further illustrated through analysis of privacy loss in DCOP algorithms when such algorithms are used by personal assistant agents to schedule meetings among users. In addition, VPS helps identify dimensions along which to classify and construct new privacy metrics, and it also supports their quantitative comparison. Second, the paper presents key inference rules that may be used in the analysis of privacy loss in DCOP algorithms under different assumptions. Third, detailed experiments based on the VPS-driven analysis lead to the following key results: (i) decentralization by itself does not provide superior protection of privacy in DisCSP/DCOP algorithms when compared with centralization; instead, privacy protection also requires the presence of uncertainty about agents' knowledge of the constraint graph; (ii) one needs to carefully examine the metrics chosen to measure privacy loss, since the qualitative properties of privacy loss, and hence the conclusions that can be drawn about an algorithm, can vary widely based on the metric chosen. This paper should thus serve as a call to arms for further privacy research, particularly within the DisCSP/DCOP arena.
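The VPS framework values the set of possible private states that other agents can still consider feasible; one concrete valuation such a state-space model can express is the entropy of an observer's distribution over those states, with privacy loss measured as the entropy reduction caused by an algorithm's messages. The sketch below is illustrative only (the function names, the four-state meeting-slot example, and the probabilities are assumptions, not taken from the paper):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a distribution over possible states."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def privacy_loss(prior, posterior):
    """Privacy loss as entropy reduction over an agent's possible states --
    one of several valuation functions a VPS-style framework could express."""
    return entropy(prior) - entropy(posterior)

# Example: agent A's private meeting slot is one of 4 possible states.
prior = [0.25, 0.25, 0.25, 0.25]      # observers initially know nothing
posterior = [0.5, 0.5, 0.0, 0.0]      # two states ruled out by exchanged messages
print(privacy_loss(prior, posterior)) # one bit of privacy lost
```

Different valuation functions over the same possible-state sets can rank algorithms differently, which is consistent with the paper's finding that conclusions vary widely with the chosen metric.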
“…Much less research has been published in the area of human-agent teaming, although specific approaches have been explored, including interface agents, mixed-initiative systems and collaboration theory. Some research (Scerri et al., 2002; Tambe et al., 2000) has successfully adapted principles of agent-agent teamwork to human-agent interaction in various settings (Bradshaw et al., 2002a).…”
“…For a robot to simply transfer responsibility to a human in a fixed set of situations can lead to problems when the human is unavailable or lacks the skill or time to lend assistance [1]. To get around this, transfer-of-control strategies are needed to allow robots to choose when to seek human assistance, perhaps even delaying [2] or waiting in a degraded state [3]. The neglect tolerance model (cf. [4]) offers a standard approach for describing human control of multiple unmanned vehicles (UVs) performing independent tasks.…”
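The neglect tolerance model referenced above is commonly summarized by a fan-out equation: if a UV can be safely neglected for a neglect time NT between interactions, and each interaction takes interaction time IT, then one operator can manage roughly (NT + IT) / IT independent UVs. A minimal sketch (the function name and the sample times are illustrative assumptions):

```python
def fan_out(neglect_time, interaction_time):
    """Rough upper bound on the number of independent UVs one operator
    can manage under the standard neglect-tolerance fan-out equation:
    FO = (NT + IT) / IT."""
    return (neglect_time + interaction_time) / interaction_time

# Example: a UV tolerates 45 s of neglect and needs 15 s of interaction.
print(fan_out(45.0, 15.0))  # -> 4.0
```

This bound assumes independent tasks and homogeneous UVs; transfer-of-control strategies matter precisely because real operators are sometimes unavailable within NT.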
This paper presents a queueing model that addresses robot self-assessment in human-robot interaction systems. We build the model on a game-theoretic queueing approach and analyze four issues: 1) individual differences in operator skills and capabilities, 2) differences in the difficulty of presented tasks, 3) the trade-off between human interaction and performance, and 4) the impact of task heterogeneity on optimal service decision-making and system performance. The subsequent analytical and numerical exploration helps to clarify how the decentralized decision-making scheme is affected by various service environments.
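As a rough illustration of the queueing trade-off described above, a textbook M/M/1 model treats the human operator as a single server for robot assistance requests: higher utilization means fewer idle operator periods but longer robot waiting times. The sketch below is generic queueing theory under standard Poisson-arrival/exponential-service assumptions, not the paper's game-theoretic model (the function name and rates are illustrative):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics for one operator serving robot
    assistance requests (illustrative, not the paper's model)."""
    assert arrival_rate < service_rate, "queue must be stable (lambda < mu)"
    rho = arrival_rate / service_rate        # operator utilization
    num_in_system = rho / (1 - rho)          # mean requests in system
    wait = 1 / (service_rate - arrival_rate) # mean time waiting + in service
    return rho, num_in_system, wait

# Example: 2 requests/min arrive; the operator handles 5 requests/min.
rho, L, W = mm1_metrics(arrival_rate=2.0, service_rate=5.0)
# rho = 0.4, L ~ 0.667 requests, W ~ 0.333 min per request
```

Heterogeneous tasks and operator skill differences, as studied in the paper, would vary `service_rate` per task/operator pair, which is exactly where a decentralized decision-making scheme becomes non-trivial.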