In this article we analyze a particular model of control among intelligent agents: non-absolute control. Non-absolute control involves a "supervisor" agent that issues orders to a "subordinate" agent. An example might be a human agent on Earth directing the activities of a Mars-based semi-autonomous vehicle. Both agents operate with essentially the same goals. The subordinate agent, however, is assumed to have access to some information that the supervisor does not have. The subordinate is thus expected to exercise its judgment in following orders (i.e., to follow the true intent of the supervisor, to the best of its ability). After presenting our model, we discuss the planning problem: how should a subordinate agent choose among alternative plans? Our solutions focus on evaluating the distance between candidate plans, and are appropriate to any scenario in which one agent wants to follow (as much as possible) another agent's plan.
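To make the plan-distance idea concrete, the following is a minimal sketch, not the article's actual measure: it assumes plans are represented as sequences of named actions and uses edit (Levenshtein) distance as a stand-in metric. All action names and the two helper functions are illustrative.

```python
from typing import Sequence

def edit_distance(a: Sequence[str], b: Sequence[str]) -> int:
    """Levenshtein distance between two action sequences
    (a hypothetical stand-in for a plan-distance measure)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete an action
                           dp[i][j - 1] + 1,        # insert an action
                           dp[i - 1][j - 1] + cost) # substitute an action
    return dp[m][n]

def choose_plan(supervisor_plan, feasible_plans):
    """Pick the feasible candidate closest to the supervisor's plan."""
    return min(feasible_plans,
               key=lambda p: edit_distance(supervisor_plan, p))

# Example: the rover discovers the ordered route is blocked by a crater
# the supervisor did not know about, so it must deviate minimally.
ordered = ["drive_north", "cross_plain", "sample_rock"]
feasible = [
    ["drive_north", "detour_east", "cross_ridge", "sample_rock"],
    ["return_to_base"],
]
print(choose_plan(ordered, feasible))  # the detour plan: closest to intent
```

Under this toy metric, the subordinate selects the executable plan that deviates least from the supervisor's original orders, which matches the intuition of following the supervisor's intent as closely as circumstances allow.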