Just as our interactions with other people are shaped by our concepts about their beliefs, desires, and goals (i.e., "theory of mind"), our interactions with intelligent technologies such as robots are shaped by our concepts about their internal operations. Multiple studies have demonstrated that people attribute anthropomorphic features to technological agents in certain contexts, but researchers remain divided on how these attributions arise: What default assumptions do people make about the internal operations of intelligent technology, and what events or additional information cause us to alter those default assumptions? This article explores these open questions and some of their implications for law and policy. First, we review psychological research exploring people's attributions of agency, with particular focus on attributions to technological entities. Next, we define and describe one popular account of this research: a "promiscuous agency" account that assumes a reflexive tendency to broadly attribute humanlike properties to technological agents. We then summarize mounting evidence that people are often more cautious in attributing human properties than the promiscuous agency account suggests. We seek to integrate the mounting evidence for a "selective agency" account with the promiscuous agency account through the transition model of agency. Finally, we explore how selective agency, promiscuous agency, and the transition model relate to a sample of robotics law and policy issues. We address, in turn, issues related to Fourth Amendment protection, copyright law, statutory and regulatory interpretation, and negligence litigation, identifying specific implications of the transition model of agency for each issue.

Social interaction is a foundational component of the human experience, so it is unsurprising that a wealth of psychological research explores how people think about and behave toward others.
Historically, this research has focused on thoughts and behavior toward other people. Yet, in order to successfully navigate our world, we must interact with a multitude of entities that are not people. In these interactions, it is helpful, if not essential, for us to distinguish things that are capable of thinking and engaging in goal-directed behavior from those that are not.

As demonstrated by Woodward (1998), we develop the ability to distinguish goal-driven agents from non-goal-driven objects early in life. In Woodward's experiments, infants repeatedly observed either a human actor's hand or an inanimate stick reaching for one of two toys (a bear and a ball) on a stage. After enough repetitions, infants habituated to the scene, meaning their response (measured in looking time) decreased until reaching a minimum. After habituation, the locations of the two toys were swapped for a test trial. On the test trial, the human hand or stick either reached to the same location for a different toy or to a different location for the same toy. Nine-month-old infants looked longer (indicating surprise) when the human hand...