Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military [1,2,3]. To adapt public policy, we need to better anticipate these advances [4,5]. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.
Survey experiments often manipulate the description of attributes in a hypothetical scenario, with the goal of learning about those attributes’ real-world effects. Such inferences rely on an underappreciated assumption: experimental conditions must be information equivalent (IE) with respect to background features of the scenario. IE is often violated because subjects, when presented with information about one attribute, update their beliefs about others too. Labeling a country “a democracy,” for example, affects subjects’ beliefs about the country’s geographic location. When IE is violated, the effect of the manipulation need not correspond to the quantity of interest (the effect of beliefs about the focal attribute). We formally define the IE assumption, relating it to the exclusion restriction in instrumental-variable analysis. We show how to predict IE violations ex ante and diagnose them ex post with placebo tests. We evaluate three strategies for achieving IE. Abstract encouragement is ineffective. Specifying background details reduces imbalance on the specified details and highly correlated details, but not others. Embedding a natural experiment in the scenario can reduce imbalance on all background beliefs, but raises other issues. We illustrate with four survey experiments, focusing on an extension of a prominent study of the democratic peace.
Justifications for war often invoke reputational or social aspirations: the need to protect national honor, status, reputation for resolve, credibility, and respect. Studies of these motives struggle with a variety of challenges: their primary empirical manifestation consists of beliefs, agents have incentives to misrepresent these beliefs, their logic is context specific, and they meld intrinsic and instrumental motives. To help overcome these challenges, this review offers a general conceptual framework that integrates their strategic, cultural, and psychological logics. We summarize important findings and open questions, including (a) whether leaders care about their reputations and status, (b) how to address the tension between instrumental and intrinsic motives, (c) how observers draw inferences, (d) to whom and across what contextual breadth these inferences apply, and (e) how these relate to domestic audience costs. Many important, tractable questions remain for future studies to answer.
Artificial-intelligence assistants and recommendation algorithms interact with billions of people every day, influencing lives in myriad ways, yet they still have little understanding of humans. Self-driving vehicles controlled by artificial intelligence (AI) are gaining mastery of their interactions with the natural world, but they are still novices when it comes to coordinating with other cars and pedestrians or collaborating with their human operators. The state of AI applications reflects that of the research field. It has long been steeped in a kind of methodological individualism. As is evident from introductory textbooks, the canonical AI problem is that of a solitary machine confronting a non-social environment. Historically, this was a sensible starting point. An AI agent, much like an infant, must first master a basic understanding of
The "democratic peace" (the inference that democracies rarely fight each other) is one of the most important and empirically robust findings in international relations (IR). This article surveys the statistical challenges to the democratic peace and critically analyzes a prominent recent critique (Gartzke 2007). Gartzke's claim that capitalist dynamics explain away the democratic peace relies on results problematically driven by (1) the censoring from the sample of observations containing certain communist countries or occurring before 1966, (2) the inclusion of regional controls, and (3) a misspecification of temporal controls. Analysis of these issues contributes to broader methodological debates and reveals novel characteristics of the democratic peace. Gartzke and other critics have contributed valuably to the study of IR; however, the democratic peace remains one of the most robust empirical associations in IR.