Whether and how Lethal Autonomous Weapons Systems (LAWS) can and should be regulated is intensely debated among governments, scholars, and campaigning activists. This article argues that the strategy of the Campaign to Stop Killer Robots to obtain a legally binding instrument regulating LAWS within the framework of the United Nations Convention on Certain Conventional Weapons is unlikely to be effective, as it is modeled after previous humanitarian disarmament successes rather than tailored to the specifics of the issue. This assessment rests on a systematic comparison of the autonomous weapons case with the cases of blinding laser weapons and anti-personnel landmines, using an analytical framework consisting of issue-related, actor-related, and institution-related campaign strategy components. Considering the differences between these three cases, the authors recommend that the LAWS campaign strategy be adjusted in terms of institutional choices, substance, and regulatory design.

KEYWORDS: Humanitarian disarmament; anti-personnel landmines; blinding laser weapons; Convention on Certain Conventional Weapons; artificial intelligence; lethal autonomous weapons systems

Humankind is on the cusp of the fourth industrial revolution. How we live, work, and communicate is changing. A key feature of this new epoch is automation, enabled by breakthroughs in artificial intelligence (AI). Machines today are able to perform more numerous and more complex tasks with minimal or no human assistance or supervision. Naturally, militaries around the globe also intend to benefit from this development. As a result, what has come to be