Autonomous systems are often deployed in complex sociotechnical environments, such as public roads, where they must behave safely and securely. Unlike many traditionally engineered systems, autonomous systems are expected to behave predictably in varying "open world" environmental contexts that cannot be fully specified formally. As a result, assurance about autonomous systems requires us to develop new certification methods and mathematical tools that can bound the uncertainty engendered by these diverse deployment scenarios, rather than relying on static tools.

More specifically, autonomous systems increasingly use algorithms trained from data to predict and control behavior in previously unencountered contexts. The use of learning is a critical step toward engineering autonomy that can successfully operate in heterogeneous contexts, including complex sociotechnical settings, but current certification methods are insufficient to address the dynamic, adaptive nature of learning.

We propose the dynamic certification of autonomous systems: the iterative revision of permissible ⟨use, context⟩ pairs for a system, rather than prespecified tests that a system must pass to be certified. Dynamic certification offers the ability to "learn while certifying," thereby opening additional opportunities to shape the development of an autonomous technology. This comprehensive, exploratory testing, shaped by the proposed deployment, can enable iterative selection of appropriate contexts of use. More specifically, we propose dynamic certification and modeling involving three testing stages: early-phase testing, transitional testing, and confirmatory testing.
Movement between testing stages is not unidirectional; we can shift in any direction depending on our current state of knowledge and intended deployments. We describe these stages in more detail below, but the key is that they enable system designers and regulators both to learn about engineered autonomous systems and to assure that they will operate within the bounds of acceptable risk.
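To make the mechanics concrete, the core ideas above can be sketched in code: a certificate as a revisable set of permissible ⟨use, context⟩ pairs, together with a testing stage that may move in any direction. This is a minimal, purely illustrative sketch; the class, method names, and example pairs are our own inventions, not part of any proposed certification standard.

```python
# Illustrative sketch only: dynamic certification modeled as an iteratively
# revisable set of permissible (use, context) pairs plus a testing stage
# that can shift in any direction as knowledge and deployments change.

EARLY, TRANSITIONAL, CONFIRMATORY = "early-phase", "transitional", "confirmatory"
STAGES = {EARLY, TRANSITIONAL, CONFIRMATORY}

class DynamicCertificate:
    def __init__(self):
        self.permitted = set()   # currently permissible (use, context) pairs
        self.stage = EARLY       # begin with early-phase testing

    def permit(self, use, context):
        """Add a (use, context) pair once testing supports it."""
        self.permitted.add((use, context))

    def revoke(self, use, context):
        """Iterative revision: new evidence can withdraw a permission."""
        self.permitted.discard((use, context))

    def move_to(self, stage):
        """Stage movement is not unidirectional; any transition is allowed."""
        assert stage in STAGES
        self.stage = stage

    def is_permitted(self, use, context):
        return (use, context) in self.permitted

# Hypothetical usage: permissions expand, contract, and stages move backward.
cert = DynamicCertificate()
cert.permit("low-speed shuttle", "closed campus")
cert.move_to(TRANSITIONAL)
cert.permit("low-speed shuttle", "public road, daytime")
cert.revoke("low-speed shuttle", "public road, daytime")  # evidence prompts revision
cert.move_to(EARLY)  # shifting back to an earlier testing stage is permitted
```

The point of the sketch is that certification state is mutable in both directions: permissions can be granted and revoked, and testing stages can regress as well as advance, mirroring the "learn while certifying" loop described above.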