Abstract. With the rapid advancement of powerful artificial intelligence, visionaries such as Stephen Hawking warn that we could be architecting our own extinction. Efforts such as the OpenAI project and the Ethics and Governance of Artificial Intelligence Fund are key lifeboats: they aim to proactively engineer ethics into our technological children and to develop other safety strategies that mitigate the likelihood of dangerous AI. Engineering a science of safe AI requires, as a foundational element, an approach to measurement that allows subsequent risk-analysis and mitigation methods to be evaluated with meaningful, linear, accurate, and precise metrics spanning the disciplines that contribute to risk variance in organizational, process, and team outcomes.