As artificial intelligence (AI) continues to develop rapidly and influence applications affecting billions of lives, it is crucial to form AI red teams whose objective is to identify vulnerabilities in AI-enabled systems before deployment, reducing the likelihood and severity of real-world security risks. In response, we present a playbook that establishes a formalized and repeatable process for AI red teaming. Framing the process within a larger framework known as Build-Attack-Defend (BAD), we define a collaborative workflow between the AI-enabled system development and security teams, as well as other stakeholders. Complementing An AI Blue Team Playbook, this paper presents the historical context of red teaming, the red teaming process, and lessons learned, serving as a starting point for proactively identifying weaknesses and enhancing the overall performance, security, and resilience of AI-enabled systems.