Health care organizations are grappling with how to discover and mitigate the risks of artificial intelligence (AI) and associated algorithms worsening racial, ethnic, and socioeconomic health disparities. The potential for patient harm has been demonstrated across a variety of conditions, including inequitable access to timely care due to biased pulse oximeters and inequitable access to care management due to label choice bias.1,2 Because the risk of perpetuating these harms is significant, multiple stakeholders aim to hold health care organizations and the health care industry accountable for harm and discrimination if it occurs.

The evidence review by Chin and colleagues3 developed a set of guiding principles to mitigate and prevent bias in health care algorithms. These guiding principles add precision and specificity beyond broad mandates such as the White House Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology AI Risk Management Framework. However, health care and technology organizations may face challenges in integrating these principles with other mandates and operationalizing them in practice.

To help address these issues, consortia such as the Health AI Partnership and the Coalition for Health AI aim to curate best practices for health care professionals to use AI safely, effectively, and equitably.4 Translating AI research into policy that then guides best practices clears only 1 hurdle. The other hurdles, which may be even larger, relate to the scarce people, processes, technology infrastructure, and operational supports required for health care organizations to successfully adopt and implement best practices. With this in mind, we highlight opportunities to close the gap between the principles surfaced by Chin and colleagues3 and their translation into practice.