NIST contributes to the research, standards, evaluation, and data required to advance the development and use of trustworthy artificial intelligence (AI) to address economic, social, and national security challenges and opportunities. Working with the AI community, NIST has identified the following technical characteristics needed to cultivate trust in AI systems: accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and the mitigation of harmful bias. Mitigating the risk that arises from bias in AI-based products and systems is a critical but still insufficiently defined building block of trustworthiness. This report proposes a strategy for managing AI bias and describes types of bias that may be found in AI technologies and systems. The proposal is intended as a step towards consensus standards and a risk-based framework for trustworthy and responsible AI. The document, which also contains an alphabetical glossary defining commonly occurring biases in AI, contributes to a fuller description and understanding of the challenge of harmful bias and ways to manage its presence in AI systems.
Keywords: bias, trustworthiness, AI safety, AI lifecycle, AI development.
Audience
The main audience for this document is researchers and practitioners in the field of trustworthy and responsible artificial intelligence. Researchers will find this document useful for understanding the challenge of bias in AI, and as an initial step toward the development of standards and a risk framework for building and using trustworthy AI systems. Practitioners will benefit by gaining an understanding of bias in the use of AI systems.
Trademark Information
All trademarks and registered trademarks belong to their respective organizations.
Note to Reviewers
As described throughout this report, one goal for NIST's work in trustworthy AI is the development of a risk management framework and accompanying standards. To make the necessary progress towards that goal, NIST intends to carry out a variety of activities in 2021 and 2022 in each of the core building blocks of trustworthy AI (accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and mitigation of harmful bias). This will require a concerted effort, drawing upon experts from within NIST and external stakeholders. NIST seeks collaborative feedback from members of the research, industry, and practitioner communities throughout this process. All interested parties are encouraged to submit comments about this draft report, and about the types of activities and events that would be helpful, via the public comment process described on page 3 of this document. There will also be opportunities to engage in discussions about, and contribute to, the development of key practices and tools to manage bias in AI. Please look for announcements of webinars, calls for position papers, and requests for comment on NIST document(s).