Towards a Standard for Identifying and Managing Bias in Artificial Intelligence
Date
2022-03
Author
Schwartz, Reva
Vassilev, Apostol
Greene, Kristen
Perine, Lori
Burt, Andrew
Hall, Patrick; NIST
Abstract
This document addresses the challenge of bias in artificial intelligence (AI) systems and its impact on public trust. It takes a socio-technical perspective, recognizing systemic, statistical, and human factors as sources of harm. Because current remedies focus largely on computational factors, the document calls for a broader approach that connects practice to societal values. It outlines the stakes and challenges, identifies categories of bias, and introduces preliminary guidance for mitigation, emphasizing transparency, datasets, testing, human factors, and the operationalization of values. Acknowledging that zero risk of bias is unattainable, the National Institute of Standards and Technology (NIST) aims to develop flexible methods and governance practices, and will contribute to this field by measuring bias, creating guidance, and fostering ongoing discussion with stakeholders.