From the course: Understanding and Implementing the NIST AI Risk Management Framework (RMF)

Fair, with harmful bias managed: Section 3.7

- [Instructor] The final characteristic of a trustworthy AI system is fair, with harmful bias managed. This means an AI system should demonstrate equality and equity while protecting society from harmful discrimination. An AI system without harmful bias can still be unfair when the benefits of that system are not accessible to people with disabilities or socioeconomic disadvantages. Biases can appear in a system without conscious malintent, prejudice, or racism. NIST defines three categories of bias that must be identified and managed. Systemic biases are embedded in the organizations, institutions, and societies from which AI systems are created. Like the most entrenched and hidden part of an iceberg, these can be the most difficult to surface because they are part of the worldview and perceptions of those who impose them on the AI system. Human biases are typically shaped by the groups, societies, and systems from which people come. These biases are introduced throughout the AI…
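
As a minimal, hypothetical sketch of what identifying harmful bias can look like in practice (this example is not part of the course), the snippet below computes a simple group fairness check, the disparate impact ratio, which compares how often a model produces favorable outcomes for different demographic groups. The group labels, the data, and the four-fifths (0.8) rule-of-thumb threshold are illustrative assumptions, not NIST requirements.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (favorable) predictions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 means parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative data: a model favors 8 of 10 applicants in group A but only 4 of 10 in group B.
preds = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6
groups = ["A"] * 10 + ["B"] * 10
print(f"Disparate impact ratio: {disparate_impact_ratio(preds, groups):.2f}")  # 0.50, below the 0.8 rule of thumb

A ratio well below 1.0 is one signal that a system's benefits are not reaching groups equally and that further investigation into systemic, human, or statistical sources of that disparity is warranted.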
