From the course: Understanding and Implementing the NIST AI Risk Management Framework (RMF)
Fair, with harmful bias managed: Section 3.7
- [Instructor] The final characteristic of a trustworthy AI system is fair, with harmful bias managed. This means an AI system should demonstrate equality and equity while protecting society from harmful discrimination. An AI system without harmful bias can still be unfair when the benefits of that system are not accessible to people experiencing disabilities or socioeconomic disadvantages. Biases can appear in a system without conscious, malevolent prejudice or racism. NIST defines three empirical representations of bias that must be identified and managed. Systemic biases are part of the organizations, institutions, and societies from which AI systems are created. Like the most entrenched and hidden part of an iceberg, these can be the most difficult to surface because they are part of the worldview and perceptions of those who impose them on the AI system. Human biases are typically influenced by the groups, societies, and systems from which people come. Their biases are implemented throughout the AI…
Contents
- Trustworthiness, valid, and reliable: Sections 3–3.1 (4m 25s)
- Safe, secure, resilient, accountable, and transparent: Sections 3.2–3.4 (3m 51s)
- Explainable, interpretable, and privacy: Sections 3.5–3.6 (3m 6s)
- Fair, with harmful bias managed: Section 3.7 (3m 14s)
- Effectiveness: Section 4 (3m)