From the course: Leveraging AI for Small and Medium Business Growth


Understanding AI systems' biases

- [Instructor] AI systems can be biased because of biases in the data they are trained on or biases in the algorithms themselves. These biases can lead to unfair or inaccurate decisions that seriously affect individuals and society. In other words, biased systems violate the principle of fairness, are unethical, and can even be illegal, regardless of whether the firm is aware of the biases its AI systems create. Data bias occurs when the data used to train an AI system does not represent the real world. For example, if a facial recognition system is trained on a data set composed mostly of white faces, it may not accurately recognize people of color. Imagine a medium-sized lending firm using AI to assess creditworthiness. If its historical data reflects past societal biases, for example, favoring specific demographics, the AI model might unintentionally perpetuate those biases by denying loans to deserving individuals from underrepresented groups. This harms those individuals and limits the firm's potential…
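The lending example can be illustrated with a toy simulation. Everything here is hypothetical (the group labels, score cutoff, and denial rate are made up for demonstration): historical loan decisions encode a bias against group "B", and a naive model that simply learns historical approval rates faithfully reproduces that bias for equally qualified applicants.

```python
import random

random.seed(0)

# Hypothetical historical loan decisions. Applicants in group "B" with the
# same credit score were historically denied ~30% of the time, baking a
# societal bias into the training labels.
def make_history(n=10_000):
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        score = random.randint(300, 850)
        qualified = score >= 650                  # proxy for creditworthiness
        biased_denial = group == "B" and random.random() < 0.30
        approved = qualified and not biased_denial  # bias in the label
        rows.append((group, score, approved))
    return rows

# A naive "model" that learns historical approval rates per
# (group, qualified) bucket will reproduce whatever bias the data contains.
def train(rows):
    counts = {}
    for group, score, approved in rows:
        key = (group, score >= 650)
        ok, total = counts.get(key, (0, 0))
        counts[key] = (ok + approved, total + 1)
    return {key: ok / total for key, (ok, total) in counts.items()}

def approval_rate(model, group):
    # Predicted approval rate for *qualified* applicants of this group.
    return model[(group, True)]

model = train(make_history())
print(f"qualified group A approval rate: {approval_rate(model, 'A'):.2f}")
print(f"qualified group B approval rate: {approval_rate(model, 'B'):.2f}")
```

Equally qualified applicants end up with very different approval rates purely because of the group label in the historical data, which is exactly the disparity a real fairness audit would flag.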
