How to Eliminate Bias in AI

Posted by Aible on Mar 25, 2022 2:34:08 PM

How to Eliminate Bias and Ensure Fairness in AI.

Eliminating bias in AI has emerged as one of the most important challenges in technology. With AI increasingly being used to make important decisions in areas such as hiring, college admissions, medicine, and bank lending, it’s critical that these decisions don’t discriminate based on race, gender, age, or other factors.

The stakes are high. A Gartner report predicts that through 2022, 85 percent of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. Biased AI produces unfair outcomes, perpetuates systemic racism, and exposes businesses to serious reputational damage and costly failures.

Unfortunately, many of the approaches that aim to mitigate AI bias don’t fix the problem. On the contrary, they often perpetuate bias and make it even harder to root out. Reactively solving these problems via best practices and governance is unlikely to scale. We need to solve them by design.

Consider an example from bank lending. If a bank has historically lent to certain racial groups or genders less often than to others, and the AI trains on that historical data, its recommendations will perpetuate the bias. That's because AI learns only from data: if bias is inherent in your data, it becomes further institutionalized when you use that data to train your AI.
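To make the mechanism concrete, here is a minimal sketch using synthetic data. The feature names, numbers, and approval logic are all hypothetical, not taken from any real lender; the point is simply that a model trained on approval decisions that historically disadvantaged one group learns to score that group lower, even though the two groups are otherwise identical.

```python
# Minimal sketch with synthetic data: a model trained on historically biased
# approval decisions reproduces the bias. All feature names, numbers, and the
# approval logic below are hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two demographic groups (0 and 1) with identical income distributions.
group = rng.integers(0, 2, n)
income = rng.normal(50_000, 15_000, n)

# Historical approvals: driven by income, but group 1 was approved only half
# as often at the same income level (the bias baked into the labels).
base_prob = 1 / (1 + np.exp(-(income - 50_000) / 10_000))
approve_prob = np.where(group == 1, base_prob * 0.5, base_prob)
approved = rng.random(n) < approve_prob

# Train on the biased history.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The trained model scores group 1 lower, institutionalizing the bias.
scores = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean approval score = {scores[group == g].mean():.2f}")
```

Running this prints a noticeably lower mean approval score for group 1, even though nothing about the group itself (other than the biased historical labels) differs.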

The conventional “solution” to this problem is to remove the race or gender variable from the AI so that it can’t directly consider those factors. Problem solved, right?

Wrong.
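One well-documented reason is that other features often act as proxies for the removed variable, letting the model reconstruct it anyway. Extending the hypothetical sketch above, dropping the group column but keeping a zip-code indicator that correlates with it leaves the disparity essentially intact:

```python
# Minimal sketch (same synthetic setup as above) of one standard failure mode:
# a proxy feature correlated with the removed variable (here, a hypothetical
# zip-code indicator) lets the model reconstruct the bias anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                                # sensitive attribute
income = rng.normal(50_000, 15_000, n)
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)   # 90%-correlated proxy

# Biased historical approvals, as in the previous sketch.
base_prob = 1 / (1 + np.exp(-(income - 50_000) / 10_000))
approve_prob = np.where(group == 1, base_prob * 0.5, base_prob)
approved = rng.random(n) < approve_prob

# Train WITHOUT the sensitive attribute ("fairness through unawareness").
X_blind = np.column_stack([income, zip_code])
blind_model = LogisticRegression().fit(X_blind, approved)

# The gap between groups persists: removing the variable hides the bias,
# it does not remove it from the learned model.
scores = blind_model.predict_proba(X_blind)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean approval score = {scores[group == g].mean():.2f}")
```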
