
Bias in AI: Why it happens and how to fix it


AI bias is an issue that poses serious concerns for businesses today. At a minimum, bias results in ineffective decision making, lowers productivity, and hurts ROI. However, these performance problems are only the tip of the iceberg.

AI bias—which comes in many forms, such as race or gender bias—can lead to discrimination, negatively impact lives, and create severe legal issues for organizations. Businesses with biased AI have lost millions of dollars in lawsuits and suffered significant reputational damage. The rapid growth of AI means more businesses will face similar struggles if they don’t take action to mitigate AI bias. It is important for businesses to learn about the risks and costs associated with bias, and how to protect against them.

What causes AI bias?

  • Incomplete datasets

An AI model's training is based on a long series of true/false scenarios. The model is given a scenario and told whether it is positive or negative. A common example of this is the image verification (CAPTCHA) prompts that ask users to select all images meeting a certain criterion (select each picture that contains a fire hydrant, for example). The model relies on that data in the future. Through this process, the AI "learns" what a fire hydrant looks like. According to TowardsDataScience, even a simple image classifier requires thousands of images to train.
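To make this concrete, the following is a minimal sketch, in Python with scikit-learn, of a model learning from labeled positive/negative examples. The synthetic feature vectors stand in for image data; this is illustrative only, as a real image classifier would need far richer features and, as noted, thousands of labeled images.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Label 1 ("contains a fire hydrant") vs. label 0 ("does not").
# Each "image" is reduced to two toy numeric features for illustration.
X_positive = rng.normal(loc=2.0, size=(500, 2))
X_negative = rng.normal(loc=-2.0, size=(500, 2))
X = np.vstack([X_positive, X_negative])
y = np.array([1] * 500 + [0] * 500)

# The model is shown each example and told whether it is positive or negative.
model = LogisticRegression().fit(X, y)

# It then relies on that training data to judge new, unseen inputs.
print(model.predict(rng.normal(loc=2.0, size=(1, 2))))  # likely [1]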

An AI model’s learning process is primitive compared to a human’s and requires massive amounts of information. Because AI relies less on reason than on trial and error, the training process is delicate. Incomplete information can result in models being trained incorrectly. The result? Biased and faulty algorithms. For example, if the only two people in a dataset who are over six feet tall both have poor credit, the AI may determine that every person six feet tall or over has poor credit. This is how discrimination happens.
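The sketch below, using entirely hypothetical data and scikit-learn's DecisionTreeClassifier, shows that failure mode directly: because the training sample contains no tall people with good credit, the model learns that tall means poor credit.

from sklearn.tree import DecisionTreeClassifier

# Features: height in inches. Labels: 1 = poor credit, 0 = good credit.
# The only two people over six feet (72 inches) both happen to have poor credit.
heights = [[64], [66], [68], [70], [74], [75]]
poor_credit = [0, 0, 0, 0, 1, 1]

tree = DecisionTreeClassifier().fit(heights, poor_credit)

# With no tall counterexamples, the tree generalizes the accident into a rule:
print(tree.predict([[76]]))  # [1] -- everyone over ~6 feet is flagged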

  • Human prejudice

Humans are flawed, and our flaws permeate the design of AI models. Any prejudice a designer holds may be reflected in a model's structure and output, even unintentionally. Every variable may disproportionately impact groups of people from different backgrounds in different ways.

Prejudice is a big part of why credit scores are being deemed inherently racist. Though it might seem strange to think that an objective number measuring spending and payment habits could be racist, the unfortunate reality is that credit scores are entangled with a history of social inequality. Even the most financially diligent people may face lower credit scores because they received less financial support from their elders. In this way, the generational wealth gap creates bias against marginalized groups in many automated systems.

A developer of a different background might not recognize bias in credit scores and may weigh credit differently in AI models as a result. This is the issue that must be tackled: Biased data is pulled from a biased society made up of biased people, whose opinions are based on their own personal life experiences. Creating a truly unbiased model requires fighting against the tangled web of human prejudice, a difficult but necessary task.

A great resource for learning more about systemic and unconscious bias is “Blindspot: Hidden Biases of Good People.” Authored by two professors, Anthony Greenwald of the University of Washington and Mahzarin Banaji of Harvard University, the book demonstrates how unconscious preferences manifest themselves. Not only is it influential in social psychology, but it is also a recommended read for everyone working with AI.

Solving AI bias

While no system is perfect, there are ways to minimize bias in AI through people, processes, technology, and regulations.

Build diverse teams: To account for logical bias (which variables to use, how they impact results) in development, construct teams with individuals from diverse backgrounds. Diversity allows teams to draw on a broad pool of unique life experiences. Building a diverse data science team helps prevent any single background and its associated views from dominating the development process.

Conduct post hoc analysis: After a model produces results, both the data and the outputs should be analyzed for bias before the model reaches the market. Typically, this is done by training multiple copies of the same AI on different data sets to isolate variables and check for problematic patterns.
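One common form such a check can take, sketched below with illustrative numbers, is comparing a model's positive-outcome rates across groups. The 0.8 cutoff follows the widely cited "four-fifths rule"; the right threshold and group definitions depend on the application and jurisdiction.

import numpy as np

def selection_rates(predictions, groups):
    """Rate of positive predictions (e.g., loan approvals) per group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

# Hypothetical model outputs and group labels from a held-out test set.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = selection_rates(predictions, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio = {ratio:.2f}")

if ratio < 0.8:  # flag for human review before the model reaches the market
    print("Potential bias detected: investigate before release.")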

Create process-based transparency: Transparency is an additional layer of defense against bias between development and release. The best way to ensure transparency is through documentation. Thorough documentation allows for easy review of model logic in a way that is understandable across specializations and skill levels. By knowing exactly what a model does and how, business owners can make educated decisions and properly implement the model.
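One lightweight way to put this into practice, sketched below with entirely hypothetical field names and values, is to keep a structured "model card" alongside each model so reviewers of any specialization can see at a glance what the model does and how.

import json

model_card = {
    "model_name": "credit_risk_v2",  # hypothetical example throughout
    "purpose": "Estimate probability of loan default",
    "training_data": "2018-2023 loan applications, 1.2M rows",
    "features_used": ["income", "debt_to_income", "payment_history"],
    "features_excluded": ["race", "gender", "zip_code"],
    "known_limitations": "Underrepresents applicants under 25",
    "bias_checks": {"disparate_impact_ratio": 0.91, "threshold": 0.8},
    "owner": "risk-analytics-team",
    "last_reviewed": "2025-01-15",
}

# Ship the card with the model artifacts so it is reviewed alongside the code.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)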

Monitor continuously: No data scientist should claim that their AI model is 100% accurate (if they do, you should be very wary). Every AI model should be continuously monitored throughout its life cycle. AI performance and results should be analyzed on a regular basis to ensure each model functions within risk guidelines. Proactive businesses should take this a step further and ask customers about their personal experience with the AI, seeking out points of contention and a better understanding of where models fall short.
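A minimal sketch of that monitoring loop follows, assuming a monthly review cadence; the metric values and the risk floor are illustrative and would be agreed with business owners in practice.

# Hypothetical accuracy measured at each monthly review.
monthly_accuracy = {
    "2025-01": 0.91,
    "2025-02": 0.90,
    "2025-03": 0.84,  # drift: live data no longer resembles training data
}

RISK_FLOOR = 0.88  # set per risk guidelines, not a universal constant

for month, accuracy in monthly_accuracy.items():
    if accuracy < RISK_FLOOR:
        # In production this would page the owning team or open a ticket.
        print(f"{month}: accuracy {accuracy:.2f} is below floor {RISK_FLOOR}")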

Encourage and support regulations: To truly help rid AI of bias, it is important to advocate for regulations. Increasing calls for AI regulation as a way to combat AI harms are supported by academia, industry leaders, NGOs, and the general public around the world. In the past three years, over 300 new AI guidelines and principles have been developed. Governing agencies such as the Federal Reserve Board in the U.S., OSFI in Canada, and the PRA in the U.K. have all been soliciting feedback on the use of AI from academia, industry experts, and members of the general public. The European Commission released its first draft of AI regulations on April 21, 2021, building on the OECD AI Principles established in 2019. These are the same entities that introduced the world to the data privacy principles that became GDPR, the gold standard for data privacy regulations around the world.

A recent survey found that 81% of technology leaders who use AI think government regulations would be helpful to define and prevent bias. Working together to support and adopt regulations is critical.

In short, AI bias is a real threat. It is harmful to businesses and the general public alike. Ranging from a drag on ROI to a source of broad societal harm, bias in AI is a problem to be solved, not ignored. Proper due diligence should be expected when creating AI, including proper training, documentation, and monitoring of AI models. When handled correctly, AI is an extremely powerful tool that can shape our society. We can strive to move down that path by carefully considering the people, processes, technology, and regulations used to advance AI.
