Understanding AI Bias

Written by Coursera Staff

AI models play a crucial role in modern decision-making, but addressing bias ensures they work fairly for everyone. Explore how AI bias happens and ways to develop more equitable models.

[Featured Image] A team of developers works on a robot, paying strict attention to the programming so as not to introduce AI bias into the machine learning.

AI bias occurs when artificial intelligence models perpetuate and reinforce human bias, sometimes with harmful real-world consequences. Researchers have demonstrated that AI models can be trained on biased data and follow rules shaped by human bias, which can seep into a team’s AI programming. Learn more about how bias happens and what steps you can take to prevent bias in your AI models. 

What is an example of AI bias?

Researchers have identified many examples of AI bias with real-world implications for the people involved. Consider these examples of AI bias and how each could impact individuals:

  • Speech recognition bias against non-American accents: Voice command and speech recognition technology built by American companies often makes errors when interpreting the speech of people without Mid-Atlantic North American accents. This forces users to adapt how they speak, and those who cannot change their accent may be unable to use the technology at all. 

  • Bias against written work of non-native English speakers: Educational institutions use AI models to detect computer-generated text. A college or university might use such a model to flag learners who cheat with a generative AI tool like ChatGPT. Research demonstrates that these detectors frequently misclassify work by non-native English speakers as AI-generated. This unfairly discriminates against non-native English-speaking learners, marginalizing them while also damaging the integrity of the anti-cheating system. 

  • AI bias in health care: Bias in AI algorithms can lead to inequitable patient care in health care settings. For example, a hospital could use an algorithm to predict which patients will require additional care in the future. The algorithm could consider a variety of factors, including how much each individual spent on health care in the past. Heavily weighting this factor can create bias against patients who have struggled to afford health care, which is often tied to factors like race and socioeconomic background. As a result, these patients may be less likely to receive critical care in the future (see the sketch after this list). 

  • Perpetuating stereotypes in careers and language: AI models may inadvertently associate professions with certain genders, reflecting historical patterns in training data. For example, a generative AI model might assume a flight attendant is a woman and a lawyer is a man. This can have negative real-world consequences when AI models screen job candidates in these fields and discriminate against the underrepresented gender. It can also manifest in search engine algorithms that return different job ads to men and women searching for work in a given field. 

  • Predictive policing: A police department could use an algorithm to help plan how it deploys its resources, such as predicting which areas require more of a police presence. This sort of model typically relies on historical data about police or criminal activity in an area. Any existing patterns of bias in policing, such as disproportionately arresting Black community members or racially profiling minority communities, could become ingrained as AI bias. 
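
To make the health care example above concrete, here is a minimal sketch in Python using entirely hypothetical numbers. It shows how a score that leans heavily on past spending as a proxy for medical need can rank a patient who could not afford care far below a patient with the same underlying need; it does not reflect any real hospital’s algorithm.

```python
# Minimal sketch: past health care spending used as a proxy for medical need.
# All numbers are hypothetical and chosen only to illustrate the mechanism.

patients = [
    {"name": "patient_a", "true_need": 8, "past_spending": 9000},  # could afford care
    {"name": "patient_b", "true_need": 8, "past_spending": 1500},  # could not afford care
]

def predicted_need(patient, weight_on_spending=0.001):
    """Toy risk score that leans heavily on past spending as a proxy for need."""
    return weight_on_spending * patient["past_spending"]

for p in patients:
    print(f'{p["name"]}: true need = {p["true_need"]}, predicted need = {predicted_need(p):.1f}')

# patient_a scores 9.0 and patient_b scores 1.5 despite identical true need:
# the proxy encodes access to care, not need for care.
```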

These examples illustrate how bias present in society can find its way into AI algorithms. Researchers have also found that biased AI models can influence human decision-making. A study published in the journal Scientific Reports demonstrates the cyclical nature of AI bias [1]: humans can introduce bias into AI, and the biased AI can then reinforce bias in humans, even when they are no longer working with the AI. 

Importance of addressing AI bias

AI bias harms not only individuals but also companies and organizations. For example, suppose a bank uses an AI algorithm to identify the most qualified loan candidates, and the algorithm is biased against applicants who are not white. In this situation, everyone involved suffers. First, qualified applicants who are not white are denied loans. Second, the applicants who jump ahead of them on the list might not be in the best position to responsibly take on a loan, despite the algorithm’s prediction, which could put them in a difficult financial position later. Finally, the bank suffers because the algorithm didn’t give it accurate information, so it could not work with the most qualified loan applicants. 

What are three sources of bias in AI?

Three sources of bias in AI are the training data itself, errors in how the algorithm processes data, and human bias. Explore how these three sources influence the algorithm. 

Data bias

Bias can start with biased data. In the example of police officers using an AI algorithm to predict future crime, even the most unbiased police department would have to reckon with historical data that reflects structural racism in the form of overpolicing of minority communities. Building an AI model on that data would likely produce a model that recommends biased actions, as the simplified sketch below illustrates. 
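
The following minimal sketch uses invented numbers to show the feedback problem: if recorded arrests mostly reflect where patrols were sent, a model that allocates future patrols in proportion to past arrests will keep over-policing the same neighborhoods, even when the underlying offense rates are equal.

```python
# Minimal sketch with made-up numbers: recorded arrests reflect where patrols
# were sent, so a model trained on them re-recommends the same neighborhoods.

true_offense_rate = {"neighborhood_a": 0.05, "neighborhood_b": 0.05}  # equal underlying rates
historical_patrols = {"neighborhood_a": 1000, "neighborhood_b": 200}  # skewed enforcement

# Arrests recorded in the historical data depend on patrol intensity.
recorded_arrests = {
    n: int(true_offense_rate[n] * historical_patrols[n]) for n in true_offense_rate
}

# Naive "predictive policing" rule: allocate future patrols in proportion to past arrests.
total = sum(recorded_arrests.values())
recommended_share = {n: arrests / total for n, arrests in recorded_arrests.items()}

print(recorded_arrests)    # {'neighborhood_a': 50, 'neighborhood_b': 10}
print(recommended_share)   # ~83% vs. ~17%, even though offense rates are identical
```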

Algorithmic bias

Another place bias can appear is in algorithmic decision-making based on experiential data. Consider, for example, the experimental resume-screening algorithm Amazon began building in 2014 and later scrapped. The algorithm could read resumes and recommend which individuals were most qualified for a job, ostensibly saving recruiters from wading through the resumes themselves. Amazon’s team trained the algorithm using data from applications submitted to Amazon over the years. 

Those applications came mostly from male applicants, reflecting the male-dominated state of the overall tech industry. The algorithm trained on this data and learned to prefer male applicants over female ones. Even when Amazon’s development team scrubbed the data to remove references to the applicant’s sex, the algorithm picked up on proxy data that indicated it. This is an example of algorithmic bias. 
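
The sketch below, using synthetic data and a hypothetical proxy feature, illustrates the mechanism: even with the sensitive column removed, a correlated feature can let a model reconstruct it. It is not Amazon’s actual model or data.

```python
# Minimal sketch with synthetic data: removing the "sex" column does not remove
# the signal if a correlated proxy feature remains. The feature and numbers are
# hypothetical, not Amazon's actual data.

applicants = [
    # (attended_womens_college, sex) -- sex is shown only to measure the leak
    (1, "F"), (1, "F"), (1, "F"), (0, "F"),
    (0, "M"), (0, "M"), (0, "M"), (0, "M"),
]

def predict_sex_from_proxy(attended_womens_college):
    """Crudest possible rule: infer sex from the proxy feature alone."""
    return "F" if attended_womens_college else "M"

correct = sum(predict_sex_from_proxy(proxy) == sex for proxy, sex in applicants)
print(f"Proxy alone recovers sex for {correct} of {len(applicants)} applicants")
# 7 of 8 here: a model rewarding historically "male-looking" resumes can still
# penalize women even though the sex column was scrubbed from the data.
```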

Human bias

Another source of bias in AI models is human, or cognitive, bias. Everyone, from software developers to organizational stakeholders, holds unconscious biases that shape how they think and act. These unconscious biases within a software development team can lead to bias in an algorithm. For example, a company might intend to deploy an algorithm worldwide but train it only on data from the United States, which may mean the algorithm is effective only for the company’s American users. 

What are some other types of AI bias? 

AI models and algorithms can err in several different ways. Some types of AI bias to be aware of include: 

  • Stereotyping bias: Stereotyping bias happens when an algorithm learns to reinforce stereotypes, such as assuming that an American with Latin heritage speaks Spanish. 

  • Prejudice bias: Prejudice bias happens when an AI model learns from data that reflects existing societal prejudices and faulty assumptions, such as assuming that a nurse must be a woman and a doctor must be a man. 

  • Selection bias: Biased sampling, incomplete data, or other problems that make a data set unrepresentative of the population a model will serve can result in selection bias. 

  • Measurement bias: Similar to selection bias, measurement bias results from incomplete data or data that doesn’t fully capture the information your AI model needs. For example, you may collect feedback only from customers who purchase from your company. The resulting data set wouldn’t provide accurate insight into why other customers choose a competitor’s brand over yours (see the sketch after this list). 
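
Here is a minimal sketch of measurement and selection bias using invented survey numbers: estimating customer satisfaction only from people who already purchased overstates how the whole market feels about the product.

```python
# Minimal sketch with invented survey numbers: measuring satisfaction only among
# purchasers overstates how the full market feels about the product.

purchasers = [5, 4, 5, 4, 5, 3, 4, 5]        # customers you actually surveyed
non_purchasers = [2, 1, 3, 2, 1, 2, 3, 2]    # people who chose a competitor

def mean(scores):
    return sum(scores) / len(scores)

print(f"Estimate from purchasers only: {mean(purchasers):.1f}")                    # 4.4
print(f"Estimate from the full market: {mean(purchasers + non_purchasers):.1f}")   # 3.2
# The model never sees why people walked away, so it cannot learn from them.
```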

How can we stop AI bias?

While bias is an inevitable part of the human experience, you can take steps to prevent bias in AI models and algorithms. Consider a few strategies for managing AI bias: 

  • Human-in-the-loop: A human-in-the-loop system requires a programmer or AI researcher to be “in the loop” to provide feedback to the algorithm and help the AI system solve problems. This can help prevent AI bias by allowing the developer to oversee the model’s iterative results and step in to correct its logic.

  • Counterfactual fairness: Counterfactual fairness is a way to address AI bias by asking whether the model would provide the same outcome in a hypothetical world where potentially discriminatory aspects of the data, such as race, religion, or gender, were different (see the simplified sketch after this list). 

  • Strive for transparency: One way to fight bias in AI models is to open your AI model to scrutiny. By implementing the most transparent process possible, you can put yourself in a position to get real, actionable feedback from people who are observing your system. 
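
Below is a simplified sketch of a counterfactual check, built around a hypothetical loan-scoring function: flip only the protected attribute and measure whether the score changes. A full counterfactual-fairness analysis would also model how changing that attribute affects downstream features, but the comparison logic is the same.

```python
# Simplified sketch of a counterfactual check: flip only the protected attribute
# and see whether the model's output changes. loan_score is a hypothetical
# stand-in for a trained approval model, not any real bank's system.

def loan_score(applicant):
    """Toy model: a fair model should ignore 'race' entirely."""
    score = 0.4 * applicant["credit_history"] + 0.6 * applicant["income_stability"]
    # A biased model might, explicitly or via proxies, do something like:
    # score -= 0.2 if applicant["race"] != "white" else 0.0
    return score

def counterfactual_gap(applicant, protected_key, counterfactual_value):
    """Change in output when only the protected attribute is altered."""
    flipped = dict(applicant, **{protected_key: counterfactual_value})
    return abs(loan_score(applicant) - loan_score(flipped))

applicant = {"credit_history": 0.8, "income_stability": 0.7, "race": "Black"}
gap = counterfactual_gap(applicant, "race", "white")
print(f"Counterfactual gap: {gap:.3f}")  # 0.000 for this fair toy model;
# a nonzero gap would flag a counterfactually unfair decision for this applicant.
```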

Learn about AI bias on Coursera.

You can take steps to minimize bias in your artificial intelligence models. Consider a course to help you learn about reducing bias and developing AI applications. For example, you could enroll in the IBM AI Developer Professional Certificate on Coursera. This 10-course series will connect you with professional-level training from IBM to help you start a career as an AI developer. 

Article sources

  1. Nature. “Humans Inherit Artificial Intelligence Bias,” https://www.nature.com/articles/s41598-023-42384-8. Accessed April 17, 2025.


This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.