
3.1 Types and sources of bias in AI systems

4 min read • August 15, 2024

AI bias is a critical issue in modern technology. From algorithmic and selection biases to cognitive and historical biases, these flaws can lead to unfair outcomes and perpetuate societal inequalities. Understanding the types and sources of bias is crucial for developing ethical AI systems.

Data collection, feature engineering, and human factors all contribute to AI bias. These issues can have serious consequences in employment, finance, law enforcement, and healthcare. Real-world examples highlight the urgent need to address bias in AI to ensure fair and equitable outcomes for all.

Types of AI Bias

Algorithmic and Selection Bias

  • Algorithmic bias causes systematic errors in AI systems, leading to unfair outcomes
  • Selection bias occurs when training data misrepresents the target population
    • Skews model performance for underrepresented groups
    • Can amplify existing societal inequalities
  • Representation bias results from over- or under-representation of certain groups in data
    • Impacts model accuracy for specific demographics (ethnic minorities, age groups), as the sketch after this list shows
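
To make selection and representation bias concrete, here is a minimal sketch using synthetic data and scikit-learn (the group sizes, feature distributions, and shift values are all illustrative assumptions, not drawn from any real dataset). A classifier trained on a sample that under-represents one group scores noticeably worse for that group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature data; each group's decision boundary sits at a different offset."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training sample; group B is under-represented.
Xa, ya = make_group(1000, shift=0.0)  # 1,000 examples from group A
Xb, yb = make_group(50, shift=1.5)    # only 50 examples from group B
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples: the model fits group A's boundary,
# so accuracy drops for group B.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(f"group {name} accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

Oversampling or reweighting the under-represented group during training would narrow the gap, which is why dataset audits matter before model selection.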

Cognitive and Historical Bias

  • Confirmation bias stems from developers favoring data supporting preexisting beliefs
    • Can reinforce stereotypes or flawed assumptions in AI systems
  • Historical bias perpetuates societal prejudices present in training data
    • Reproduces discriminatory patterns from past decisions (hiring practices, lending)
  • Measurement bias arises when features inaccurately represent intended concepts
    • Leads to flawed predictions or classifications (using zip codes as proxies for race); a quick proxy check follows this list
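
One hedged way to catch measurement bias before training is to correlate each candidate feature against protected attributes: a strong correlation suggests the feature measures group membership rather than the intended concept. The data below is synthetic, and the 0.5 flag threshold is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, size=n)  # binary protected attribute (toy encoding)

features = {
    "income":   rng.normal(size=n),                               # unrelated by construction
    "zip_code": protected * 1.0 + rng.normal(scale=0.5, size=n),  # built as a proxy here
}

for name, values in features.items():
    r = np.corrcoef(values, protected)[0, 1]
    flag = "  <-- likely proxy, audit before use" if abs(r) > 0.5 else ""
    print(f"{name}: correlation with protected attribute = {r:+.2f}{flag}")
```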

Aggregation and Model-Specific Bias

  • Aggregation bias occurs when models fail to account for subgroup differences
    • Results in poor performance for specific groups within the population
    • Can mask disparities in model accuracy across demographics (see the sketch after this list)
  • Model architecture choices impact types and extent of bias in AI systems
    • Different algorithms may exhibit varying levels of fairness (decision trees vs. neural networks)
    • Hyperparameter tuning can inadvertently introduce or amplify biases
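
Disaggregated evaluation is the standard countermeasure to aggregation bias: report metrics per subgroup rather than one pooled number. A toy illustration with invented labels and predictions:

```python
import numpy as np

# Hypothetical results: 900 samples from group A, 100 from group B.
groups = np.array(["A"] * 900 + ["B"] * 100)
y_true = np.ones(1000, dtype=int)
y_pred = np.ones(1000, dtype=int)
y_pred[np.flatnonzero(groups == "B")[:50]] = 0  # the model errs on half of group B

print(f"overall accuracy: {(y_true == y_pred).mean():.0%}")  # 95% -- looks fine pooled
for g in ["A", "B"]:
    mask = groups == g
    print(f"group {g} accuracy: {(y_true[mask] == y_pred[mask]).mean():.0%}")
# group A: 100%, group B: 50% -- the pooled metric hides a severe disparity
```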

Sources of AI Bias

  • Data collection methods introduce bias through sampling techniques
    • Online surveys may exclude certain demographics (elderly, low-income)
    • Convenience sampling can lead to non-representative datasets (see the audit sketch after this list)
  • Imbalanced training data causes biased outputs for underrepresented groups
    • Facial recognition systems trained primarily on light-skinned faces
    • Speech recognition models struggling with accents or dialects
  • Data labeling processes can inject human biases into AI systems
    • Inconsistent or subjective labeling of training data
    • Cultural biases in image or text classification tasks
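
One inexpensive guard against collection bias is an audit comparing the demographic composition of the collected sample with an external benchmark such as census data. The shares below are hypothetical placeholders, not real survey or census figures:

```python
# Hypothetical age-group shares: collected sample vs. census benchmark.
sample_share = {"18-29": 0.45, "30-49": 0.40, "50-64": 0.10, "65+": 0.05}
census_share = {"18-29": 0.20, "30-49": 0.33, "50-64": 0.25, "65+": 0.22}

for group in census_share:
    ratio = sample_share[group] / census_share[group]
    flag = "  <-- under-represented" if ratio < 0.8 else ""
    print(f"{group}: sample {sample_share[group]:.0%} vs. census "
          f"{census_share[group]:.0%} (ratio {ratio:.2f}){flag}")
```

In this toy audit, an online-survey-style sample skews young, and the 65+ group surfaces immediately as under-collected.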

Feature Engineering and Algorithm Design

  • Feature selection emphasizes or de-emphasizes certain attributes
    • Excluding relevant features can lead to incomplete model representations
    • Including sensitive attributes may result in direct discrimination
  • Choice of algorithms impacts bias presence in AI systems
    • Some models are more interpretable, allowing for easier bias detection (linear regression)
    • Complex models may obscure biases within their decision-making processes (deep neural networks)
  • Proxy variables correlate with protected attributes, introducing unintended bias (demonstrated in the sketch after this list)
    • Using zip codes as a proxy for race in lending decisions
    • Educational background as a proxy for socioeconomic status in hiring
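
The proxy problem is why simply dropping the protected attribute ("fairness through unawareness") fails. In the synthetic sketch below (feature names, coefficients, and noise levels are all assumptions), a model trained without the protected attribute still reproduces the historical approval gap, because the zip-code feature encodes the same signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000

protected = rng.integers(0, 2, size=n)                      # protected attribute
zip_code = protected * 0.9 + rng.normal(scale=0.3, size=n)  # strong proxy for it
income = rng.normal(loc=1.0, size=n)

# Historical approvals were biased against the protected group.
approved = (income - 0.8 * protected + rng.normal(scale=0.3, size=n) > 0.4).astype(int)

# Train WITHOUT the protected attribute -- only income and zip code.
X = np.column_stack([income, zip_code])
pred = LogisticRegression().fit(X, approved).predict(X)

for g in (0, 1):
    print(f"group {g} predicted approval rate: {pred[protected == g].mean():.0%}")
# The gap persists: zip_code lets the model reconstruct group membership.
```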

Human Factors and Feedback Loops

  • Developer biases unconsciously encoded during system development
    • Personal experiences and cultural backgrounds influence design choices
    • Lack of diverse development teams can lead to blind spots in bias detection
  • Feedback loops in deployed systems amplify existing biases over time (simulated in the sketch after this list)
    • Biased predictions influence future data collection (targeted advertising)
    • Self-reinforcing cycles in recommendation systems (content personalization)
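
A toy simulation shows how a feedback loop turns a tiny initial advantage into dominance. The setup is invented for illustration: four items with identical true appeal, a ranker that only surfaces its current top item, and clicks that accrue only to whatever was shown:

```python
import numpy as np

scores = np.array([0.52, 0.50, 0.50, 0.48])  # item 0 starts with a tiny edge
true_appeal = 0.5                            # users like every item equally

for step in range(20):
    top = int(np.argmax(scores))      # the system only recommends its top item
    scores[top] += true_appeal * 0.1  # clicks feed back into that item's score

print(np.round(scores, 2))  # [1.52 0.5  0.5  0.48]: the head start became a monopoly
```

Real recommenders are stochastic and rank many items, but the self-reinforcing mechanism, exposure drives engagement drives exposure, is the same.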

Impact of Biased AI

Employment and Financial Consequences

  • Biased AI in hiring perpetuates workplace inequalities
    • Automated resume screening favoring certain demographics
    • Interview analysis systems misinterpreting cultural communication styles
  • Credit scoring and loan approval systems limit financial opportunities
    • Denying loans to qualified applicants from minority groups
    • Offering higher interest rates based on biased risk assessments

Law Enforcement and Criminal Justice

  • Facial recognition systems lead to false identifications
    • Disproportionate surveillance of marginalized communities
    • Wrongful arrests due to misidentification (Robert Williams case in Detroit)
  • Automated decision-making in criminal justice perpetuates systemic racism
    • Biased risk assessment tools influencing bail and sentencing decisions
    • Predictive policing algorithms reinforcing over-policing in certain neighborhoods

Healthcare and Social Implications

  • Biased AI in healthcare results in misdiagnoses and inadequate treatment
    • Underdiagnosis of skin conditions in patients with darker skin tones
    • Gender bias in symptom recognition for heart attacks
  • Content recommendation systems create echo chambers and filter bubbles
    • Amplification of extreme viewpoints in social media feeds
    • Limited exposure to diverse perspectives, increasing societal polarization
  • Large language models generate and amplify stereotypes
    • Reinforcing gender biases in occupation-related text generation
    • Propagating cultural stereotypes in creative writing applications

Real-World AI Bias Examples

Criminal Justice and Law Enforcement

  • COMPAS recidivism prediction tool exhibited racial bias in risk assessments
    • Overestimated recidivism risk for Black defendants
    • Underestimated risk for white defendants with similar profiles
  • Facial recognition systems used by law enforcement show lower accuracy for minorities
    • Higher false positive rates for people of color (NIST study); the sketch after this list shows how such rates are computed
    • Gender bias with lower accuracy for women, especially women of color
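
Findings like these rest on disaggregated error rates. Here is a minimal sketch of the computation on made-up match decisions (the error probabilities are illustrative assumptions, not NIST's or ProPublica's figures):

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of true negatives that were wrongly flagged as positive."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

rng = np.random.default_rng(3)
y_true_a = rng.integers(0, 2, 1000)  # hypothetical ground truth, group A
y_true_b = rng.integers(0, 2, 1000)  # hypothetical ground truth, group B

# Simulate a system whose decisions are wrong 5% of the time for group A
# but 20% of the time for group B.
y_pred_a = np.where(rng.random(1000) < 0.05, 1 - y_true_a, y_true_a)
y_pred_b = np.where(rng.random(1000) < 0.20, 1 - y_true_b, y_true_b)

print(f"group A FPR: {false_positive_rate(y_true_a, y_pred_a):.1%}")
print(f"group B FPR: {false_positive_rate(y_true_b, y_pred_b):.1%}")
# A disparity of the kind the NIST study documented (real magnitudes vary by algorithm).
```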

Employment and Financial Services

  • Amazon's experimental AI recruiting tool showed bias against women candidates
    • Penalized resumes containing words like "women's" (women's chess club)
    • Favored language patterns more common in male applicants' resumes
  • Apple Card credit limit controversy revealed gender bias in financial algorithms
    • Women offered lower credit limits than men with similar financial profiles
    • Highlighted issues of transparency in AI-driven financial decision-making

Technology and Healthcare Applications

  • Google Photos image recognition system mislabeled Black people as "gorillas"
    • Exposed racial bias in computer vision algorithms
    • Highlighted importance of diverse training data in image recognition
  • Healthcare AI systems perform less accurately on darker skin tones
    • Skin cancer detection algorithms showed lower sensitivity for darker skin
    • Pulse oximeters overestimating oxygen levels in Black patients
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.

