IN THIS LESSON

Catch the Bias Before It Catches You

Every AI pipeline has three pressure points where bias can sneak in. You'll spot one risk at each stage (Data, Model, and Output), then write a safeguard question that forces any vendor, or classroom project, to confront that risk.


Quick Context: The Bias Pipeline

Before you start your reflection, remember:

Where Bias Hides:

1. DATA Stage 🗂️

  • Missing populations in training data

  • Historical biases baked into past examples

  • Sampling errors (who gets included/excluded)
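
To make the sampling question concrete, start by counting who is actually in the dataset. Below is a minimal Python sketch, assuming a hypothetical set of records tagged with a language-background field; the field name, the categories, and the 10% flag threshold are all illustrative, not drawn from any real tool.

```python
from collections import Counter

# Hypothetical training records; a real dataset would have many more fields.
records = (
    [{"language_background": "native_english"}] * 95
    + [{"language_background": "esl"}] * 5
)

counts = Counter(r["language_background"] for r in records)
total = sum(counts.values())

# Report each group's share and flag anything below an (arbitrary) 10% floor.
for group, n in counts.items():
    share = n / total
    flag = "  <-- underrepresented?" if share < 0.10 else ""
    print(f"{group}: {n} records ({share:.0%}){flag}")
```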

2. MODEL Stage 🧮

  • Overfitting to majority patterns

  • Amplifying small biases into big ones

  • Hidden correlations the model discovers
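
To see how a model can look accurate while failing a minority group, consider this toy grader. The data and the learned rule are invented for illustration: the grader simply rewards the essay structure that dominated its training set, so overall accuracy looks strong while every creative-but-valid essay is marked wrong.

```python
# Toy data: (essay_style, true_label) pairs, where 1 = "good essay".
data = [("standard", 1)] * 90 + [("creative", 1)] * 10

def majority_pattern_grader(style):
    # The "learned" rule: only the structure seen most in training is rewarded.
    return 1 if style == "standard" else 0

correct = sum(majority_pattern_grader(s) == y for s, y in data)
creative = [(s, y) for s, y in data if s == "creative"]
creative_correct = sum(majority_pattern_grader(s) == y for s, y in creative)

print(f"Overall accuracy: {correct / len(data):.0%}")               # 90%
print(f"Creative essays:  {creative_correct / len(creative):.0%}")  # 0%
```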

3. OUTPUT Stage 📊

  • Misleading confidence scores

  • Biased default settings

  • Presentation that hides uncertainty

Real Classroom Examples:

  • Essay grader trained only on AP English essays → struggles with ESL writing

  • Math helper trained on standard notation → confused by international formats

  • Behaviour predictor showing 95% confidence → teacher over-relies on predictions


Part 1: Reflection Form

Instructions: Complete each stem with specific, concrete examples (keep each under 40 words). Think about your actual classroom and students.

Fill in the blanks:

DATA

"If the dataset mostly comes from __________, the model might ignore __________."

Example: "...suburban schools with high test scores, the model might ignore strategies that work for rural students or those with learning differences."

MODEL

"If the algorithm over-fits, it could __________."

Example: "...memorise specific writing patterns from its training data and penalise creative but valid approaches to assignments."

OUTPUT/UI

"A misleading confidence score could make teachers __________."

Example: "...skip their own assessment and miss a student's breakthrough moment because the AI showed 90% confidence in a wrong prediction."

SAFEGUARD QUESTION

"Before adopting this tool, I will ask: __________?"

Example: "What specific student populations were included in your training data, and how did you validate performance across different demographics?"


Pro Tips for Strong Reflections

For DATA Bias:

Think about who's not in the room when data gets collected:

  • Students without reliable internet

  • Non-native English speakers

  • Students with disabilities

  • Different cultural backgrounds

For MODEL Bias:

Consider what patterns get rewarded vs punished:

  • Standard vs creative approaches

  • Formal vs informal communication

  • Speed vs depth of thinking

  • Compliance vs innovation

For OUTPUT Bias:

Imagine how the display affects teacher decisions:

  • Color coding (red = bad?)

  • Ranking students numerically

  • Binary classifications

  • "Confidence" without context

For Safeguard Questions:

Make them specific and answerable:

  • ❌ "Is your AI biased?"

  • ✅ "What percentage of your training data came from schools?"

  • ✅ "How do you measure performance gaps across student subgroups?"

  • ✅ "Can teachers see which features most influenced each prediction?"

    • Create a "Bias Checklist" for your department

    • Interview your IT director about current AI tools

    • Design a student lesson on algorithmic bias