MISSION 5

The Glitch Zone: When AI Goes Wrong

“What happens when AI slips, glitches, or goes too far?”

Last time, you stepped into the Creative Tech Studio and saw how AI can help humans imagine, create, and include more people... when it’s guided with care.

Now you’re crossing into a darker district of Digital City:

The Glitch Zone

AI tools that were supposed to help are:

  • mislabeling people
  • inventing fake facts
  • repeating unfair patterns
  • quietly doing the work for students instead of with them
Every powerful tool has a shadow.
What happens when AI stops serving people, and people stop paying attention?

This mission’s big question:

How can we spot when AI is going wrong, and what can humans do differently to prevent harm?
Open Mission Entry Gate

The Glitch Office Investigation

The office should be a calm research space… but something’s gone wrong. Screens are glitching. Tools are misfiring. AI is acting on its own patterns.

Your mission: Scan the scene and find all 5 AI glitches.

Field Notes for Earth Command

When AI Goes Wrong (and Why It Needs You)

You’ve seen what AI mistakes look like on the surface.
Now we’ll step inside the Glitch Zone to understand why they happen, and why your choices matter.

When AI Learns Unfair Things

AI learns from patterns in data. If that data reflects unfairness in the world, AI can repeat and amplify those patterns.

Notes for your report:

  • Some search engines have shown racist and sexist results for searches about certain groups of people.
  • A criminal justice risk-scoring tool mislabeled Black people as “high risk” more often than white people with similar records (the “Machine Bias” case).
  • Face recognition works better on some skin tones than others: darker-skinned people and women are more likely to be misidentified.

Why?

  • AI learns from past data.
  • If the past includes racism, sexism, or other inequalities, those patterns get “baked” into the model.
AI doesn’t decide to be unfair.
It copies the patterns it’s given.

Warning:

  • AI systems inherit inequalities from existing power structures and data practices.
  • Without careful checking, AI can reinforce stereotypes and unequal treatment.
  • Schools and communities need to audit and question AI tools.
When AI repeatedly gets things wrong for some people more than others, the problem is in the data and design, not in the people.
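
To make that concrete, here is a minimal sketch in Python (the groups, labels, and numbers are all invented for illustration): a toy “model” that simply copies the most common label for each group in its training data, plus a tiny audit that checks who gets wrongly flagged.

```python
from collections import Counter

# Invented training data: (group, label) pairs with an unfair history.
# Group B was labeled "high risk" far more often than group A.
training_data = [
    ("A", "low risk"), ("A", "low risk"), ("A", "low risk"), ("A", "high risk"),
    ("B", "high risk"), ("B", "high risk"), ("B", "high risk"), ("B", "low risk"),
]

# A toy "model": predict whatever label was most common for each group.
# Real models are far more complex, but they too latch onto the
# strongest patterns in their training data.
def train(data):
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(training_data)
print(model)  # {'A': 'low risk', 'B': 'high risk'} -- the old unfairness, copied

# A tiny audit: of the people who are truly low risk, how many does
# the model wrongly flag as "high risk" in each group?
test_cases = [("A", "low risk"), ("A", "high risk"),
              ("B", "low risk"), ("B", "high risk")]
for group in ("A", "B"):
    truly_low = [g for g, truth in test_cases if g == group and truth == "low risk"]
    wrongly_flagged = sum(model[g] == "high risk" for g in truly_low)
    print(f"group {group}: {wrongly_flagged}/{len(truly_low)} low-risk people wrongly flagged")
```

Even this toy audit makes the disparity visible. Real audits ask the same question with real data and many more cases.
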
Hallucinations: When AI Sounds Smart but Is Totally Wrong

You’ve probably seen chatbots or generators write long, confident answers.

Large language models (LLMs) generate answers by predicting the most likely next word, not by checking if it’s true.

Because of that, they can:

  • Invent fake facts
  • Misquote people
  • Create references or sources that don’t exist

These made-up results are called hallucinations.

These aren’t just random bugs. Researchers describe such systems as “stochastic parrots”: they remix patterns without real understanding.

AI is trained to sound fluent, not to guarantee truth.
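
Here is a minimal sketch of that core idea, using a made-up scrap of training text: a tiny model that always picks the word it has most often seen come next. Notice that nothing in it ever checks whether the output is true, only whether it is likely.

```python
from collections import Counter, defaultdict

# Made-up training text. The model will only ever learn these patterns.
text = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the sun is made of gas ."
)
words = text.split()

# Count, for each word, which words follow it and how often.
next_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_counts[current][following] += 1

def generate(start, length=6):
    """Always pick the most likely next word -- fluent, not fact-checked."""
    out = [start]
    for _ in range(length):
        followers = next_counts[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
# Prints "the moon is made of cheese ." because "cheese" was the most
# common pattern -- the model never asks whether that is true.
```

Real LLMs predict over far richer patterns than word pairs, but the training goal has the same shape: likely text, not verified text.
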

So:

  • AI might give you a fake quote that sounds real.
  • It might create a study that never existed.
  • It might be completely wrong, but still sound confident.
If an AI answer sounds super confident, your job is to ask: “How do I know this is true?”
Misinformation & Hidden Sources: When AI Uses Work You Didn’t See

AI can now create realistic:

  • Images
  • Videos
  • Voices
  • News-style articles

Used well, these tools can support creativity.

Used badly, they can spread misinformation and deepfakes.

Used carelessly, they can reuse someone’s ideas without credit.

Important detail to highlight in your report:

  • AI-generated content can be used to sway public conversations at scale.
  • People could have their image or voice faked to spread lies.
  • AI outputs may echo someone else’s work.
  • Cultural knowledge can be reused in ways communities didn’t intend.
  • If “anything can be faked,” people might stop trusting anything.
The more realistic AI-generated media becomes, the more important media literacy and critical thinking are.

It’s important to ask questions like:

  • Who made this?
  • Can I find the same info somewhere else?
  • Whose style or ideas might this be using?
  • Is this using someone’s knowledge in a fair way?
  • Who benefits if I believe this?
AI didn’t invent lying, but it gave lying powerful new tools.
That’s why your skills in spotting tricks matter more than ever.
Creativity Drain & Learning Loss: When AI Thinks Instead of You

Even when AI isn’t “attacking” anyone, it can quietly weaken how we learn.

When AI starts doing all the writing, solving, and explaining…

  • Your memory and reasoning don’t get enough practice.
  • You get used to instant answers instead of wrestling with tough questions.
  • You skip the “thinking time” that builds real understanding.

AI also tends to generate average or “safe” responses, because it copies common patterns:

  • Less weirdness
  • Less originality
  • Less of your personal voice

Incoming report from Earth Headquarters:

  • Productive struggle (time spent stuck and thinking) is essential for real learning.
  • If AI steps in too quickly, your own thinking muscles don’t grow.
  • Over time, learners might lose confidence in their own ideas and feel like AI is always “smarter.”
Think of AI like a powerful calculator for ideas.
Great when you already understand the problem.
Dangerous if you let it do all the thinking for you.
Hidden Costs: Labor, Jobs & the Planet

AI isn’t just floating in “the cloud.” It has real-world costs.

Environmental Impact

  • Training and running big AI models uses huge amounts of electricity.
  • Data centers need massive amounts of water for cooling.
  • This links AI to climate, land use, and resource extraction.

Hidden Human Labor

  • People label images, clean data, and review harmful content.
  • They often do tiring, emotionally heavy work.
  • Their labor is mostly invisible to everyday users.

Job Fears & Economic Shifts

  • People worry about AI replacing certain jobs.
  • In creative fields (theatre, design, writing), artists feel tension:
    AI can co-create and help, but it can also be used to undervalue human work.
When you use AI, you’re not just using a “magic brain.”
You’re using electricity, water, and people’s time/ideas.
Being responsible means remembering those invisible costs.
Cultural & Indigenous Concerns

AI does not affect everyone equally.

Some risks you need to note in the report:

Missing Languages & Knowledge

  • Many Indigenous and minority languages are left out of training data.
  • They may be badly translated or treated as unimportant.
  • Important cultural knowledge can be erased or twisted.

Data Sovereignty & Consent

  • Data about Indigenous communities has often been collected without consent.
  • AI built on that data can continue forms of colonial extraction.

Different Ideas of “Intelligence”

  • Each culture has its own ways of knowing and deciding what counts as “smart.”
Imagine an AI trained only on stories from one city or culture.
Would it really understand your community? Your land? Your language?
If not, that missing knowledge is a risk, not just an inconvenience.
Why These Risks Make You Important

When you pull all the clues together, a pattern appears across Digital City:

  • Bias & unfairness show that AI can repeat past injustice.
  • Hallucinations & misinformation show that AI can sound smart while being wrong.
  • Creativity drain & learning loss show that AI can weaken your own thinking if you overuse it.
  • Labor & environmental costs show that AI has real-world impacts on people and the planet.
  • Cultural & Indigenous concerns show that AI can erase, misrepresent, or harm communities when their voices are missing.

But these findings don’t signal doom. They signal opportunity.

AI isn’t “the villain of the future.”
It’s a powerful tool that can help or harm, depending on how people use it. Understanding its risks doesn’t mean you walk away.
It means you walk in with your eyes open, ready to ask questions, notice patterns, and speak up when something feels wrong.
Step Into Challenge Zone

Sanny’s Choices

Proceed to Mission Report

Reflection Log

The Shadows We Found

Thinking through harm, fairness, and what humans must do to stay in control.

When an AI tool starts causing harm, confusion, or unfair treatment, what should humans do first? Who should be involved in the decision to pause, fix, or stop using it?

What kind of harm can AI cause if humans stop paying attention?

Where have you seen AI (or any tech tool) get something wrong in your own experience?

Whose voices do you think get left out of AI training data?

When an AI system makes a mistake for one person, who else might be affected?

For example: If search results show stereotypes, how does that affect people who aren’t even using the computer?

Next Mission