MISSION 5
Last time, you stepped into the Creative Tech Studio and saw how AI can help humans imagine, create, and include more people... when it’s guided with care.
Now you’re crossing into a darker district of Digital City:
The Glitch Zone
AI tools that were supposed to help are glitching, misfiring, and starting to cause harm.
Every powerful tool has a shadow.
What happens when AI stops serving people, and people stop paying attention?
This mission’s big question:
How can we spot when AI is going wrong, and what can humans do differently to prevent harm?

The office should be a calm research space… but something’s gone wrong. Screens are glitching. Tools are misfiring. AI is acting on its own patterns.
Your mission: Scan the scene and find all 5 AI glitches.
Field Notes for Earth Command
You’ve seen what AI mistakes look like on the surface.
Now we’ll step inside The Glitch Zone to understand why they happen, and why your choices matter.
AI learns from patterns in data. If that data reflects unfairness in the world, AI can repeat and amplify those patterns.
Notes for your report:
Why?
AI doesn’t decide to be unfair.
It copies the patterns it’s given.
Warning:
When AI repeatedly gets things wrong for some people more than others, the problem is in the data and design, not in the people.
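The warning above can be made concrete with a tiny sketch. This is a hypothetical, made-up dataset (not from any real system): a toy model that "learns" by copying the most common past outcome for each group will faithfully repeat whatever unfairness is baked into its history.

```python
# A toy sketch with HYPOTHETICAL data: a pattern-copying model
# repeats the unfairness present in its training history.
from collections import Counter

# Invented historical decisions: group "A" was approved far more
# often than group "B" -- by biased humans, not by merit.
history = [("A", "approved")] * 9 + [("A", "denied")] * 1 \
        + [("B", "approved")] * 3 + [("B", "denied")] * 7

def most_common_outcome(group):
    """Predict by copying the most frequent past outcome for this group."""
    outcomes = Counter(out for g, out in history if g == group)
    return outcomes.most_common(1)[0][0]

print(most_common_outcome("A"))  # -> approved
print(most_common_outcome("B"))  # -> denied: the bias was copied, not chosen
```

The model never "decides" to be unfair; it simply mirrors its data, which is exactly why the fix lives in the data and the design.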
You’ve probably seen chatbots or generators write long, confident answers.
Large language models (LLMs) generate answers by predicting the most likely next word, not by checking if it’s true.
Because of that, they can confidently make up facts, sources, and quotes.
These made-up results are called hallucinations.
These aren’t just bugs. LLMs have been called “stochastic parrots”: systems that remix patterns without real understanding.
AI is trained to sound fluent, not to guarantee truth.
So if an AI answer sounds super confident, your job is to ask: “How do I know this is true?”
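Next-word prediction can be sketched in miniature. This toy example (with an invented training sentence, vastly simpler than a real LLM) picks whichever word most often followed the current one, with no step anywhere that checks whether the result is true.

```python
# A minimal sketch of next-word prediction: choose the word that most
# often followed the current word in the training text. No fact-checking.
from collections import Counter, defaultdict

training_text = "the moon is made of rock the moon is made of cheese the moon is bright"
words = training_text.split()

# Count which word follows which word.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most LIKELY next word -- likely, not necessarily true."""
    return followers[word].most_common(1)[0][0]

sentence = ["the"]
for _ in range(4):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # -> the moon is made of
```

The output sounds fluent because fluency is the only thing being optimized; truth never enters the loop.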
AI can now create realistic images, video, voices, and text.
Used well, these tools can support creativity.
Used badly, they can spread misinformation and deepfakes.
Used carelessly, they can reuse someone’s ideas without credit.
Important detail to highlight in your report:
The more realistic AI-generated media becomes, the more important media literacy and critical thinking are.
It’s important to ask questions like: Who made this? Is it real? Why am I seeing it?
AI didn’t invent lying, but it gave lying powerful new tools.
That’s why your skills in spotting tricks matter more than ever.

Even when AI isn’t “attacking” anyone, it can quietly weaken how we learn.
When AI starts doing all the writing, solving, and explaining, your own skills can quietly fade.
AI also tends to generate average or “safe” responses, because it copies common patterns.
Incoming report from Earth Headquarters:
Think of AI like a powerful calculator for ideas.
Great when you already understand the problem.
Dangerous if you let it do all the thinking for you.
AI isn’t just floating in “the cloud.” It has real-world costs.
When you use AI, you’re not just using a “magic brain.”
You’re using electricity, water, and people’s time and ideas.
Being responsible means remembering those invisible costs.
AI does not affect everyone equally.
One risk to note in the report:
Imagine an AI trained only on stories from one city or culture.
Would it really understand your community? Your land? Your language?
If not, that missing knowledge is a risk, not just an inconvenience.
When you pull all the clues together, a pattern appears across Digital City.
But these findings don't signal doom. They signal opportunity.
AI isn’t “the villain of the future.”
It’s a powerful tool that can help or harm, depending on how people use it. Understanding its risks doesn’t mean you walk away.
It means you walk in with your eyes open, ready to ask questions, notice patterns, and speak up when something feels wrong.
Reflection Log
Thinking through harm, fairness, and what humans must do to stay in control.
Next Mission