MISSION 6
You’ve seen where AI lives in Digital City, how it learns from patterns, how it can help, and how it can go wrong in the Glitch Zone.
Now you step into a new chamber:
The Ethics Arena
also called the Decision Dome.
The AI systems hum quietly, waiting for instructions, but they don’t know what “right” or “fair” means.
AI can make predictions.
But it cannot choose values.
That’s your job.
This mission is all about ethics and human responsibility:
Today’s big idea:
AI doesn’t make moral decisions. People do.

Field Notes for Earth Command
Who should decide? What’s fair? Who is responsible?
Think of ethics as the “What’s the right thing to do?” part of technology.
“Right,” “fair,” and “responsible” are human concepts.
AI must always stay under human oversight.
If AI helps make a decision, humans must check that decision.
Ethics = humans thinking carefully about impact.
AI systems learn from the world.
But the world isn’t always fair.
That means AI can accidentally copy patterns that were already unfair.
This is why fairness is something humans must build:
AI won’t fix itself.
If humans don’t question and correct bias, it keeps spreading.
You deserve to know when AI is being used, what it's doing, and why.
Ethical AI use includes transparency: knowing when AI is in use and how it reached its answer.
If an AI system is hidden, confusing, or impossible to explain, be cautious.
If you can’t explain how a decision was made, don’t give it all your trust.
AI cannot exercise judgment, build relationships, or show care.
Those are things only humans can do.
Humans hold what AI cannot replace:
AI can support learning, creativity, or communication, but it can’t replace human judgment, the relationships we build, or the care we show one another.
If AI can assist but not understand, then every ethical scenario leads back to one key decision:
Should AI decide this... or should a human?
Across the resources you gather, responsibility keeps showing up:
Because AI will not question itself, fix its own mistakes, or take responsibility for its choices.
That’s why you carry responsibility for when and how you use AI.
These choices protect your learning, your community, and your integrity.

Ethics is not one universal setting. It depends on culture, context, and community.
Ethical AI needs values shaped by the cultures, contexts, and communities it affects.
Such values highlight relational accountability:
thinking about how each choice affects people, land, and relationships.
This raises an important question:
Who gets to define “right,” “fair,” and “good” for AI?
Whose voices are missing from that table?
Here’s the observation note for this mission:
AI can help you think, but it can’t decide who you want to be.
That will always be your job.
Reflection Log
Decide which choices belong to AI and which must stay human.
Next Mission