Event Information
Welcome and Icebreaker Activity (5 minutes)
Content: Introduce the session as an interactive, hands-on exploration of AI risks where participants will act as students. Set the stage by highlighting why understanding AI’s flaws is crucial for digital literacy.
Engagement: Start with a quick icebreaker question on Mentimeter: “If you could ask AI one question, what would it be?” Share a few humorous or insightful responses.
Hook Activity: AI Gone Wrong! (10 minutes)
Content: Present a series of entertaining yet thought-provoking examples of AI failures, including biased outputs, funny AI-generated hallucinations, and shocking deepfakes.
Engagement: Use a game-like format in which participants vote on whether each AI output is accurate or misleading, then discuss with a partner why these outputs could be problematic, encouraging them to think critically about AI's impact on their lives.
Explore Bias with AI: Interactive Bias Detection Activity (15 minutes)
Content: Introduce a group activity where participants use an AI tool such as ChatGPT, Claude, or Gemini to generate a short story or description based on a provided set of prompts (e.g., “Describe a computer engineer’s day” or “Write a short story about a nurse”).
Engagement: Participants work in pairs or small groups to analyze the AI’s response, identifying any biases or stereotypes. They then input the story into another AI tool to see how it critiques and identifies the biases. Groups compare their findings with the AI’s analysis, reflecting on what each side missed and discussing how biases can sneak into AI outputs. Use guiding questions to prompt discussion: “Did the AI notice anything you didn’t?” or “How would these biases affect different groups of people?”
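For facilitators comfortable with a little scripting, the generate-then-critique workflow above can also be prepared in advance, for example to pre-generate sample stories when live tool access is limited. The following is a minimal sketch, assuming the OpenAI Python client with an API key set in the environment and a placeholder model name; the same two-call pattern applies to Claude or Gemini through their own APIs.

```python
# Minimal sketch of the generate-then-critique workflow, assuming the
# OpenAI Python client (pip install openai) and OPENAI_API_KEY set in
# the environment. Claude or Gemini APIs follow the same two-call pattern.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; substitute any chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: generate a story from one of the activity prompts.
story = ask("Write a short story about a nurse.")

# Step 2: hand the story to a second call acting as the bias critic.
critique = ask(
    "Identify any biases or stereotypes (e.g., gender, race, age, or role "
    "assumptions) in the following story, quoting the relevant passages:\n\n"
    + story
)

print(story)
print("--- AI critique ---")
print(critique)
```

Using a second, separately prompted call as the critic mirrors the point of the activity: an AI reviewing another AI still reflects the patterns in its training data, so the group’s own analysis remains the benchmark for what the tools missed.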
Hands-On Hallucinations: Spotting Fake Information (10 minutes)
Content: Demonstrate how AI can produce hallucinations (convincing but false information). Provide participants with a short AI-generated story or fact sheet containing deliberate errors or misleading details.
Engagement: In small groups or pairs, participants act as detectives, using critical thinking and quick internet searches to verify the information and highlight what is real and what is not. Groups present their findings, explaining how they identified the falsehoods. Discuss the importance of verification and of not blindly trusting AI outputs.
Deepfake Challenge: Can You Spot the Fake? (10 minutes)
Content: Show a mix of real and deepfake videos or images. Challenge participants to spot which ones are genuine and which are manipulated.
Engagement: Participants vote on each item using their phones. The presenter then reveals the answers and discusses the implications. Lead a brief discussion on the dangers of deepfakes and how they can distort reality. Emphasize the importance of teaching students to critically evaluate digital media.
Reflection and Call to Action (5 minutes)
Content: Conclude the session with a reflective prompt: “How would you teach students to spot AI’s flaws? What’s one strategy you’ll take back to your classroom?”
Engagement: Use an interactive board like Padlet where participants can post their reflections and read others’ ideas.
Resources
Algorithms of Oppression: How Search Engines Reinforce Racism - Safiya Umoja Noble
Artificial Unintelligence: How Computers Misunderstand the World - Meredith Broussard
The Algorithmic Justice League (https://www.ajl.org/)
Timnit Gebru (AI ethics researcher; founder of the Distributed AI Research Institute)
Joy Buolamwini (founder of the Algorithmic Justice League; co-author of the Gender Shades study)