Using Machine Learning as a Way to Think About Cognitive Bias
Explore and create: Creation lab
We all carry cognitive biases and assumptions about the world around us. Explore a machine-learning tool that labels and classifies visual data. By thinking about how bias can affect machine-learning tools, we can in turn reflect on how our own biases shape our thinking.
|Audience:||Teachers, Curriculum/district specialists, Technology coordinators/facilitators|
|Attendee devices:||Devices required|
|Attendee device specification:||Laptop: Chromebook, Mac, PC|
|Participant accounts, software and other materials:||None|
|Subject area:||Computer science, STEM/STEAM|
|ISTE Standards:||For Educators:|
|Disclosure:||The submitter of this session has been supported by a company whose product is being included in the session|
|Influencer Disclosure:||This session includes a presenter that indicated a “material connection” to a brand that includes a personal, family or employment relationship, or a financial relationship. See individual speaker menu for disclosure information.|
Participants will be engaged in a full project-based learning module to explore concepts in Machine Learning. As learners, participants will use several image recognition tools and tinker with them, trying to deceive Machine Learning algorithms with clever images. While exploring the image recognition tools and learning about algorithmic bias, participants will begin to reflect on their own cognitive labels, classifications and biases. As teachers, participants will learn to model best practices in exploratory thinking, tinkering, ethical design principles and exposing cognitive biases in a safe and engaging way.
Activity 1: Participants will start with a physical card sorting activity, sorting random objects into labels and classes the way a Machine Learning algorithm would. Then participants will be given “joker” cards that do not seem to fit any label or class, requiring them to rethink their labels and classes. (20 Min)
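The card sorting step mirrors how a simple classifier assigns items to the nearest class, and how a "joker" card that sits far from every class breaks the scheme. A minimal sketch, assuming hypothetical two-number feature vectors (e.g. size and roundness) as stand-ins for the cards' attributes, using a toy nearest-centroid rule:

```python
import math

# Hypothetical training cards: each class is a list of (size, roundness) features
training = {
    "fruit": [(0.8, 0.9), (0.7, 0.8)],
    "tool":  [(0.3, 0.1), (0.4, 0.2)],
}

def centroid(points):
    """Average each feature across a class's cards."""
    return tuple(sum(vals) / len(points) for vals in zip(*points))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(card, threshold=0.5):
    """Pick the nearest class centroid; flag cards far from every class as jokers."""
    dists = {label: math.dist(card, c) for label, c in centroids.items()}
    label, d = min(dists.items(), key=lambda kv: kv[1])
    return label if d <= threshold else "unclassified (joker?)"

print(classify((0.75, 0.85)))  # lands squarely in the fruit class
print(classify((0.1, 0.95)))   # far from both centroids: the scheme must be rethought
```

The threshold is the code's version of the moment in the activity when a sorter admits a card fits nowhere, instead of forcing it into the least-bad pile.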
Activity 2: Participants will then explore a simple image recognition tool that uses a webcam to classify everyday objects. Participants will discuss the accuracy of this tool and learn a bit more about how Machine Learning works by watching a short video. Next, participants will train a Machine Learning model using the same images from the previous card sorting activity and test the model on selected images. Just like in the card sorting activity, participants will try to deceive the algorithm with “joker images”. (20 Min)
Activity 3: Participants will explore a series of cognitive bias cards and discuss with their peers how these biases affect our thinking. This will extend to learning how biases affect Machine Learning algorithms by watching a short video from MIT researchers on how some facial recognition algorithms have difficulty detecting the facial features of people with darker skin. Participants will then reflect on how their own visual recognition process may carry biases by exploring how the group labels certain images. (20 Min)
Activity 4: Taking on the perspective of teachers, participants will be given sample AI-in-education resources, including lessons from the MIT Media Lab and BSD Education. They will be tasked with rating each resource on its ethicality and promotion of diversity, then share and discuss their findings. (20 Min)
Activity 5: Participants will be given time to discuss, reflect, ask questions and share.
MIT Technology Review has documented AI facial recognition bias and the research that the Algorithmic Justice League has done to understand the cause. https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/
The MIT Media Lab has also produced an Ethics in AI curriculum for middle school students, which will be shared in the session. https://www.media.mit.edu/projects/ai-ethics-for-middle-school/overview/