Activity

Understanding AI Bias

Grades 6-8, Grades 9-12
Subjects: Computational Thinking, Technology

Overview

This lesson plan explores AI bias: when an AI tool makes a wrong or unfair decision because it learned from data that was inaccurate or incomplete. AI stands for artificial intelligence, a technology that can do tasks that normally require human intelligence, such as recognizing faces, understanding speech, or playing games. Data is the information given to the AI to help it learn how to do these tasks. Sometimes that data contains biases: unfair or incorrect views or judgments about people, places, or things. For example, the data may leave out some groups of people, or it may contain stereotypes or prejudices about them. A biased dataset can make the AI tool biased too, affecting how it treats and interacts with people. In this lesson, you will learn how AI bias happens and how it can affect your life. You will also think about ways to reduce AI bias and make AI more fair and reliable.

NB Curricular Connections

Technology 6-8

  • Strand: Information Technology Skills – Big Idea: Computational Practice

Technology 9

  • Strand: Information Technology Skills – Big Idea: Computational Practice

Computer Science 110

  • Strand: Computational Thinking – Big Idea: Decomposition; Pattern Recognition; Abstraction; Algorithms
  • Strand: Coding – Big Idea: Planning and Documentation; Software; Data

What You’ll Need

Instructions

1. Say: When computer scientists create AI, they use two different types of data: training data and testing data (Slide 4).
  • Training data is the information given to an AI to help it learn how to do specific tasks (Slide 5).
  • Testing data is the information used to check whether the AI that was created is reliable and accurate (Slide 6).
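For teachers who want a concrete illustration, the training/testing distinction can be sketched in a few lines of Python. This is a hypothetical toy model, not how real AI classifiers work: it simply memorizes a color-to-fruit lookup from the training data, then measures accuracy on held-out testing data. All fruit names and colors here are made up for illustration.

```python
# Toy sketch of training vs. testing data (illustrative only).
# "Training" here just memorizes a color -> fruit lookup;
# real AI models learn far more general patterns.

training_data = [
    ("red", "apple"),
    ("yellow", "banana"),
    ("orange", "orange"),
]

# Testing data is held back from training and used to check
# whether the model is reliable and accurate.
testing_data = [
    ("yellow", "banana"),
    ("red", "apple"),
    ("green", "lime"),  # a color the model never trained on
]

def train(examples):
    """Build a simple color -> fruit lookup from the training data."""
    return {color: fruit for color, fruit in examples}

def predict(model, color):
    """Guess a fruit for a color; 'unknown' if the color was never trained."""
    return model.get(color, "unknown")

model = train(training_data)
correct = sum(predict(model, color) == fruit for color, fruit in testing_data)
print(f"accuracy on testing data: {correct}/{len(testing_data)}")  # 2/3
```

The miss on "lime" previews the lesson's main point: the model can only be as good as the examples it was trained on.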
2. Say: Imagine we are computer scientists and we’re in the process of creating an AI tool. The purpose of the tool we’re building is to identify different types of fruits. We have some training data to help us get started (Slide 7).
3. Ask: Based on these examples of training data, what types of fruit might our AI be able to identify? (Slide 8)
4. Show Slide 9 and explain that the images here show examples of the testing data used to check if the AI is working properly. The labels under each image are what the AI thinks each fruit is called.

Ask: Do you notice any mistakes? Why do you think the AI is making these mistakes? (Slide 10)

5. Explain that the mistakes the AI made are an example of AI bias, which is when an AI tool makes a decision that is wrong or problematic because it learned from training data that didn’t treat all people, places, and things accurately (Slide 11).

Show Slide 12 and say: In the training data, apples were the only example of a red fruit. The testing data shows that the AI learned to identify anything red as an apple. In other words, the AI we created has a bias toward thinking that every red fruit is an apple.
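The red-fruit bias described on Slide 12 can also be sketched in a few lines of Python. This hypothetical toy model classifies by color alone, and because an apple is the only red fruit in its training data, it labels every red fruit in the testing data "apple". The fruits and colors are illustrative assumptions, not the lesson's actual slide data.

```python
# Sketch of the slide's AI bias (illustrative toy model):
# apples are the only red example in the training data, so the
# model learns the biased rule "red means apple".

training_data = [
    ("red", "apple"),  # the only red fruit the model ever sees
    ("yellow", "banana"),
    ("orange", "orange"),
]
model = {color: fruit for color, fruit in training_data}

# Red fruits the model was never trained on:
for color, actual_fruit in [("red", "strawberry"), ("red", "cherry")]:
    guess = model.get(color, "unknown")
    # Both the strawberry and the cherry come back labeled "apple".
    print(f"actual: {actual_fruit:11} model's guess: {guess}")
```

The mistake is not in the prediction code: it is in the incomplete training data, which is exactly the point of the lesson.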

6. Ask: What are some ways we could reduce the AI bias of this fruit detector? (Slide 13)

Invite students to share out, and then review the suggestions on Slide 14.

7. Say: While it’s almost impossible to completely eliminate AI bias from a tool, we can do our best to reduce it by coming up with as diverse and complete a set of training data as possible (Slide 15).
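One fix students often suggest, a more diverse and complete set of training data, can be sketched the same toy way. In this hypothetical example, adding more red fruits plus a second feature (shape) means color alone no longer decides the label. The feature names are made up for illustration.

```python
# Sketch of reducing bias with more diverse training data
# (illustrative only). Adding more red fruits and a second
# feature, shape, means "red" alone no longer implies "apple".

diverse_training_data = [
    (("red", "round"), "apple"),
    (("red", "heart-shaped"), "strawberry"),
    (("red", "small and round"), "cherry"),
    (("yellow", "long"), "banana"),
]

model = {features: fruit for features, fruit in diverse_training_data}

# With shape included, red fruits are no longer all "apple":
print(model[("red", "heart-shaped")])  # strawberry
print(model[("red", "round")])         # apple
```

This mirrors the lesson's takeaway: bias is reduced, though not eliminated, by training on data that represents more of the things the tool will actually encounter.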
8. If time permits, read Slide 16 and have students work independently to come up with a list of image descriptors. Then, have them pair up to compare their lists and continue to add any additional image descriptors.

Review the descriptors on Slide 17 and continue to add to the list based on any other ideas the students have.

9. Say: Remember that behind every AI tool are humans making decisions on what training data the tool will use. Understanding how AI bias occurs can help us think critically about its potential impacts (Slide 18).


Reflection Activity

Please see the attached PDF for several choices on how you and your learners can reflect upon today’s activity.

Acknowledgements

  1. Common Sense Media: https://www.commonsense.org/education/digital-citizenship/lesson/understanding-ai-bias