Confronting Bias in Generative AI
This lesson plan was designed by Ronald E. Bulanda and Jennifer Roebuck Bulanda and is based on findings from their literature review on assessing bias in large language models.
Before using this assignment with students, instructors should review that literature review along with the suggested reading assigned to students in Step 1 below. We suggest three case study options for students to explore in groups. Instructors should review each of these and decide which are appropriate for their class. As additional articles on future iterations of generative AI programs become available, instructors may choose to replace or supplement these case studies with different articles.
Student Learning Outcomes
After completing this assignment, students will be able to:
- Explain why text generated by AI might be biased.
- Discuss different forms of bias that might be present in AI-generated text.
- Evaluate at least one consequence of bias in AI, and assess how that consequence may be avoided or mitigated.
Step 1: Before class, have students read this short article on concerns of bias in AI.
Step 2: During class (or outside of class in a pre-recorded mini-lecture), the instructor should provide 1) an overview of the reasons bias may exist in AI, and 2) the types of bias that may be present (e.g., racial, gender). See [insert link to our paired literature review] for an overview of both issues.
Step 3: Have students work in small teams of 5-6 to examine one of the following case studies (instructors can choose to assign just one, or offer students a choice):
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale
- What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI
- Large language models are biased. Can logic help save them?
Step 4: While still in their small groups, have students open ChatGPT and Bard. Then, have them type the prompt, “What kinds of bias might appear in your output?” into both programs. Alternatively, or in addition, you may have them type the prompt “Why might your output include [SPECIFY] bias?” (e.g., gender, age). Have students extend this dialogue with the systems through follow-up questions, including requests for examples of biased output they could produce.
Step 5: Following group work, engage the full class in a discussion focused on applying what students have learned from their reading and the instructor’s overview to what they evaluated in their groups. Discussion questions may include:
- What are some of the reasons that the bias documented in the case study may exist?
- What type(s) of bias resulted?
- What are some of the potential individual and/or societal consequences of this bias?
- To what degree do you think the general public is aware of potential bias in AI? To what degree were you aware of it?
- What are some of the ways we can potentially eliminate or ameliorate bias in AI? Whose responsibility is it to do so? What recommendations would you make?
Step 6: End the lesson by having students complete a low-stakes writing assignment.
- Either during or outside of class, have students write a 2-3 paragraph reflection that distills the reading and group discussion into their own brief assessment of the causes and consequences of bias in AI output. They may choose to consider the implications of these biases at the micro and macro levels. They may also choose to address whether and how this exercise may affect their use and interpretation of AI systems inside and outside of the classroom.