
2026 Poster Session A

A59 - How Large Language Models Handle Gender Bias in Adjectives


Mentor: So Young Lee, Ph.D.

Large language models (LLMs) generate human-like descriptions but may also reflect underlying social biases in language.
Adjectives are a useful lens because they encode both sentiment (positive/negative) and gender associations (masculine, feminine, neutral).
Comparing GPT’s adjective choices to human judgments provides a way to evaluate how closely model outputs align with human interpretations.
This study focuses on alignment in sentiment and gender coding.
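The comparison the abstract describes can be sketched as a simple agreement analysis between model and human codings of the same adjectives. The sketch below is illustrative only: the labels, data, and metrics (percent agreement and Cohen's kappa) are assumptions, not the study's actual method or results.

```python
# Hypothetical illustration: agreement between model and human adjective
# codings. All labels and data here are invented for the sketch.
from collections import Counter

def percent_agreement(model, human):
    """Share of adjectives where model and human codings match."""
    return sum(m == h for m, h in zip(model, human)) / len(human)

def cohens_kappa(model, human):
    """Chance-corrected agreement between two coders."""
    n = len(human)
    po = percent_agreement(model, human)
    mc, hc = Counter(model), Counter(human)
    # Expected agreement if both coders labeled independently at their
    # observed label frequencies.
    pe = sum(mc[lab] * hc[lab] for lab in set(model) | set(human)) / (n * n)
    return (po - pe) / (1 - pe)

# Invented gender codings for six adjectives (masc/fem/neut).
model_codes = ["masc", "fem", "neut", "fem", "masc", "neut"]
human_codes = ["masc", "fem", "fem", "fem", "masc", "neut"]

print(round(percent_agreement(model_codes, human_codes), 3))  # 0.833
print(round(cohens_kappa(model_codes, human_codes), 3))       # 0.75
```

The same two functions could be run separately on sentiment codings (positive/negative) and gender codings to measure alignment on each dimension.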
