
LLMs as Correctors of Social Bias: Evaluating Varying Recommendations in Educational Guidance

HBGCC - Posters, Table 26

Lecture presentation
Poster
ISTELive Content

Session description

After this session, attendees will have a better understanding of the current state of biases in AI, specifically in college and career recommendations. Drawing on new research, the session highlights AI’s progress toward fairness and its potential to advance equity—especially as traditional DEI policies face growing legal and political challenges.

Outline

Can AI Level the Playing Field?

Introduction & Background:
I'll start by introducing my research question: Can AI help us build a more equitable society? I’ll briefly explain what large language models (LLMs) are, share what past research has shown about their potential to reinforce bias, and highlight why it's important to examine whether today’s AI systems are advancing or undermining fairness.

Methodology:
I’ll walk through how I designed two experiments using 20 fictional student profiles with identical academic transcripts, varying only race and gender, and how I analyzed over 300 responses from four major LLMs to assess how their recommendations shifted with those socio-demographic attributes.

Key Results:
I’ll discuss my key findings, which show that newer models exhibited a marked shift toward fairness and, in some cases, signs of overcorrection.

a. Newer LLMs showed statistically significant reductions in gender bias and improved output consistency compared to older models.
b. African-American female students were, on average, recommended higher-quality community colleges than their white peers, suggesting a possible overcorrection, intentional or otherwise, through interventions aimed at addressing historical inequalities.
c. Gender bias in career recommendations (measured via a weighted-salary score; see the sketch after this list) was significant in older models such as ChatGPT-3.5 but was largely eliminated in newer LLMs.
d. Across both experiments, newer models appeared fairer and more consistent.

Conclusion:
In the final part of my presentation, I’ll reflect on how my results show signs of progress toward fairness in AI, and what that could mean for using AI to promote equity, especially in education policy. I’ll also highlight the importance of continued experimentation as models evolve, and discuss the ethical implications of bias correction—how we guide AI to be fair without introducing new problems.

Q&A:
Finally, I’ll open the floor for discussion and questions from the audience.


Supporting research

Guo, Y., Guo, M., Su, J., Yang, Z., Zhu, M., Li, H., Qiu, M., & Liu, S. S. (2024). Bias in large language models: Origin, evaluation, and mitigation. arXiv. https://arxiv.org/abs/2411.10915

Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Portfolio.


Presenters

Student
Saint Stephen's Episcopal School

Session specifications

Topic:

Artificial Intelligence

Audience:

Counselor, School Level Leadership, District Level Leadership

Attendee devices:

Devices not needed

ISTE Standards:

For Education Leaders:
Equity and Citizenship Advocate
  • Model the safe, ethical, and legal use of technology and the critical examination of digital content.

Additional detail:

Student presentation