AI systems built on deep generative models and trained on large amounts of unstructured text data have achieved remarkable success and are becoming increasingly popular. However, researchers increasingly recognize that these models can be fragile, prone to generating false information, and susceptible to biases present in their training data. In this project, the undergraduate research assistant (RA) will investigate settings in which state-of-the-art large language models (LLMs) are vulnerable to confounding biases that can lead to misleading or harmful outputs. The student will work closely with the advisor to develop and evaluate a prompt-engineering strategy for mitigating these confounding biases across a range of LLMs.

This is a paid RA position for Spring 2026, with the potential for extension into Summer and Fall 2026. To receive full consideration, please submit your application materials to jzhan403@syr.edu by November 30, 2025. Applications submitted after this date will not receive a response.

Position Details

  • This is a paid RA position for Spring 2026.
  • Applicants must be undergraduate students enrolled at Syracuse University in their first, second, or third year of study.
  • We are looking for students with programming experience (e.g., Python or C) and a foundational understanding of probability theory and statistics.
  • Prior experience working with LLMs is preferred but not required.

Application Requirements

Please send the following materials to jzhan403@syr.edu by November 30, 2025, for full consideration:

  • A CV that includes a summary of your programming experience and links to any papers or coding projects you have authored.
  • Full transcripts (unofficial transcripts are acceptable). Transcripts must reflect the current term.