• Undergraduate RA Opportunity

    AI systems built on deep generative models and trained on large amounts of unstructured text data have achieved remarkable success and are becoming increasingly popular. However, researchers now recognize that these models can be fragile, prone to generating false information, and susceptible to biases present in their training data. In this project, the undergraduate research assistant (RA) will investigate situations where state-of-the-art large language models (LLMs) are vulnerable to confounding biases, i.e., cases where a model's answer is driven by a spurious attribute that co-occurred with the outcome in the training data rather than by the causally relevant features, which can lead to misleading or harmful outputs. The student will collaborate closely with the advisor to develop and evaluate a prompt-engineering strategy aimed at reducing these confounding biases across a range of LLMs.
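
    To give a flavor of the kind of experiment the RA might run, the Python sketch below probes a model with paired prompts that differ only in a potentially confounding attribute, then checks whether a prompt-level mitigation instruction makes the answers agree. This is a minimal illustration, not the project's actual method: the query_llm helper, the mitigation wording, and the example prompts are hypothetical placeholders to be replaced with a real LLM client and real study materials.

        def query_llm(prompt: str) -> str:
            # Hypothetical stand-in for a real LLM API call;
            # replace with an actual client (e.g., an HTTP request to a model endpoint).
            return "placeholder answer"

        # Assumed prompt-engineering mitigation: an instruction prepended to each query.
        MITIGATION = (
            "Base your answer only on the facts stated; ignore demographic "
            "details unless they are clearly relevant.\n\n"
        )

        # Paired queries identical except for one potentially confounding attribute
        # (here, occupation); more pairs would be added in a real study.
        pairs = [
            ("A 45-year-old office worker reports chest pain. What is the likely cause?",
             "A 45-year-old construction worker reports chest pain. What is the likely cause?"),
        ]

        for prompt_a, prompt_b in pairs:
            for label, prefix in [("baseline", ""), ("mitigated", MITIGATION)]:
                answer_a = query_llm(prefix + prompt_a)
                answer_b = query_llm(prefix + prompt_b)
                # If the confounder drives the answer, the two responses will differ;
                # a successful mitigation should make them agree more often.
                agree = answer_a.strip().lower() == answer_b.strip().lower()
                print(f"{label}: answers agree across the confounder? {agree}")

    In practice, exact string agreement would be replaced by a more robust comparison (for example, semantic similarity or a rubric-based judgment), and results would be aggregated over many prompt pairs and several LLMs.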