UW News
Hayoung Jung
November 20, 2024
In the ‘Wild West’ of AI chatbots, subtle biases related to race and caste often go unchecked
University of Washington researchers developed a system for detecting subtle biases in AI models. They found that seven of the eight popular AI models they tested generated significant amounts of biased text in conversations about race and caste, with caste prompting the most bias. Open-source models fared far worse than the two proprietary ChatGPT models.