Site icon The Babbling Beaver

MIT researchers determined to teach large language models wokespeak

large language models wokespeak

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are determined to make sure that programming AIs like ChatGPT incorporate “logic” that rids them of heretical biases absorbed from training on real-world data. Misinformation that there are only two sexes, that racism is not systemic in every institution, or that chopping body parts off disturbed children is not proper gender affirming care must be expunged.

Instead, models will be imprinted with biases dedicated to the creation of an ideal world. Experts assure us that this can be done by optimizing “fairness.”

“Fairness was evaluated with something called ideal context association (iCAT) tests, where higher iCAT scores mean fewer stereotypes. The MIT language model had higher than 90 percent iCAT scores, while other strong language understanding models ranged between 40 to 80.”

“While we may still be far away from a neutral language model utopia, this research is ongoing in that pursuit.” Potential applications include rewriting children’s stories, correcting history books, and editing archived newspaper articles. With language constantly evolving to keep up with the times, newspeak social justice will never be done.

Story suggested by the MIT Daily News Office

Exit mobile version