ChatGPT 3 validates misinformation, research finds

Large language models validate misinformation, research finds | Waterloo News.
New research into large language models shows that they repeat conspiracy theories, harmful stereotypes, and other forms of misinformation. In a recent study, researchers at the University of Waterloo systematically tested an early version of ChatGPT’s understanding of statements in six categories: facts, conspiracies, controversies, misconceptions, stereotypes, and fiction.

Read in full here:

This thread was posted by one of our members via one of our news source trackers.