CHARLOTTESVILLE, Va. - November 16, 2022 - (Newswire.com)
Researchers from the University of Virginia Darden School of Business, Boston University, and the University of Regina conducted a series of studies with over 4,000 participants showing that people tend to believe news written by AI less than news written by people. The research is described in a paper titled "News from Generative Artificial Intelligence Is Believed Less," published in the proceedings of FAccT '22, the 2022 ACM Conference on Fairness, Accountability, and Transparency.
One of the most promising applications of Artificial Intelligence (AI) is the ability to produce textual, visual, and auditory content with little to no human intervention. AI is now capable of producing content, whether a fictional story, a poem, or even programming code, that is virtually indistinguishable from text written by a person.
This kind of generative AI might be more pervasive than you think. Even the news one reads every day might have been written by neural networks; most likely, a good proportion of it is. Today, AI algorithms such as The Washington Post's Heliograf, Bloomberg's Cyborg, and Reuters' Lynx Insight automatically report on financial markets, crimes, sports events, politics, and foreign affairs.
But how do people perceive AI-generated news? Chiara Longoni (Boston University, Questrom School of Business), Andrey Fradkin (Boston University, Questrom School of Business), Luca Cian (University of Virginia, Darden School of Business), and Gordon Pennycook (University of Regina, Hill/Levene Schools of Business) explored this question. Two large experiments with over 4,000 participants, representative of the U.S. in terms of age, gender, ethnicity, and geographic region, showed that people tend to believe news from AI reporters less than news from human reporters. That is, when news headlines were tagged as written by an AI reporter, people believed them less than when the same headlines were tagged as written by a human reporter. This effect held irrespective of whether the news was true or false: when items were tagged as written by AI rather than by a human, people were more likely to incorrectly rate them as inaccurate when they were actually true, and more likely to correctly rate them as inaccurate when they were indeed false. Participants also trusted AI reporters less than human reporters. In a nutshell, disclosing the use of AI led people to believe news items less, a negative effect explained by lower trust in AI reporters than in human reporters.
These results are yet more evidence that earning people's trust is a significant hurdle AI-based generative services must overcome to gain traction. A deeper understanding of public perception of AI-generated text will be critical as the use of AI algorithms to generate content grows.
Contact Information:
David Hendrick
Associate Director, Editorial and Media Relations • Communication and Marketing
[email protected]
Press Release Service by Newswire.com
Original Source: University of Virginia Darden School of Business, Boston University, and University of Regina Release Research: News From Artificial Intelligence Is Believed Less