Social media content is pushed at us by algorithms trying to keep us engaged with the platform. And, because sensationalist content tends to hold our attention better than non-sensationalist content, our feeds often get polluted by disinformation.
Disinformation is a serious problem for society, leading many to look for ways of increasing our alertness to fake news and, thus, our ability to detect and resist it.
One approach that is being considered by some policymakers is to require social media platforms to disclose when an article has been recommended by AI. The rationale behind this idea is that making it explicit that a recommendation has been automated (as opposed to curated by a human) would encourage us to be more critical about the content we are consuming and, thus, help us detect disinformation.
Hanzhuo (Vivian) Ma, Wei (Wayne) Huang and Alan R. Dennis have been testing this idea. Among other things, the research team conducted an experiment in which each participant was shown 8 news articles presented as if on a Facebook feed: 4 carrying a notice that they had been recommended by AI, and 4 without that notice. Moreover, half of the articles were true, and the other half were fake news.
The team ran the same test with recommendations attributed to an expert and with recommendations attributed to a friend.

As expected, the team found that the expert recommendation increased the believability of news articles, more so for true stories than for fake ones. The effect increased with the perceived ability of the expert.
The AI recommendation, in turn, significantly reduced the believability of true news articles. However, and unfortunately as far as the goal of this idea is concerned, the AI recommendation had no effect on the believability of fake ones.
Moreover, the authors found that while belief in an article's accuracy generally increases the intention to share it, this was not the case when the article was recommended by an AI. In that case, the leading factor in the decision to share was alignment with prior beliefs, a tendency scientists call confirmation bias.
The authors conclude that:
“(T)he net effect of labe(l)ling stories as recommended by AI has no effect on fake news, harms belief in true news, and makes the decision to share articles depend more on whether the article aligns with prior political opinions than considerations about whether the article is true or false. These are not the effects that policymakers hope…” (p. 617)
These and other results are reported in the paper “Unintended Consequences of Disclosing Recommendations by Artificial Intelligence versus Humans on True and Fake News Believability and Engagement”, published in the Journal of Management Information Systems.
This is a disappointing but important result, particularly in an age when generative AI makes it so cost-effective to produce and disseminate disinformation.
Back to the drawing board, then.
