The journal Science recently published a paper exploring how and why fake news spreads. You can access it here, or you can listen to this episode of Chips with Everything (a Guardian podcast that I recommended previously).
The paper is authored by MIT researchers Soroush Vosoughi, Deb Roy and Sinan Aral. The researchers collected around 126,000 rumour stories on Twitter that had gone on to cascade (i.e., be propagated through retweets by other users). They found that false stories propagated much more broadly and much faster than true stories. This was true of all false stories, but particularly so in the case of false news about politics:
“False political news travelled deeper and more broadly, reached more people, and was more viral than any other category of false information. False political news also diffused deeper more quickly and reached more than 20,000 people nearly three times faster than all other types of false news reached 10,000 people”
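Terms like "deeper" and "more broadly" refer to the shape of the retweet cascade: its size (users reached), depth (length of the longest retweet chain), and maximum breadth (the widest level of the cascade tree). As a rough illustration of those metrics (this is not the authors' code; the function name and the toy edge list are made up), a cascade can be modelled as a tree and measured with a breadth-first traversal:

```python
from collections import defaultdict, deque

def cascade_stats(edges, root):
    """Compute the size, depth, and maximum breadth of a retweet
    cascade, modelled as a tree of (source, retweeter) edges."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    size, depth = 0, 0
    per_level = defaultdict(int)      # number of users at each depth
    queue = deque([(root, 0)])        # BFS from the original tweet
    while queue:
        node, d = queue.popleft()
        size += 1
        depth = max(depth, d)
        per_level[d] += 1
        for c in children[node]:
            queue.append((c, d + 1))
    return size, depth, max(per_level.values())

# Hypothetical cascade: the original tweet "A" is retweeted by B and C,
# and C is in turn retweeted by D and E.
edges = [("A", "B"), ("A", "C"), ("C", "D"), ("C", "E")]
print(cascade_stats(edges, "A"))  # (5, 2, 2)
```

On this toy cascade, five users are reached, the longest retweet chain is two hops, and no level of the tree is wider than two users. The paper's finding is that, on real Twitter data, false stories score higher on all of these measures than true ones.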

Don’t blame the bots for the scale of misinformation doing the rounds on social media, though. While it is true that bots contribute to spreading news on Twitter, according to this article they spread true and fake news at the same rate. This means that it is we, humans, who are to blame for making fake news go viral.
The researchers went on to explore the characteristics of fake vs. true news stories, to see whether something about the type of story could explain why fake stories spread so broadly and quickly.
Broadly speaking, they found that fake stories were novel, and inspired fear, disgust, and surprise. True stories were not as novel, and they invoked emotions of anticipation, sadness, joy, and trust. These characteristics of the message are very important, because we are attracted to novel stories. Novelty helps us keep up with a changing world and, possibly, stay ahead of others. Hence, it is not surprising that stories that come across as novel get our attention. Moreover, sharing novel information may make us seem more interesting, knowledgeable or valuable to others, which encourages the spreading of those stories. In turn, active emotions like fear or disgust provoke a stronger (re)action in us than passive ones like sadness. There is ample evidence that people and organisations intent on creating social upheaval tap into these emotions. For instance, Channel 4 filmed Mark Turnbull of Cambridge Analytica (the company involved in targeted campaigns for Trump and Brexit, among many others) saying to a potential client:
“The two fundamental human drivers when it comes to taking information onboard effectively are hopes and fears and many of those are unspoken and even unconscious. You didn’t know that was a fear until you saw something that just evoked that reaction from you. And our job is to get, is to drop the bucket further down the well than anybody else, to understand what are those really deep-seated underlying fears, concerns. It’s no good fighting an election campaign on the facts because actually it’s all about emotion.”
You can watch Channel 4’s story below. The quote above comes at around the 7-minute mark, but I recommend watching the whole excerpt.
In summary, while digital technology may help spread fake news, it looks like we are the real culprits here, due to our need for novelty and our visceral reactions to those stories’ headlines and content.
That is a very interesting blog post with some useful information. I agree with the conclusion that it comes down to how humans react to fake news. I think that the fact that this fake news ‘inspired fear, disgust, and surprise’ has to be taken in a cultural context. There are examples where experiments carried out in the West have been replicated in the East and, although the main findings tend to agree with the Western experiment, the details are different. So, if the experiment were repeated on the Chinese equivalent, Weibo, I am sure that the conclusion would be the same; however, the emotions ‘inspired’ may be different. We may never know the true validity of my observation, because the Chinese employ many people to write ‘politically correct’ posts so that the State can control what people read. Taking this bias into account will probably be too difficult in any analysis of fake news posts on Weibo.
Is it a case of the emotions being different? Or is it about different topics / images being used to try to instil the same emotions (i.e., fear, etc.)?