I have been thinking a lot about information quality lately. It is at the heart of a research initiative I am currently working on, which I presented earlier this month at the Academy of Marketing conference in Southampton – I promise to write a blog post about it and upload the slides soon.
It is a critical issue in an age where anyone can become a publisher, because there are so few barriers to posting information online. For consumers of information, this democratisation of publishing is good because it gives us access to a broad range of sources and angles. But it can also be disastrous when the information we read and act on is wrong.
What goes wrong?
Sometimes mistakes are made at the source, due to incompetence or to malice. For instance, the Twitter account @ShellIsPrepared is a parody of the oil company Shell, though some users seem to think it is an official account. For an interesting analysis of this and other misleading campaigns and accounts, read this post by Scott Stratten over at UnMarketing.
Other times, the information is correct at the source, but it is taken out of context or interpreted wrongly, as I discussed in this post.
Traditionally, we have looked at the editorial and peer review process as a validation process. We implicitly trust intermediaries (like newspapers or academic journals) to verify the quality of the source or the data being presented. That’s why we pay for them, why we aim to be published there, and why we cite these sources over blogs or wikis. But what happens when you cannot trust these intermediaries, either?
When the gatekeepers can’t separate the wheat from the chaff
Two stories have recently caught my attention because they exposed quality control issues among respectable sources.
The first one is about speed. You see, quality control takes time – and speed seems to be the key word in the current technological environment. Unfortunately, there are alarming signs that, in the quest for speed, some gatekeepers are letting their guard down. Forbes had a fascinating story about reputable news companies like the New York Times or Reuters quoting sources without checking the facts or the sources’ credentials. The article reveals how reporters use the Internet to cut corners – i.e., to quickly (and, I guess, inexpensively) find information that could be obtained face to face or through a series of checks, but which would take much longer to get. The article blames this practice on a ‘quest for traffic and eyeballs’ and considers the implications it may have for ‘the trust a reader has in a publication’.
The second story is about process. Two highly regarded academic journals retracted papers because of self-plagiarism, errors and methodological failures. You can read about it here and here. Publishing in such journals is a very slow process, because other experts review the paper to try and identify quality problems in the approach or the analysis. Most papers get rejected, either on the first round or after a series of ‘revise and resubmit’ iterations. The result of this process, though, is that when a paper finally gets published, it is deemed to be trustworthy… because readers trust the source.
These two stories show clear limits to the ability, or the motivation, of information gatekeepers to filter out poor-quality information. So, if we cannot trust the gatekeepers, what other clues can we use to assess the quality of online information?
How do you decide whether or not to trust something you see online?