According to an article published in the journal Science entitled “Science, New Media, and the Public” by University of Wisconsin-Madison researchers Dominique Brossard and Dietram A. Scheufele, blog comments and the autocomplete suggestions provided by search engines (such as Google Instant results) may produce a psychological bias that alters the way people interpret and make sense of scientific information on the Internet.
As Brossard and Scheufele note, non-traditional sources of information such as blogs have become a primary source of information about science, with around fifty percent of Americans relying on them instead of other online sources. Additionally, ninety percent of Internet users in the United States depend on search engines to discover this information.
Considering these numbers and the importance of scientific knowledge to society, an obligation may fall upon online publishers to ensure that their readers are presented with accurate information and are subject to as little bias as possible. It may therefore behoove publishers reporting on science to take the necessary actions to mitigate the bias their readers will face, whether by disabling comments, providing an alternative means of discourse, or altering other site features.
Sources of Bias: Search Engines, Social Media, & Blog Comments
Even though the Internet and blogs have enabled the democratization of information about science (and have been linked to positive attitudes toward it), these non-traditional media sources also have negative implications. The research discussed in the article indicates that blogs and similar forms of online media introduce biases that can affect both sentiment toward the issues they present and the interpretation of those issues.
Search Engine Bias, Search Suggestions:
Google’s autocomplete feature (called Google Instant) suggests search queries based on the popularity of those searches. As the user starts typing a word, the search engine provides keyword suggestions for completing the search. Users often modify their original search to match these suggestions, creating a feedback loop that reinforces the popularity of the suggested search terms.
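To make that feedback loop concrete, here is a minimal sketch of how a popularity-ranked autocomplete might behave. The data structure and ranking rule are invented for illustration; they are not Google’s actual algorithm, which is far more complex and not public.

```typescript
// Illustrative sketch of a popularity-ranked autocomplete loop.
// The ranking rule here is hypothetical, not Google's real algorithm.

const queryCounts = new Map<string, number>();

// Record a completed search, increasing that query's popularity.
function recordSearch(query: string): void {
  queryCounts.set(query, (queryCounts.get(query) ?? 0) + 1);
}

// Suggest the most popular stored queries that start with the typed prefix.
function suggest(prefix: string, limit = 5): string[] {
  return [...queryCounts.entries()]
    .filter(([q]) => q.startsWith(prefix))
    .sort((a, b) => b[1] - a[1])
    .map(([q]) => q)
    .slice(0, limit);
}

// The feedback loop: a user types a prefix, accepts a suggestion,
// and that acceptance makes the suggestion rank even higher next time.
recordSearch("global warming hoax");
const picked = suggest("global warming")[0];
if (picked) recordSearch(picked); // popularity reinforces itself
```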
In addition to the potential for bias created by this popularity-reinforcement loop, as the article explains, search suggestions pose another clear avenue for bias when people seek information about a particular topic:
“There are often clear discrepancies between what people search for online, which specific areas are suggested to them by search engines, and what people ultimately find. As a result, someone’s initial question about a scientific topic, the search results offered by a search engine, and the algorithms that a search provider uses to tailor retrieved content to a search may all be linked to a self-reinforcing informational spiral in which search queries and the resulting Web traffic drive algorithms and vice versa.”
The search suggestions in this case create a bias in which the searcher’s original intent differs significantly from what they actually end up searching for, and therefore from the information they read and learn about.
A search for “global warming” on Google illustrates the path to this bias well: one of the suggestions offered is “global warming hoax”. A person looking for information about global warming is primed with the idea that global warming might be a hoax simply by being shown that suggestion. Furthermore, the person may decide to follow through with the hoax search, and is then presented with convincing but false information about global warming created by conspiracy theorists rather than scientific facts. The bias created by the search suggestions has now been reinforced.
Search Engine Bias, Personalized Search:
Although not explicitly discussed by Brossard and Scheufele, personalized search technology represents another source of bias created by search engines. Personalization has been part of the Google algorithm since 2009 and part of Bing’s since 2011; it results in each individual user being served different search results based on criteria such as the person’s previous search history, social network activity, and physical geographic location.
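To illustrate how personalization can push two users toward different results for the same query, here is a toy scoring function. The signals, field names, and weights are assumptions made purely for the sake of the example; real engines use far richer, undisclosed models.

```typescript
// Toy model of personalized ranking. Signals and weights are invented
// for illustration; real engines use far richer, undisclosed models.

interface UserProfile {
  pastQueries: string[]; // previous search history
  friendLikes: string[]; // topics endorsed in the user's network
  location: string;      // coarse geographic region
}

interface Result {
  url: string;
  topic: string;
  region: string;
  baseRelevance: number; // query-document relevance, 0..1
}

function personalizedScore(r: Result, user: UserProfile): number {
  let score = r.baseRelevance;
  if (user.pastQueries.includes(r.topic)) score += 0.3; // history boost
  if (user.friendLikes.includes(r.topic)) score += 0.2; // social boost
  if (user.location === r.region) score += 0.1;         // locality boost
  return score;
}

// Two users issuing the same query can thus see different orderings,
// which is the seed of the "filter bubble" discussed below.
```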
Personalized search results create a predicament in which a person may be presented with webpages that, although more relevant to that person, do not represent the information (scientific or otherwise) that is most universally important or that exists outside his own pre-existing perspective. This concern is well expressed by Eli Pariser in his book The Filter Bubble (and in a related TED talk), where he makes the point that personalized search creates a bias that may ultimately leave people unaware of viewpoints and opinions outside their own personalized “filter bubble”. The filter bubble is not limited to search; it also exists within the world of social media.
Social Media Bias:
Social media can end up creating a relevancy bubble similar to the one created by personalized search. It is not uncommon for people to become aware of news, current events, and other information solely through social networking platforms. Information shared on social networks often comes from peers and like-minded individuals, resulting in viewpoints constrained to certain demographics.
Another potential source of bias involves social media in relation to blogs. Blog posts often carry visual cues such as social sharing badges that display the number of times an article has been tweeted and shared. These visual cues are indicators of popularity, but they often subconsciously affect a person’s perception of the writer’s authority and the quality of the content. This is because people are susceptible to a psychological phenomenon known as social proof, a type of conformity. In this case, the presence of social sharing numbers leads people to believe that an article has authority, with information that can be trusted, simply because others have deemed it so. The underlying reason is that we assume other people, represented by the social sharing numbers, possess knowledge that enables them to assess these qualities.
Blog Comment Bias:
Perhaps the most interesting finding presented in the article was the bias that blog comments create around science news. In a national experiment involving around 2,000 people (the study has yet to be published in full), participants were asked to read a neutral, balanced online news story about nanotechnology as an emerging science. The blog comments that followed the news story were manipulated so that participants were exposed to comments of either a civil or an uncivil nature (differing only in tone).
The findings of the study indicate that the “readers’ interpretation of potential risks associated with the technology described in the news article differed significantly depending only on the tone of the manipulated reader comments”. The version of the story with uncivil comments (which included name-calling and other rude behavior) resulted in an interpretation that amplified concern about nanotechnology’s potential risks, even though risk was not the main point of the news item.
This research highlights a legitimate dilemma about whether comments should be enabled on science-related content. Blog comments may lead readers’ sentiment and understanding of the information astray, but excluding them may also have negative results.
The famous British philosopher and economist John Stuart Mill wrote in On Liberty (1859) that a person “is capable of rectifying his mistakes, by discussion and experience. Not by experience alone. There must be discussion, to show how experience is to be interpreted,” and that “very few facts are able to tell their own story, without comments to bring out their meaning”. Mill makes a good point that can be applied to this dilemma, arguing in favor of blog comments: comments as a means of discourse may open the door to biases, but they can also help people better understand information, develop new ideas, and abate pre-existing misconceptions.
Ideas for Limiting Bias
The most obvious solution to the dilemma of blog comment bias is to disable the publication’s comment system, if only for certain posts where bias might be a greater problem. As previously noted, however, enabling discourse may be just as important as preventing bias. Below are some suggestions for facilitating discourse while reducing the aforementioned cognitive biases:
Separate Comments from Article Page
One option for reducing blog comment bias is to detach comments from the article. By placing comments on a page separate from the article, a publication enables discussion (and the positive benefits associated with it) without subjecting every reader to the biases comments create. Although this strategy would not entirely remove blog comment bias, it may reduce its occurrence, since the comments would not be immediately visible. A rough sketch of the idea follows.
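This sketch assumes a hypothetical article-footer element ID and a /comments URL pattern; any real site would substitute its own markup and routing.

```typescript
// Sketch: keep the article page comment-free and host the discussion
// on a separate page. The element ID and URL pattern are assumptions.

function linkToCommentsPage(articleSlug: string): void {
  const link = document.createElement("a");
  link.href = `/posts/${articleSlug}/comments`; // separate page, not inline
  link.textContent = "Join the discussion on a separate page";
  document.getElementById("article-footer")?.appendChild(link);
}
```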
It is worth mentioning that some more traditional online publications already separate comments from articles. The Wall Street Journal, for example, places the article and its comments in separate tabs. It would be beneficial if blogs and other non-traditional online publications adopted this strategy as well.
Hiding Social Sharing Numbers
A simple solution for removing some social media bias is to use social sharing buttons that do not display numbers. There are official versions of the Facebook Like/Share buttons, the Twitter Tweet button, and the Google Plus +1 button that do not indicate the number of shares. Social media may be a necessity for publishers today, but it does not have to come at the cost of bias; in fact, steering discussion toward sharing may even help minimize the bias created by blog comments.
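An even simpler count-free approach, sketched below, is to use plain share links rather than the official widgets, so no counter is ever rendered. The URL formats shown are the publicly documented share endpoints, though they may change over time.

```typescript
// Sketch: plain share links instead of official widgets, so no share
// counts (and thus no social-proof cue) are displayed anywhere.

function countFreeShareLinks(articleUrl: string, title: string): string[] {
  const u = encodeURIComponent(articleUrl);
  const t = encodeURIComponent(title);
  return [
    `https://twitter.com/intent/tweet?url=${u}&text=${t}`,
    `https://www.facebook.com/sharer/sharer.php?u=${u}`,
    `https://plus.google.com/share?url=${u}`,
  ];
}
```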
The only drawback to this solution is that it eliminates social proof, which is genuinely beneficial for promoting a blog. A blogger must decide which is more important: reducing bias or promoting the website.
Generating Discussion through another Medium
Discussion often happens outside a blogger’s CMS, on social networks, forums, or other blogs. One option is to disable a blog’s comment system and instead encourage conversation on social media.
This solution would be one of the easiest to deploy. All you would need to do is disable your comment system and place a note at the bottom of each post explaining why comments are disabled and inviting readers to discuss the post on social media. For Twitter, it may be wise to provide a hashtag around which to center the conversation. Although this may not generate as much discussion as keeping the comment section active, it may benefit a blog’s social media presence, since more sharing would take place.
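A minimal sketch of this approach, assuming a hypothetical comments element ID and an illustrative hashtag, might look like this (the Twitter intent URL and its hashtags parameter are documented):

```typescript
// Sketch: replace the comment section with a note pointing readers to
// a Twitter hashtag. The element ID and hashtag are illustrative.

function redirectDiscussionToTwitter(hashtag: string): void {
  const comments = document.getElementById("comments");
  if (!comments) return;
  const intent =
    `https://twitter.com/intent/tweet?hashtags=${encodeURIComponent(hashtag)}`;
  const note = document.createElement("p");
  note.innerHTML =
    `Comments are disabled on this post to limit reader bias. ` +
    `<a href="${intent}">Discuss it on Twitter with #${hashtag}</a>.`;
  comments.replaceWith(note);
}
```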
It would be excellent to see this played out on a publication with a large readership, as that is the kind of blog probably best suited to test something of this nature.
Develop an Alternative Comment System
Although it may be more difficult to implement than my other suggestions, it may make sense for certain publications to develop their own comment system. A comment system that displays certain demographic data below each comment (the user would have to fill out a profile) may help people better understand the commenter’s viewpoint and reduce bias. This way, a comment from an arch-conservative or a radical liberal can be identified as such, since their viewpoint may not align with the reader’s.
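As a sketch of the data such a system might attach to each comment, with field names that are entirely hypothetical, each comment could carry a small self-reported profile rendered beneath the comment text:

```typescript
// Sketch of an alternative comment system's data model. Field names
// are hypothetical; a real system would need to decide which
// self-reported attributes are appropriate to display.

interface CommenterProfile {
  displayName: string;
  politicalLeaning?: "left" | "center" | "right"; // self-reported
  profession?: string;
  ageRange?: string; // e.g. "25-34"
}

interface Comment {
  body: string;
  postedAt: Date;
  author: CommenterProfile; // rendered below the comment text
}
```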
However, this creates a predicament of its own, as an argument can also be made that it might actually increase bias, especially toward certain comments and their authors. The effectiveness of such a comment system would require an empirical study of its own, which may not be practical since the system would need to be developed first.
Encouraging Readers to Disable Google Instant
There is very little that can be done to eliminate bias created by the search engines themselves, but a blog may want to encourage its readers to disable autosuggestion features such as Google Instant. In case anyone is interested, this can be achieved in Google by clicking the cog at the top right of the page, selecting “Search settings”, choosing the radio button labeled “Never show Instant results”, and clicking the blue “Save” button. Ultimately, this is up to you and your readers. I keep search suggestions enabled, as I believe that by being aware of things that might alter your perception, you can reduce their effect.
Conclusion
Search engines, blog comments, and social media all create biases that affect how we perceive information on the Internet. Research has demonstrated this to be a particular issue within the sciences, but it is probable that the same biases are present for publications in other disciplines, and it would be interesting to see this issue investigated further.
Regardless of the effectiveness of the suggestions I have made for reducing these biases, they are not all practical solutions. They come at the expense of factors that would otherwise benefit a blog’s popularity, search engine optimization, and social media presence. Since it is the goal of many blogs to increase their Internet footprint, bias reduction is typically not a concern for publishers, and the biases themselves may in fact be welcomed.
Do you have any other ideas for limiting reader bias on blogs, social media, or the Internet? Let me know in the comments below. I apologize in advance for any bias this may cause.
Update (9/24/2013): Popular Science has made the move to disable its comments, citing the above-mentioned study as a reason.
I agree that there are inbuilt mechanisms that skew the results, even though we might not be aware of it – like the observation fallacy.
I think that the hypothesis on which this theory is based is inherently flawed. If someone goes to a search engine with a specific search in mind, they will ignore the autofill suggestions that Google poses if the suggestions are not on point. Unless the autofill correctly identifies the search query, the user will continue with their train of thought. However, not surprisingly, the search engine will nail the query quite frequently. The autofill search suggestions are borne of statistical data over thousands of searches. The search engines actually provide a source of data far superior to that developed by anyone who does not enjoy a similar base data set. So, quite often, Google anticipates correctly.
Where the bias occurs is in the results pages. Since the Penguin and Panda updates, social media has become more of a factor than other SEO strategies, and if you don’t have a presence on the platforms the search engines prefer, you will never be ranked, even if your work is far superior to what is posted at the top of the results.
Hi Kenneth, thanks for dropping by and leaving such a substantive comment. We are going to agree to disagree on this one (absolutely nothing wrong with that). Psychological priming (http://en.wikipedia.org/wiki/Priming_%28psychology%29) would, in my opinion, likely be a factor when it comes to instant results. This is not to say that the suggestions aren’t good or don’t predict your search intention; it’s that the presence of other suggestions may subconsciously affect you in ways you may not be aware of.