Social networks can popularize unknown products, turn ordinary people into celebrities, or influence election results. They serve countless harmless marketing purposes, yet the way they work also makes them remarkably effective vehicles for spreading half-truths and conspiracy theories. To understand why, let's take a closer look at the world's most popular social network: Facebook. Studies have shown how following Facebook's newsfeed differs from researching and verifying facts across multiple websites ourselves, and this difference helps explain how hoaxes spread so effectively. Finally, it is interesting to see how Facebook's programmers have decided to fight this phenomenon.
First, we need to understand what makes the Facebook newsfeed unique. When we browse news through our account, we interact directly with an artificial intelligence system into which Facebook pours considerable development resources. It is the reason we do not see ads, friends' statuses, and events we participate in in random order: the system arranges the newsfeed specifically around each user's behavior. We provide basic information about ourselves in our profile: age, gender, occupation, and perhaps a few other details.
However, that is not all. The software tracks our entire activity: which profiles we visit, which pages we like, whom we chat with. It also monitors how long we spend reading an article and which type of content we engage with most, whether image, article, status, or video. Based on all of this, it compiles a list of news tailored specifically to us. It shows us whatever matches our online behavior and the behavior of the friends we interact with most, which is to say, everything we already agree with, so our mind can go on resting comfortably in the bubble of its own worldview.
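To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how such behavior-based ranking could work in principle. The signals, weights, and names are invented for illustration; they do not reflect Facebook's actual system.

```python
# Toy illustration only: invented signals and weights, not Facebook's real algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    kind: str  # "image", "article", "status", "video"

@dataclass
class UserProfile:
    favorite_friends: set   # friends we interact with most
    liked_topics: dict      # topic -> how often we engaged with it
    preferred_kinds: dict   # content type -> time we typically spend on it

def score(post: Post, user: UserProfile) -> float:
    """Higher score = shown closer to the top of the newsfeed."""
    s = 0.0
    if post.author in user.favorite_friends:
        s += 2.0                                        # posts from close friends rank higher
    s += user.liked_topics.get(post.topic, 0) * 0.5     # topics we already engage with
    s += user.preferred_kinds.get(post.kind, 0) * 0.1   # formats we spend time on
    return s

def newsfeed(posts, user):
    # Everything that matches past behavior floats to the top.
    return sorted(posts, key=lambda p: score(p, user), reverse=True)
```

The point of the sketch is simply that nothing in such a scoring rule rewards content that challenges the user; it only rewards more of what the user already responded to.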
There may not necessarily be anything wrong with that. If someone is a conservative or a liberal, they receive their own version of current events in society. What a believing person sees on their screen and what an atheist sees perfectly reflects their values. So far so good, but what if someone has developed an interest in conspiracy theories?
Let's imagine the newsfeed of someone who has fallen for stories about secret powers controlling the world. They search for and like articles promoting such ideas. Add to this their list of friends, whom they chose themselves, of course. There is therefore a high probability that a few of those friends lean in the same direction and occasionally share something on a similar topic. As a result, Facebook's artificial intelligence serves them still more ideologically related content. Such a user is satisfied: they feel no conflict with society, and the vast majority of the news available to them fits perfectly into their world.
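The feedback loop this creates can be illustrated with a small simulation. Everything here is hypothetical, a toy model rather than any real recommender, but it shows how repeated engagement with one topic gradually crowds everything else out of the feed.

```python
# Toy illustration of the feedback loop: each click makes similar content
# more likely to be recommended, so the visible range of topics shrinks.
import random

interests = {"conspiracy": 1.0, "politics": 1.0, "science": 1.0, "sports": 1.0}

def recommend():
    topics = list(interests)
    weights = [interests[t] for t in topics]
    return random.choices(topics, weights=weights)[0]

random.seed(0)
for step in range(200):
    topic = recommend()
    # Suppose the user reliably clicks on conspiracy content and ignores the rest.
    if topic == "conspiracy":
        interests[topic] *= 1.1   # engagement boosts future recommendations

total = sum(interests.values())
for topic, weight in interests.items():
    print(f"{topic}: {weight / total:.0%} of the feed")
# After enough iterations, the feed is dominated by the one topic
# the user already agrees with.
```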
A study described in the New York Times shows that when an article or news item is shared by one's friends, it is received even less critically. Any marketer will tell you that the best way to sell a product is through a recommendation from someone you trust. The study shows that the same kind of selling works for lies on social media, and on a large scale. Users typically just scroll through a stream of statuses and headlines of shared articles, all at once, without examining the details or checking the sources. They settle ever more comfortably into their own bubble.
An even bigger problem, however, is that information consumed this way from seemingly trustworthy sources (friends) and public ones (liked pages) is stored in memory as fact. In later discussions it is then cited as verified, even though no verification ever took place.
If a person operates in this cycle long enough, they will not only fall prey to lies and half-truths; when confronted with different opinions, they will also begin to exhibit so-called "motivated reasoning".
In short, this means they will seek out only things that further confirm their view of reality. And if someone presents logical counterarguments, they will reject them and dismiss the sources as untrustworthy. They are highly motivated to do so, because they want to protect their virtual space.
Scientists Leticia Bode and Emily K. Vraga conducted an experiment to study not only how misinformation spreads but also how it can be corrected on social media. They first measured participants' reactions to a specific hoax, in this case articles about the harmfulness of genetically modified foods. They then created several types of articles related to the misinformation, some confirming the hoax and some correcting it. The corrective articles were written in a similar style and presented arguments closer to the truth, with the goal of discreetly disrupting the misinformation bubble. Participants rated these articles based on how much they trusted them. The expected effect of motivated reasoning appeared: articles confirming participants' existing beliefs received the highest scores. Surprisingly, however, the articles correcting the original misinformation also scored high.
This study points to a way of fighting misinformation on social media, and Facebook itself has recently begun to combat the spread of hoaxes. Conspiracy theories can be fought, and the key to success is to give users corrective information in a language and style they understand.