Echo Chambers

The Unseen Toll of Social Media on Democracy

December 4, 2016 - 5 minute read

If liberty means anything at all, it means the right to tell people what they do not want to hear. – George Orwell

The recent controversy over the role social media platforms like Facebook play in elections has sparked a new debate about who should control the information people see online. Until recently, these platforms served as unfiltered mediums of information exchange, but as companies like Facebook feel pressure from investors to be profitable, they have started creating echo chambers: machine learning systems that build custom-made feeds designed to keep users engaged. These feeds act as a self-reinforcing loop, showing users articles that affirm their current beliefs and hiding articles that might contradict them. This creates a dangerous cycle in which whole societal groups are shielded from what they don’t want to hear, allowing large groups of people to drift drastically apart on the political spectrum over time and breeding misunderstanding of the other side. These social echo chambers are deepening the partisan divide in the United States and around the world, cultivating fear and misunderstanding, spreading hate and animosity, and ultimately threatening the freedom of speech that is integral to our democracy.

How did we get here? Before Facebook became one of the world’s largest media distributors, it served as the internet’s home for connecting with your friends. Its purpose was innocent enough. Then people realized that one of the most effective hooks on the internet is the need for social approval. Like any other two-sided business, Facebook creates value for its customers, the companies in need of advertising, by extracting it from its users, the people using the platform to connect with their friends. In Facebook’s advertising business, more clicks on advertisements mean more money, and that translates directly into user engagement: the more time users spend on the platform, the more ads they will see and click on. Facebook makes money by selling attention.

Companies like Google have been profiting by selling attention for decades. Every time you make a search query and click on one of the advertising links, Google makes money. Google uses machine learning algorithms trained on your previous search history to figure out which advertisements you are most likely to click on. The interests of Google and its users are aligned; both parties want the user to be shown the most relevant search result. Facebook takes a similar approach, except that its goal is to keep users engaged so that they view more advertisements. To decide which of the content a user subscribes to actually appears in their feed, Facebook uses a machine learning algorithm that solves the contextual multi-armed bandit problem: it maximizes user engagement by learning which posts each user likes the most. People are most likely to engage with content that affirms their current beliefs, so the algorithm learns to show users content from news sources that keep reinforcing their opinions. For Facebook, this has been a massively successful strategy, engaging users for an average of 50 minutes a day and pushing Mark Zuckerberg’s net worth close to $50 billion.
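Facebook has not published the details of its ranking system, but a minimal sketch of the contextual-bandit idea described above might look like the following. Everything here is a hypothetical stand-in (the class name, the features, and the reward signal); a real feed ranker would use far richer context and models.

```python
import numpy as np

class EpsilonGreedyFeedRanker:
    """Toy contextual bandit (hypothetical): each candidate content source (arm)
    keeps a linear estimate of engagement probability given a user-context vector."""

    def __init__(self, n_arms, n_features, epsilon=0.1, lr=0.05):
        self.epsilon = epsilon
        self.lr = lr
        self.weights = np.zeros((n_arms, n_features))  # one weight vector per source

    def select(self, context):
        # Explore: occasionally show a random source to gather feedback.
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.weights))
        # Exploit: show the source with the highest predicted engagement.
        return int(np.argmax(self.weights @ context))

    def update(self, arm, context, reward):
        # Online update toward the observed engagement signal
        # (reward = 1 if the user clicked/liked, 0 otherwise).
        prediction = self.weights[arm] @ context
        self.weights[arm] += self.lr * (reward - prediction) * context


# Usage: the ranker quickly learns to keep serving whatever the user rewards.
ranker = EpsilonGreedyFeedRanker(n_arms=5, n_features=3)
rng = np.random.default_rng(0)
for _ in range(1000):
    user_context = rng.normal(size=3)      # stand-in for user features
    arm = ranker.select(user_context)
    reward = 1.0 if arm == 2 else 0.0      # pretend the user only engages with source 2
    ranker.update(arm, user_context, reward)
print(ranker.weights)                      # source 2 ends up dominating the predictions
```

The structural point is that the ranker only ever receives positive feedback for what it already shows, so the sources a user engages with early on quickly crowd out everything else.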

The information we digest has a massive impact on our psychology. Studies of the mere-exposure effect have shown that the more often a person is exposed to an idea, the more likely they are to be attracted to it. In the case of Facebook, this creates a self-reinforcing loop: engaging with a post makes a user more likely to engage with similar posts, causing Facebook’s algorithm to show that user more posts like it. As the user sees more of those posts, they become more likely to agree with the general opinion being expressed. When a user who likes Hillary Clinton sees an article claiming she is guaranteed to win the presidency, they engage with that article, prompting Facebook’s algorithm to show them similar content. Likewise, when a user who supports Donald Trump sees an article claiming he will bring jobs back to the United States, the same thing happens, reinforcing their belief. When many users come to believe that Hillary Clinton is certain to win the presidency, they become less likely to go to the polls, resulting in lower turnout. The same effect applies to Donald Trump supporters who are concerned about jobs going overseas.
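To make the loop concrete, here is a toy simulation under assumed dynamics (the numbers are invented for illustration, not measured data): the feed repeatedly shows whichever of two viewpoints has the most past engagement, and each exposure slightly strengthens the user’s preference for it, a crude stand-in for the mere-exposure effect.

```python
import numpy as np

# Assumed dynamics: a slight initial lean toward viewpoint 0 hardens over time
# because the feed keeps showing what the user already engages with.
preference = np.array([0.51, 0.49])   # user's current lean toward each viewpoint
engagement = np.array([1.0, 1.0])     # accumulated engagement per viewpoint

for step in range(200):
    shown = int(np.argmax(engagement))            # feed exploits past engagement
    if np.random.rand() < preference[shown]:      # user engages with probability = preference
        engagement[shown] += 1
        preference[shown] = min(1.0, preference[shown] + 0.01)  # exposure strengthens the lean
        preference[1 - shown] = 1.0 - preference[shown]

print(preference)   # the small initial lean has become a near-certain preference
```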

In the graphic below, notice that right around the time Facebook began moving toward becoming a public company, the partisan divide grows significantly. This was likely around the time it started boosting user engagement with custom machine learning algorithms that select content to appeal to each individual user. Another unseen effect of social platforms is that users mostly hear the opinions of their friends, who usually share the same general political views.

[Image: partisan polarization over time (Washington Post), annotated with the years Facebook launched and went public]

Photo credit: Washington Post; the years Facebook launched and went public have been added. Notice how the divide widens significantly around the time Facebook went public, which is roughly when it started using machine learning to curate and monetize the news feed.

The consequence of these algorithms is that most people will rarely see content that disagrees with their beliefs, meaning they will never hear the other side of an argument. The root of misunderstanding is an inability to listen to other perspectives; when those perspectives are never heard, there is no way they can be understood. In the age of digital media, we have created systems capable of hiding entire viewpoints, and when hiding a viewpoint is most profitable, these systems will. As machine learning algorithms are deployed to decide what ever-larger portions of the population do and do not see, they are polarizing the political beliefs of large groups of people.

Recently, we have seen this divide grow even larger, causing major political upheavals in the United States, the UK, and elsewhere. Making a reversal less likely is the fact that Facebook and companies like it aren’t going away anytime soon. Educating the public about the perils of these new modes of information consumption has the greatest potential to help bridge the current divide. The first step to fixing a problem is acknowledging that it exists, and we are starting to see that acknowledgment in today’s news about fake articles posted on Facebook influencing the election. Freedom of speech guarantees the right of the people to speak their minds, but just as importantly, it guarantees the right to hear what is being said. It is important that we work to protect our right to freedom of information.

Since its inception, Facebook, and platforms like it, have served as mediums of social change. As Facebook has taken on the role of the world’s largest news provider, it has pushed the boundaries of what is possible in customized news, creating echo chambers of political influence, deepening the partisan divide, and spreading misunderstanding. Digital technology, especially the current practice of using machine learning to curate news, has outpaced our conventional political systems, causing major political upheavals in many parts of the world, changing how we perceive our leaders, and ultimately threatening our freedom of speech. We must take notice as large media companies begin to infringe on this core function of our democracy; our future might still depend on it.