Author(s): Marina Koytcheva
Last week, US President Barack Obama suggested that misinformation on the Internet, and people's inability to distinguish facts from false stories, have a real impact on the outcome of important political processes such as elections.
The statement came shortly after BuzzFeed showed that, in the run-up to the US presidential election, the 20 top-performing fake news stories on Facebook had generated more engagement than the best-performing election stories from 19 traditional media outlets combined. CCS Insight acknowledges that, without access to the raw data, BuzzFeed's findings are open to question.
In the same week, UK newspaper The Guardian ran an experiment showing that content published on Facebook has a real influence on people's actions. The newspaper recruited a small number of volunteers supporting one of the two main US presidential candidates and exposed them exclusively to the Facebook News Feed of a purpose-built fictional persona of the opposite political persuasion. As a result, some of the volunteers changed their minds about which candidate they would vote for.
Taken together, these pieces suggest that Facebook is a powerful factor in politics and that fake news poses real risks. The social network has since announced various measures to tackle the problem, including better ways to detect misinformation by working with fact-checking organisations and learning from traditional journalists. The implications of this whole story are complex.
Of course, a reasonable adult knows that the Internet is full of misleading information and fabrications; most people understand that an inspirational quote about the power of the Internet can't genuinely be attributed to Einstein. But Facebook is not the average Internet page. Because many stories in the News Feed come from users' friends, whom they tend to trust, people are inclined to believe the content, especially if it's presented as news.
In addition, people often trust stories on Facebook because the stories tell them what they want to hear. The platform creates a different, distorted picture of the world, tailored to each person's interests. Friends often share a common background and values, and therefore post stories that are easy to relate to. Many people aren't necessarily aware of this bias in the News Feed. Unlike a newspaper, which typically has a clear political position, Facebook is open to all political views; but users don't necessarily get to see all those diverse views. The content they do see reinforces their existing beliefs, giving them no good reason to question it. People need a pointer to help them distinguish true from false, and many expect Facebook to do that job. Facebook is now walking a very fine line between allowing censorship and allowing misinformation.
The social network has made its choice. Last week, CEO Mark Zuckerberg said: "We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible". Facebook has established its place as a platform where political movements and even uprisings have been organised, and it has given a voice to those who fear oppression and discrimination. For example, in July 2016, a black American woman live-streamed the shooting of her boyfriend by police.
Facebook has always censored some posts. On the one hand, it's a social network; on the other, it's a large digital advertising business, which has generated almost $24 billion from advertising in the past four quarters and needs to guarantee a certain level of "cleanness" to attract advertisers. In the past, Mr Zuckerberg has insisted that Facebook is a technology company and that if any posts are disallowed, it happens according to a strict, predefined set of rules. But life is more complex than software algorithms. In September 2016, Facebook removed, on the grounds of nudity, a post by the Norwegian prime minister featuring a famous photograph of a Vietnamese child fleeing a napalm attack; the decision led to a massive public outcry.
It seems that Facebook needs greater human involvement to make certain judgement calls when censoring information within a strict moral paradigm, at least until artificial intelligence is advanced enough to follow very complex rules and flexibly create new ones. But this raises the question of who decides on the paradigm. "Liberté, égalité, fraternité" and "the pursuit of happiness" mean different things to different people, and, more importantly, they mean absolutely nothing to a piece of software. Someone needs to translate fundamental values into software algorithms. Whoever that is holds the ultra-powerful position of influencing billions of people.
The future poses difficult philosophical questions for Facebook. It has gone from being a means of sharing cute photos of cats and babies to a platform where influential ideas reside. This is great power, and with it comes great responsibility. If Facebook gets this wrong at either end, users could abandon it very quickly, which is a risk mostly to the company's investors. But there are far bigger risks if Facebook makes the wrong decisions, such as threats to freedom of speech and democracy, and they highlight the growing importance of technology in people's lives.