
Social media’s struggle with self-censorship


Within hours of the publication of a New York Post article on October 14th, Twitter users began receiving strange messages. If they tried to share the story—a dubious “exposé” of emails supposedly from the laptop of Hunter Biden, son of the Democratic presidential nominee—they were told that their tweet could not be sent, as the link had been identified as harmful. Many Facebook users were not seeing the story at all: the social network had demoted it in the news feed of its 2.7bn users while its fact-checkers reviewed it.

If the companies had hoped that burying or blocking the story would stop people from reading it, the bet did not pay off. The article ended up being the most-discussed story of the week on both platforms; the second-most talked-about story was the fact that the social networks had tried to block it. The Post called it an act of modern totalitarianism, carried out “not [by] men in darkened cells driving screws under the fingernails of dissidents, but Silicon Valley dweebs.” Republican senators vowed to extract testimony on anti-conservative bias from Mark Zuckerberg and Jack Dorsey, the dweebs-in-chief of, respectively, Facebook and Twitter.

The tale sums up the problem that social networks are encountering wherever they operate. They set out to be neutral platforms, letting users provide the content and keeping their hands off editorial decisions. Twitter executives used to joke that they were “the free-speech wing of the free-speech party”. Yet as they have become more active at algorithmically ranking the content that users upload, and moderating the undesirable stuff, they have edged towards being something more like publishers. Mr Zuckerberg says he does not want to be an “arbiter of truth”. The Post episode fed the suspicion of many that, willingly or not, that is precisely what he is becoming.

America’s fractious election campaign has only made more urgent the need to answer the unresolved questions about free expression online. What speech should be allowed? And who should decide? Rasmus Nielsen of the Reuters Institute at Oxford University describes this as a “constitutional moment” for how to regulate the private infrastructure that has come to support free expression around the world.


Social networks have been on the mother of all clean-ups. Facebook’s removal of hate speech has risen tenfold in two years (see chart 1). It disables some 17m fake accounts every day, more than twice the number three years ago. YouTube, a video platform owned by Google with about 2bn monthly users, removed 11.4m videos in the past quarter, along with 2.1bn user comments, up from just 166m comments in the second quarter of 2018. Twitter, with a smaller base of about 350m users, removed 2.9m tweets in the second half of last year, more than double the amount a year earlier. TikTok, a Chinese short-video upstart, removed 105m clips in the first half of this year, twice as many as in the previous six months (a jump partly explained by the firm’s growth).

Artificial intelligence has helped to make such a clean-up possible. Most offending content is taken down before any user has had a chance to flag it. Some content lends itself readily to policing by machines: more than 99% of the child-nudity posts Facebook takes down are removed before anyone has reported them. Bullying and harassment, by contrast, are mostly flagged by users rather than robots. Two years ago Facebook’s AI removed a post referring to “merciless Indian Savages”, before human moderators realised it was a quote from the Declaration of Independence. Facebook now employs about 15,000 people to moderate content. In May the company agreed to pay $52m to 11,250 moderators who developed post-traumatic stress disorder from looking at the worst of the internet.
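In rough outline, that two-track division of labour can be sketched in a few lines of code. The Python snippet below is an illustrative guess at the flow described above, not any platform’s actual system: a classifier removes the clearest violations on its own, while user reports and borderline scores are routed to human reviewers. All thresholds, names and labels are invented for the example.

from dataclasses import dataclass

# Illustrative thresholds, invented for this sketch (not real platform values)
AUTO_REMOVE_THRESHOLD = 0.99  # confidence needed to remove a post with no human input
REVIEW_THRESHOLD = 0.70       # confidence at which a post is sent to human reviewers

@dataclass
class Post:
    post_id: str
    text: str
    user_reported: bool = False  # has any user flagged this post?

def classifier_score(post: Post) -> float:
    """Stand-in for a trained content classifier (nudity, hate speech, etc.)."""
    return 0.0  # a real system would run a machine-learning model here

def moderate(post: Post) -> str:
    score = classifier_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        # Taken down proactively, before any user has had a chance to flag it
        return "removed automatically"
    if post.user_reported or score >= REVIEW_THRESHOLD:
        # Bullying and harassment mostly arrive here via user reports
        return "queued for human review"
    return "left up"

The point of the sketch is the split: unambiguous material never reaches a queue, while context-heavy judgments, such as the Declaration of Independence quote, still end up in front of the 15,000 human moderators.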

Discussions about free speech that may once have seemed abstract have become all too practical: the murder of Samuel Paty near Paris last week was the latest shocking reminder. Social networks tightened their policies on terrorism after Islamist attacks in Europe in 2015 and an anti-Muslim rampage in New Zealand last year, which was live-streamed on Facebook and shared on YouTube. The American election and Brexit referendum of 2016 forced them to think again about political communication. Twitter banned all political ads last year, and Facebook and Google have said they will ban them around the time of this year’s election on November 3rd.

The companies have also improved their scrutiny of far-flung countries, after criticism of their earlier negligence in places such as Myanmar, where Facebook played a “determining role” in the violence against Rohingya Muslims, according to the UN. This week Facebook announced that it had hired more content-reviewers fluent in Swahili, Amharic, Zulu, Somali, Oromo and Hausa, ahead of African elections. Its AI is learning new languages, and hoovering up rule-breaking content as it does so.

The room where it happens

Some tech bosses have been rethinking their approach to the trade-offs between free expression and safety. Last October, in a speech at Georgetown University, Mr Zuckerberg made a full-throated defence of free speech, warning: “More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous.” Yet this year, as misinformation about covid-19 flourished, Facebook took a harder line on fake news about health, including banning anti-vaccination ads. And this month it banned both Holocaust denial and groups promoting QAnon, a crackpot conspiracy theory.

The pressure from the media is to “remove more, remove more, remove more”, says one senior tech executive. But in some quarters unease is growing that the firms are removing too much. In America this criticism comes mostly from the right, which sees Silicon Valley as a nest of liberals. It is one thing to zap content from racists and Russian trolls; it is another to block the New York Post, one of America’s highest-circulation newspapers, founded by Alexander Hamilton (who admittedly might not have approved of its current incarnation, under Rupert Murdoch).

Elsewhere, liberals worry that whistle-blowing content is being wrongly taken down. YouTube removed footage from users in Syria that it deemed to break its guidelines on violence, but which was also potential evidence of war crimes. Until last year TikTok’s guidelines banned criticism of systems of government and “distortion” of historical events, including the massacre near Tiananmen Square.

Where both camps agree is in their unease that it is falling to social networks to decide what speech is acceptable. As private companies they can set their own rules about what to publish (within the confines of the laws of countries where they operate). But they have come to play a big role in public life. Mr Zuckerberg himself compares Facebook to a “town square”.

Rival social networks promising truly free speech have struggled to overcome the network effects enjoyed by the incumbents. One, Gab, attracted neo-Nazis. Another, Parler, has been promoted by some Republican politicians but has so far failed to take off. (It is also grappling with free-speech dilemmas of its own, reluctantly laying down rules that include a ban on sending photos of fecal matter.) Outside China, where Facebook does not operate, four out of ten people worldwide use the platform; WhatsApp and Instagram, which it also owns, have another 3bn or so accounts between them. “Frankly, I don’t think we should be making so many important decisions about speech on our own either,” Mr Zuckerberg said in his Georgetown speech.

Say no to this

Bill Clinton once said that attempting to regulate the internet, with its millions of different sites, would be “like trying to nail Jell-o to the wall”. But the concentration of the social-media market around a few companies has made the job easier.

Twitter has faced steep growth in the number of legal requests for content removal, from individuals as well as governments (see chart 2). Last year Google received 30,000 requests from governments to remove pieces of content, up from a couple of thousand ten years ago (see chart 3). And Facebook took down 33,600 pieces of content in response to legal requests. They included a Photoshopped picture of President Emmanuel Macron in pink underwear, which French police wanted removed, citing a press law dating from 1881.

In America the government is prevented from meddling too much with online speech by the First Amendment. Section 230 of the Communications Decency Act gives online platforms further protection, shielding them from liability for most of the content their users post. But carve-outs to this shield are growing. Firms cannot avoid responsibility for copyright infringements, posts that break federal criminal law, or posts that enable sex trafficking. The last of those carve-outs, made in 2018, had an impact on speech greater than its drafting implied: sites including Tumblr and Craigslist concluded that, rather than risk prosecution, they would stop publishing adult material of all sorts.
