Social media & free speech in Africa (2)

To election agitators: freedom of speech is not freedom from consequences

Stengel’s description of how Russia’s Internet Research Agency (IRA) works provides insights into alternative, and likely more effective, strategies for fighting online disinformation. “Every day, in two shifts, a few hundred young people spend their time writing blog posts, tweets, Facebook posts, Vkontakte posts and much more…it is indeed a factory; they manufacture thousands upon thousands of pieces of pro-Russian, anti-American content a day.” Although the Russian IRA is owned by a businessman allied to the Russian government, every government should probably have an IRA-type outfit, much like most governments have their own news agencies and broadcasters.

The suggestion is not that a government-owned social media agency should engage in some of the negative activities Russia’s IRA is accused of. Still, such an agency would probably be the more appropriate public bulwark against the purveyors of fake news and disinformation on social media. It is certainly a better proposition than the potentially free-speech-stifling social media regulations being proposed by some African governments, and in fact already in practice in Singapore and elsewhere. That said, some regulation of social media has clearly become necessary. The key would be for any regulation to be geared towards incentivizing accurate news reportage. In other words, new media practitioners should be made to have as much fear of punishment as their counterparts in old media. To succeed, however, any such effort would require the active cooperation of big tech. But would they choose to be part of the solution?

Make platforms liable for their content

Social media platforms are probably conflicted. That, at least, is the inference from Harvard professor Shoshana Zuboff’s 2019 book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, in which she asserts that “fake news and other forms of information corruption have been perennial features of Google and Facebook’s online environments…[with]…countless examples of disinformation that survived and even thrived because it fulfilled economic imperatives”. Stengel corroborates this view: “The players in this conflict are assisted by the big social media platforms, which benefit just as much from the sharing of content that is false as content that is true.” Even more bluntly, Stengel asserts that “popularity is the measure [social media platforms] care about, not accuracy or truthfulness.” Still, as Zuboff adds, “they [Facebook & others] absolutely have the tools to shut down fake news”. But they have to be willing to do so conscientiously. And there is evidence that they do when prompted by governments.

Still, in America, where most of the top global social media platforms are headquartered, there is limited incentive for them to combat disinformation. Their complacency stems from the United States’ Communications Decency Act (CDA) of 1996, which shields social media platforms from liability for content published on them by third parties. The part of particular interest in that American legislation, Section 230, reads thus: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It is well known that these platforms are perfectly capable of policing mischievous content with great efficiency; were this part of the legislation to be modified, they would have every incentive to do so. Thus, incentivizing them via liability for disinformation might be ideal.

In practice, considering the robust political lobbying apparatus of American big tech firms, this would be a herculean task. But if African governments and others truly desire a curb on disinformation, it would not be untoward for them to be part of a global effort towards forcing the US Congress’ hand, via the United Nations perhaps – not that that has ever successfully forced the Americans to do anything they didn’t want to do – towards such a modification. Even so, Stengel sees a way to make changes to the legislation that would likely be acceptable to social media platforms: “One way to do this is to revise the language of the CDA to say that no platform that makes a good faith effort to fulfil its responsibility to delete harmful content and provide information to users about that content can be liable for the damage that it does.” Still, Stengel adds, “for all this to work, we need global privacy regulations, a universal definition of disinformation and legal consequences for purveying it.”

In any case, Singapore is pushing ahead with fighting fake news on its own, though it could be argued that its intentions are not entirely altruistic. In November 2019, for instance, the Singaporean government ordered Facebook to issue a disclaimer (“Facebook is legally required to tell you that the Singapore government says this post has false information”) on a post by a local newspaper that it considered inaccurate. But how many of these orders can it issue? Its progress in this regard is likely to be incremental. A global effort, with America in the lead, under the auspices of the United Nations, would likely be more effective. In the absence of such a global coalition, hard-fought democratic gains around the world would be increasingly eroded, especially in the poor parts of the world, like Africa, where they are most needed.

Rafiq Raji