In many countries, Facebook is one of the few alternatives to the government-aligned outlets that dominate national media ecosystems. That is why authorities have devoted so many resources to manipulating it, and why the company must take responsibility for stopping them.
WASHINGTON, DC – I’ve been a professional free-expression advocate for more than a decade. That is why I support the Facebook Oversight Board’s recent decision to uphold former President Donald Trump’s suspension from the platform and Facebook’s new protocol whereby public figures may be banned for up to two years during times of civil unrest. In fact, the platform should go further.
Trump used his bully pulpit on social media to attack and harass news organizations, political opponents, and former political allies. He used it to undermine confidence in the 2020 presidential election, with a significant share of Americans continuing to doubt the outcome, despite the absence of any evidence of widespread irregularities or fraud. And he used it to perpetuate misinformation during the COVID-19 pandemic.
In other words, with the help of social-media platforms, Trump undermined the norms and institutions that underpin the functioning of representative government, while increasing the coronavirus death toll in the United States. And he engaged in precisely the kinds of harassment and hate speech that social-media platforms prohibit.
Yet, for four long years, Trump was given a pass for this behavior, because platforms deemed carrying his statements, however erroneous or dangerous, to be in the public interest. Facebook introduced this so-called newsworthiness exemption shortly before the 2016 presidential election.
But there is a circular logic at work here: world leaders are exempted from following community standards because their statements are “newsworthy,” but it is the incendiary nature of standard-violating posts that causes them to make the news. In any case, world leaders – especially the US president – can get news coverage whenever they want, simply by holding a press conference or issuing a press release.
The suspension of Trump’s social-media accounts after he incited the January 6 insurrection at the US Capitol was clearly a step in the right direction. Twitter has since made its ban permanent. But Facebook left the door open for Trump to return to its platform.
Facebook’s Oversight Board upheld the original suspension, but took issue with its indefinite nature, arguing that the company should not devise rules on the fly, and must develop “clear, necessary, and proportionate policies that promote public safety and respect freedom of expression.” Crucially, Facebook’s response must be “consistent with the rules that are applied to other users of its platform.”

This is where the Oversight Board got things wrong. Yes, consistency should be a goal. But world leaders are not just any users; they should be held to a higher standard. After all, they can incite violence far more easily than the average Joe or Jane can. Moreover, much of what happens on social media challenges existing norms. In exceptional circumstances, exceptional decisions must be made.
Facebook’s new policy at least partly recognizes this. It states: “Our standard restrictions may not be proportionate to the violation, or sufficient to reduce the risk of further harm, in the case of public figures posting content during ongoing violence or civil unrest.”

But this logic must be applied more broadly. Trump is hardly the only world leader who has used social-media platforms to incite and manipulate public opinion with the help of tools like computational propaganda and astroturfing. And, while Facebook has acted against such abuses in countries like the US, South Korea, and Poland, it has so far done little or nothing in countries like Iraq, Honduras, and Azerbaijan.
This discrepancy is no accident. The data scientist Sophie Zhang recently revealed that during her 2.5 years at Facebook, she found “blatant attempts” to abuse the platform by dozens of governments seeking to “mislead their own citizenry.” Yet Facebook repeatedly refused to act. According to Zhang, “We simply didn’t care enough to stop them.”

Beyond apathy, Facebook may have been seeking to protect its own business interests, which probably explains why company executives reportedly protected members of India’s ruling Bharatiya Janata Party from punishment for violating the platform’s hate-speech policies. Even regimes that block their own populations from accessing Facebook – including in China, Iran, and Uganda – are allowed to use the platform for their own ends.
Facebook’s reluctance to act against such governments has had dire consequences. A statement by Alex Warofka, Facebook’s product policy manager, notes that the company “improved proactive detection of hate speech” in Myanmar, and began taking “more aggressive action” against accounts set up to “mislead others about who they are or what they’re doing.” By that point, however, Facebook had facilitated mass atrocities against the country’s mostly Muslim Rohingya minority group. Likewise, while Facebook removed the Myanmar military’s official page in February for “incitement of violence,” it did so only after the military had overthrown the country’s democratically elected government.
Regulation may eventually help, though governments should be wary of compelling platforms to remove content or accounts that are not illegal. Regulators in the US and the European Union are considering whether some elements of the internet should be treated essentially as public utilities or “common carriers.” But, overall, regulators need to focus less on content and more on platform design, advertising technology, and monopolistic power.
In the meantime, it is up to Facebook to rid itself of genocidal militaries, government propaganda that targets and manipulates populations, and leaders who block their own citizens from the platform while exploiting it themselves. The algorithmic intermediation of the public sphere by private, for-profit platforms designed to maximize engagement, and with it polarization, has been anything but emancipatory. For many, it has been deadly. Governments, public officials, and political parties must face swift and severe consequences if they violate a platform’s terms of service and use it to violate people’s rights.
Courtney C. Radsch, former advocacy director at the Committee to Protect Journalists, is the author of Cyberactivism and Citizen Journalism in Egypt: Digital Dissidence and Political Change.
This text has been adapted from the Project Syndicate website.