
Deplatforming Authoritarians Is Good; Preventing Their Rise to Power Is Better.

Image: a map of Southeast Asia with a Facebook logo overlaying Myanmar

“By giving people the power to share, we’re making the world more transparent,” wrote Mark Zuckerberg in a letter to the public in 2012. What the company’s founder and CEO failed to foresee, however, was that less than a decade later this “power to share” would be systematically weaponized by authoritarian leaders around the world.

On February 24th, Facebook announced it would ban all Myanmar military pages from its platforms, along with all advertisements from military-affiliated businesses. Over the past few years, Facebook has been under increasing pressure from the UN to crack down on misinformation in Myanmar, following a 2018 report concluding that military propaganda spread on Facebook played a determining role in the ethnic cleansing of the nation’s Rohingya Muslim population. The banning of the Tatmadaw mirrors the company’s move to ban President Donald Trump after his incitement of violence at the Capitol on January 6th. Just as the UN has long pleaded with Facebook to remove disinformation targeting Rohingya Muslims, the American left has consistently called on social media companies to crack down on Donald Trump’s spread of misinformation.

While Facebook had made strides to limit hate speech and misinformation in the years leading up to these high-profile bans, its efforts primarily consisted of banning individual users’ posts and accounts. After the attempted coup in the United States—and even more palpably, the widespread bloodshed in Myanmar—it is clear that handling these incidents on a case-by-case basis is unsustainable and ineffective. Deplatforming authoritarian leaders—whether they be actual dictators or simply strongmen on social media—was a crucial step in promoting peace in both of these nations, and it undoubtedly should have been taken sooner. However, blocking these accounts ultimately does little to lessen the threat of violence so long as Facebook’s algorithms continue to breed and promote extremism.

At a basic level, Facebook’s algorithm promotes the news content that garners the most engagement, which in turn generates the most ad revenue for the company. Inflammatory news articles that play into users’ sense of tribalism and evoke strong negative emotions tend to perform better than content that does not. This ‘if it bleeds, it leads’ phenomenon holds for news sources across the board, with news magazine sales increasing an average of 30% when the cover story features a negative headline rather than a positive one. On Facebook, however, the effect can snowball: sensationalized articles achieve higher click-through rates, which fuels ever more sharing and promotion. The algorithm’s propensity to amplify inflammatory content makes it the perfect breeding ground for disinformation—a phenomenon that Myanmar’s military recognized and sought to exploit.
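To make the dynamic concrete, consider a deliberately simplified sketch of engagement-based ranking. The post attributes, scores, and weights below are invented purely for illustration; this is not Facebook’s actual ranking system, only a toy model of the incentive it creates.

```python
# Toy model of an engagement-ranked feed (illustrative only; not Facebook's code).
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    outrage: float          # hypothetical 0-1 score: how inflammatory the post is
    informativeness: float  # hypothetical 0-1 score: how substantive the post is

def predicted_engagement(post: Post) -> float:
    # Assumption for illustration: reactions and shares correlate far more
    # strongly with outrage than with substance.
    return 3.0 * post.outrage + 1.0 * post.informativeness

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed optimizes for engagement (a proxy for ad revenue),
    # not for accuracy or civic value.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", outrage=0.1, informativeness=0.9),
    Post("THEY are coming for YOU", outrage=0.9, informativeness=0.1),
])
print([post.headline for post in feed])  # the inflammatory post ranks first
```

Under an objective like this, no one has to intend polarization; ranking by predicted engagement is enough to push the most inflammatory material to the top.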

“Members of the Myanmar military were the prime operatives behind a systematic campaign on Facebook that stretched back half a decade and that targeted the country’s mostly Muslim Rohingya minority group,” reported the New York Times in 2018. The Tatmadaw engaged in what experts call seeded disinformation, in which mistruths are spread and then amplified by so many users that content moderators are unable to keep up. Specifically, the Tatmadaw created hundreds of dummy news sites, troll accounts, and other seemingly innocuous pages to systematically distribute disinformation to the Burmese public. This disinformation campaign became the catalyst for widespread violence against the nation’s Rohingya Muslim minority. Following a 2017 massacre in which the military killed over 6,700 Rohingya Muslims in less than one month, over 800,000 Rohingya Muslims have fled to Bangladesh hoping to escape further persecution.

A similar social media strategy was used in the lead-up to the US Capitol riot on January 6th. Donald Trump and other Republican officials sowed the seeds for the riot by spreading false claims of election fraud to their large audiences. These claims were then disseminated across social media, where they hardened into full-fledged plans for violence. According to the Washington Post, dozens of rioters used Facebook to coordinate transportation to and from the Capitol. Content moderators were none the wiser.

Much of Facebook’s content moderation is conducted by human moderators. Even though Facebook has dramatically increased its workforce, particularly its number of Burmese speakers, within the last few years, it remains virtually impossible for these workers to effectively moderate over 2 billion accounts. These overworked and underpaid moderators play a constant game of whack-a-mole: as soon as they shut down one conspiracy site, another has already popped up to take its place. Considering this, the company moved in 2019 to build artificial intelligence that can help moderate content. This new method, however, is far from foolproof. Facebook has not yet developed hate speech algorithms that work for every language, and, as the company itself admits, human moderators still need to continuously update existing algorithms as novel forms of hate speech arise. Yet even if Facebook succeeded in removing every single post featuring disinformation or hate speech from its sites, it is critical to remember that the algorithm not only promotes extremist content but also actively breeds it.
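A toy example shows why automated moderation requires constant human updating. The blocklist below is a stand-in, far cruder than any real hate speech model, but it captures the core failure mode: content phrased in new or coded language slips past rules built from yesterday’s data.

```python
# Illustrative sketch only: a naive phrase-matching moderator, not Facebook's system.

BLOCKLIST = {"known slur", "known conspiracy phrase"}  # hypothetical known-bad phrases

def should_flag(post: str, blocklist: set[str]) -> bool:
    # Flags a post only if it contains a phrase the system has already seen.
    text = post.lower()
    return any(term in text for term in blocklist)

posts = [
    "spreading a known conspiracy phrase",     # caught by the existing rules
    "the same idea in brand-new coded slang",  # missed until moderators update the rules
]
print([should_flag(post, BLOCKLIST) for post in posts])  # [True, False]
```

Real classifiers are far more sophisticated than string matching, but the underlying dynamic is the same: the model only recognizes the hate speech it has already been shown.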

When the algorithm detects interest in a certain area, it recommends related content. This, too, can have a snowball effect, with the related content moving further and further toward the fringe the longer the cycle continues. This process often leads to radicalization, which has been a crucial tool for members of the far right in the United States. For example, a teenage boy might begin by watching Ben Shapiro’s conservative takedowns but progress to viewing alt-right conspiracy theories just by following Facebook’s recommendations. In fact, a 2016 presentation from Facebook itself concluded that “64% of all extremist group joins are due to our recommendation tools,” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms.
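The feedback loop can be sketched in a few lines. The numbers and update rule below are invented for illustration (this is not Facebook’s recommender), but they show how small, repeated nudges toward “related” content can ratchet a user’s interests toward the fringe.

```python
# Toy feedback loop between a recommender and a user's interests (illustrative only).
from __future__ import annotations

def recommend(user_position: float, step: float = 0.1) -> float:
    # Hypothetical rule: recommend content slightly more extreme than the user's
    # current position on a 0-1 scale, capped at 1.0 ("fringe").
    return min(user_position + step, 1.0)

def simulate(start: float = 0.2, rounds: int = 10) -> list[float]:
    position, history = start, []
    for _ in range(rounds):
        shown = recommend(position)
        position = 0.5 * position + 0.5 * shown  # interests drift toward what is shown
        history.append(round(position, 2))
    return history

print(simulate())  # the user's position climbs steadily toward the extreme end
```

Each individual recommendation looks like a small, reasonable step from the last one; it is the compounding of those steps that produces radicalization.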

As evidenced by the recent high-profile bans of the Tatmadaw and former President Donald Trump, Facebook is becoming increasingly comfortable taking a stand against public figures who defy its code of conduct. However, this method of rooting out extremism ultimately addresses the symptoms, not the disease. While banning bad faith actors may be helpful in the short term, so long as its divisive algorithm continues to elevate these leaders and their harmful ideas in the first place, Facebook remains just as vulnerable to abuse moving forward.

It’s not that the algorithm itself is beyond repair: from implementing so-called “Sparing Sharing” to broadening the variety of algorithm-produced recommendations, experts have identified several ways to make the algorithm less polarizing. However, Facebook has yet to apply any of these recommendations on a wide scale, and, given that doing so would likely mean a hit to the company’s profits, it is unlikely to do so any time soon. Therefore, if we really want to prevent social media sites from becoming breeding grounds for conspiracy theories, government regulation is the only plausible solution. Just as the FCC has regulated broadcasters under a public interest framework, policymakers should work to ensure that Facebook and its algorithms are regulated in a similar manner.

While it was essential that Facebook deplatformed these authoritarian leaders, the threat of disinformation will remain just as potent so long as the company continues to profit from promoting extremist content. Moving forward, it is this underlying incentive structure that must be regulated in order to prevent future authoritarians from weaponizing social media in their rise to power.

Image: Asia Times
