The brand reckoning over content moderation
In the last week, major American companies have led an exodus from Facebook, announcing that they would suspend advertising on the platform over concerns that it isn’t doing enough to combat hate speech. Coca-Cola, Levi’s, and Starbucks are among the more than three hundred companies that have now pulled their ad dollars from the platform, according to a list maintained by the activist group Stop Hate for Profit.
The industry response has been swift. Facebook announced a number of new policy changes and said it would ban the Boogaloo movement after several of its adherents were tied to real-world violence. Likewise, Reddit this week banned r/The_Donald, a subreddit that for years had hosted violent and hateful speech posted by President Trump’s more virulent supporters. YouTube, meanwhile, kicked a rash of white supremacist accounts off its platform. And the streaming platform Twitch banned the president’s official account along with a number of streamers accused of sexual abuse.
For anti-racism and (some) civil liberties activists, this week’s takedowns represent a major victory. For years, white supremacist groups have used online platforms to spread their ideology and wielded cries of censorship as a cudgel against their enemies. By removing many of these actors from YouTube, Twitch, and Reddit, Silicon Valley has struck a significant blow against online hate that is increasingly having real-world consequences.
But this piecemeal approach to regulating online speech isn’t sustainable. Without clear policy or regulatory guidelines, companies are left to adjudicate content moderation on an ad hoc basis. In many cases this leads to moderation by brand risk: Companies must decide whether the greater risk lies in leaving potentially harmful content up and appearing to be a brand that condones hate speech, or in taking too much content down and appearing to be a brand that favors censorship.
With companies like Facebook, Google, and Twitter playing an ever-larger role in shaping the public sphere, the perception that they are governing speech based on commercial pressure opens them up to regulatory scrutiny and public criticism, and raises questions about whether speech is being fairly policed on their platforms. The decisions made by companies this week, for example, come with little transparency about how they were made, and it’s unclear whether users who might have been wrongfully removed from a platform have any recourse. The consequences for free speech online are clearly severe—and equally difficult to measure.
So where do we go from here? If there were an easy solution to cleaning up the cesspool that is the internet, that lever would have been pulled long ago. Color of Change, one of the activist groups behind the current ad boycott, wants to see systemic change in Silicon Valley. The internet scholar Joan Donovan wants to see internet companies embrace curation—and thinks they should start by hiring thousands of librarians. TechStream’s Chris Meserole has written about how Facebook could adopt a set of emergency content-moderation rules in moments of crisis.
Meet the Boogaloos. Facebook’s decision to ban Boogaloo content saw the company take action against one of the more interesting right-wing groups to have emerged online in recent years. The movement originated online, built around a loosely connected series of memes referring to a coming civil war. Its members tend to be heavily armed and have been showing up at protests against police violence in recent weeks in an effort to co-opt the movement sparked by the killing of George Floyd, as Alex Goldenberg, Joel Finkelstein, and John Farmer write for TechStream this week.
—Elias Groll (@EliasGroll)