Content Moderation and Shadow Banning

If you’ve spent any time on social media you’ve no doubt seen and heard things that were offensive and, quite possibly, even dangerous. Assuming you live in the USA, you also know that the “constitutional” right to free speech is well established and goes to the heart of who we are and what we value. The challenge, then, is to find the fine line between constitutionally protected speech and speech so harmful or dangerous that it poses a credible threat to actual life or property.

Social media platforms know that they do not have an obligation to grant users their “constitutional” rights. While their very size and oligopolistic nature give us cause for concern, they are not the government* and users are free to pick and choose which platforms and services they want to use. Those platforms are, likewise, allowed to make policies that allow certain types of speech and disallow others. Users who fail to abide by those policies can be warned, suspended, or banned.

So what to do about reports that platforms like Facebook, YouTube, Twitter, and even Pinterest are banning or demonetizing users who have come to rely on these platforms for their livelihood? In extreme cases such as death threats, doxing, and online bullying/harassment, the actions taken by these social media platforms appear to be justified. But what about provocative posts that deploy humor or satire to make a political statement? And what about users who fail to adhere to a standard that is difficult (some say impossible) to understand because it is so subjective?

According to a recent essay by Will Oremus,

The underlying problem of our platforms is not that they’re too conservative or too liberal, too dogmatic or too malleable. It’s that giant, for-profit tech companies, as currently constructed, are simply not suited to the task of deciding unilaterally what speech is acceptable and what isn’t.

On top of this is the fact that these are global platforms struggling to satisfy laws and regulations in an extremely wide range of contexts. Political, cultural, religious, ethnic and racial differences make it nearly impossible to avoid offending some segment of a potential audience that could reach into the billions. And too often it is the loudest voices, regardless of the strength of their argument, that get the most attention.

Again, according to Oremus…

In a good legal system, decisions may be controversial, but at least the rationale is clearly laid out, and there’s a body of case law to serve as context. But when Facebook decides to de-amplify a doctored video of House Speaker Nancy Pelosi, or YouTube opts to demonetize Steven Crowder’s channel, there’s no way to check whether those decisions are consistent with the way they’ve interpreted their rules in the past, and there’s no clear, codified way to appeal those decisions.

So, if you’re easily offended by either gritty content or over-aggressive thought policing…or if you’re counting on a social media platform to help you build an audience and then treat you fairly when you want to push the envelope, remember…you get what you pay for.

*For more about the distinction between government and private industry and the legality of regulating speech, see this article by Reuters.