14.1. What Content Gets Moderated
Social media platforms moderate (that is, ban, delete, or hide) different kinds of content. There are a number of categories of content that they might ban:
14.1.1. Quality Control
In order to make social media sites usable and interesting to users, they may ban different types of content, such as advertisements, disinformation, or off-topic posts. Almost all social media sites (even the ones that claim “free speech”) block spam [n1]: mass-produced unsolicited messages, generally advertisements, scams, or trolling.
Without quality control moderation, the social media site will likely fill up with content that the site’s target users don’t want, and those users will leave. What counts as “quality” content varies by site: 4chan considers a lot of offensive and trolling content to be “quality” but still bans spam (because it would make the site repetitive in a boring way), while most sites would ban at least some offensive content.
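To give a rough sense of what the simplest automated quality-control moderation might look like, here is a minimal Python sketch of a keyword- and repetition-based spam check. The phrases, threshold, and function name (`looks_like_spam`) are invented for this illustration; real platforms rely on far more sophisticated signals (account history, posting rate, machine learning models, user reports, etc.).

```python
# A minimal sketch of naive, keyword- and repetition-based spam detection.
# The phrase list and threshold are made up for this example.

spam_phrases = ["buy now", "free money", "click this link", "limited offer"]

def looks_like_spam(post_text, recent_posts):
    text = post_text.lower()

    # Flag posts containing known spammy phrases
    if any(phrase in text for phrase in spam_phrases):
        return True

    # Flag mass-produced repetition: the same text posted many times recently
    if recent_posts.count(post_text) >= 3:
        return True

    return False

# Example use with hypothetical posts:
recent = ["buy now!! cheap watches"] * 3
print(looks_like_spam("buy now!! cheap watches", recent))        # True
print(looks_like_spam("Check out this photo of my cat", recent)) # False
```

Even this toy version shows the basic trade-off of quality-control moderation: rules strict enough to catch spam will sometimes also catch legitimate posts.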
14.1.2. Legal Concerns
Social media sites also might run into legal trouble if they allow some content to remain on their sites, such as copyrighted material (like movie clips) or child sexual abuse material (CSAM).
So most social media sites have rules about content moderation and at least put on the appearance of trying to stop illegal content (though a few try to move to countries that won’t get them in trouble, like 8kun being hosted in Russia).
With copyrighted content, YouTube is very aggressive in letting movie studios get videos taken down, so many YouTube content creators have had their videos taken down erroneously [n2].
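To see why automated copyright enforcement can take down videos erroneously, here is a toy Python sketch of matching uploads against a database of studio-submitted clips. This is not how YouTube’s actual Content ID system works (it uses perceptual audio/video fingerprints, which we crudely approximate here with an exact `hashlib` hash); the database contents and function names are invented for the example.

```python
# A toy sketch of automated copyright matching. A match triggers removal
# automatically, with no human judgment about fair use or mistaken entries.

import hashlib

def fingerprint(video_bytes):
    # Stand-in for a real perceptual fingerprint (here just an exact hash)
    return hashlib.sha256(video_bytes).hexdigest()

# Hypothetical database of fingerprints submitted by copyright holders
studio_clip = b"...bytes of a studio-owned movie clip..."
copyrighted_fingerprints = {fingerprint(studio_clip): "Example Studio movie clip"}

def check_upload(video_bytes):
    match = copyrighted_fingerprints.get(fingerprint(video_bytes))
    if match:
        return "taken down: matches " + match
    return "allowed"

print(check_upload(studio_clip))                      # taken down
print(check_upload(b"an original video of my cat"))   # allowed
```

Because the system removes anything that matches, a studio adding an overly broad entry (or a legitimate reviewer quoting a short clip) can result in takedowns of videos that were legal to post.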
14.1.3. Safety
Another concern is the safety of the users on the social media platform (or at least the users that the platform cares about). Users who don’t feel safe will leave the platform, so social media companies are incentivized to help their users feel safe. This often means moderation to stop trolling and harassment.
14.1.4. Potentially Offensive
Another category is content that users or advertisers might find offensive. If users see things that offend them too often, they might leave the site, and if advertisers see their ads next to too much offensive content, they might stop paying for ads on the site. So platforms might put limits on language (e.g., racial slurs), violence, sex, and nudity. Sometimes different users or advertisers have different opinions on what should be allowed or not. For example, “The porn ban of 2018 was a defining event for Tumblr that led to a 30 percent drop in traffic and a mass exodus of users that blindsided the company” [n3].
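As a minimal illustration of how the same post can be acceptable to one audience but not another, here is a Python sketch of a word-list filter with different strictness settings. The word lists, setting names, and function are all invented for this example; real moderation systems for offensive content are far more complicated and contested.

```python
# A minimal sketch of applying different standards of "offensive"
# for different audiences (e.g., advertisers vs. ordinary users).

mild_words = {"damn"}
strong_words = {"badword1", "badword2"}  # placeholders for actual slurs

def is_hidden(post_text, audience_setting):
    words = set(post_text.lower().split())
    if audience_setting == "strict":      # e.g., brand-safe ad placements
        return bool(words & (mild_words | strong_words))
    elif audience_setting == "moderate":  # e.g., a default user setting
        return bool(words & strong_words)
    return False                          # "off": show everything

post = "that was a damn good movie"
print(is_hidden(post, "strict"))    # True: hidden next to ads
print(is_hidden(post, "moderate"))  # False: shown to most users
```

The point of the different settings is that “offensive” is not one fixed line: a platform may hide a post from ad placements while still showing it to users, and different users and advertisers will disagree about where those lines should be, as the Tumblr example shows.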