How content is moderated depends largely on the platform and the type of content involved. In voice-based communities like Clubhouse, speech is ephemeral: as soon as something is said, it's gone. Text-based chats, by contrast, leave a persistent record, so moderation rests on the premise that there is something to remove.
Let's take a look at a few popular social media content moderation techniques.
Automated moderation
AI-based content moderation services are very popular with social media platforms these days. These services can train AI, based on your custom rules engine, to help autonomously identify content that might violate your app's guidelines. This approach is often used in combination with manual moderation to improve reliability and overall response time.
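As a rough illustration, here is a minimal Python sketch of how such a pipeline might combine a custom rules engine with an AI classifier. The `toxicity_score` function, the banned patterns, and the thresholds are all hypothetical placeholders, not any specific vendor's API.

```python
# Minimal sketch of automated moderation: a custom rules engine plus a
# (hypothetical) AI classifier score decide what happens to each post.
import re

# Example rules; a real rules engine would load these from configuration.
BANNED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bbuy followers\b", r"\bfree crypto\b")
]

def toxicity_score(text: str) -> float:
    """Placeholder for a third-party AI classifier; returns a value in [0, 1]."""
    return 0.0  # stubbed out for the sketch

def moderate(text: str) -> str:
    # Hard rules first: an explicit match is rejected outright.
    if any(p.search(text) for p in BANNED_PATTERNS):
        return "reject"
    # Then the AI score, with thresholds tuned per app.
    score = toxicity_score(text)
    if score > 0.9:
        return "reject"
    if score > 0.5:
        return "send_to_human_review"  # manual moderation backstop
    return "approve"
```

The deterministic rules catch known-bad patterns, while the classifier score routes uncertain cases to a human, mirroring the automated-plus-manual combination described above.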
Social apps like Facebook and dating apps such as Tinder, where the risk of unwanted or harmful content is ever-present, use user data to train their AI and improve automated moderation.
Despite significant improvements, AI-based automation still has a long way to go. At CitrusBits, we have worked with third-party AI-based content moderation tools like Hive and live streaming solutions like Agora to build robust and scalable social audio and video apps. Contact us if you'd like to discuss your social audio or video app venture.
Human-based approach
Voice is trickier to moderate than text or visuals, and real-time audio and video streaming is trickier still. Social audio streaming apps like Clubhouse are complex by nature and therefore have to rely on human-based moderation. For instance, Erupt, the video news and debate app CitrusBits created for a team headed by Emmy-winning film producer Edward Walson and former ABC News/GMA producer Bryan Keinz, required a human-based moderation approach. Human moderation suits these kinds of social apps well because of their live-streaming nature.
This approach is further divided into pre-moderation and post-moderation. With pre-moderation, all user-submitted content is reviewed before it goes live on a site or app. A moderator working in the content management system (CMS) evaluates each piece of content and decides whether to publish, reject, or amend it based on the site's criteria.
This also means pre-moderation rules out real-time posting, a major reason some companies are reluctant to adopt it.
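To make the flow concrete, here is a minimal sketch of pre-moderation in Python, assuming a simple in-memory queue and hypothetical helper names: nothing reaches the live site until a moderator has acted.

```python
# Minimal sketch of pre-moderation: user submissions wait in a queue
# and nothing is published without an explicit moderator decision.
from collections import deque

review_queue: deque = deque()
live_posts: list = []

def submit(content: str) -> None:
    # User-submitted content is held for review, never published directly.
    review_queue.append(content)

def moderate_next(decision: str, amended: str = "") -> None:
    """A moderator publishes, rejects, or amends the oldest submission."""
    content = review_queue.popleft()
    if decision == "publish":
        live_posts.append(content)
    elif decision == "amend":
        live_posts.append(amended or content)
    # "reject" simply drops the content without publishing it.

submit("hello world")
moderate_next("publish")
assert live_posts == ["hello world"]
```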
Post-moderation, on the other hand, publishes content right away while duplicating it into a queue so that a moderator can examine it after it goes live; any harmful content is therefore visible on your app or website until it is caught. In response to users' demand for instant posting, dating apps, some social sites, and other platforms frequently use post-moderation.
This approach comes with significant risks. Dating sites, for example, receive tens of millions of photographs every day, and companies take a major risk by letting those photographs go live before vetting them. Even if offending content is eventually removed, it is often too late: users have already screenshotted and shared it.
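For contrast with the pre-moderation sketch above, here is the same toy model adapted to post-moderation; again, the queue and helper names are illustrative only.

```python
# Minimal sketch of post-moderation: content goes live immediately and
# is duplicated into a queue for after-the-fact review.
from collections import deque

live_posts: dict = {}
review_queue: deque = deque()
_next_id = 0

def submit(content: str) -> int:
    global _next_id
    post_id = _next_id
    _next_id += 1
    live_posts[post_id] = content   # visible to users right away
    review_queue.append(post_id)    # a copy is queued for a moderator
    return post_id

def review_next(is_harmful: bool) -> None:
    """A moderator reviews the oldest live post and may take it down."""
    post_id = review_queue.popleft()
    if is_harmful:
        # Removal happens only after exposure, which may already be too late.
        live_posts.pop(post_id, None)
```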
The hybrid approach
This is the approach most mainstream social media platforms like Facebook, Twitter, and TikTok use at the moment. AI screens every submission before it goes live: blatantly objectionable content (such as hate symbols or obscene gestures) is detected and rejected outright, while content with a low probability of being unsuitable is published right away. Anything in between, where the AI cannot confidently judge compliance with the guidelines and policies, is held back for a human team to examine a few minutes later for a more nuanced and thorough check.
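A minimal sketch of that triage logic, assuming a hypothetical `violation_probability` model and arbitrary thresholds, might look like this:

```python
# Minimal sketch of hybrid moderation: an AI model triages every
# submission before it goes live.

REJECT_THRESHOLD = 0.95   # near-certain violations never go live
APPROVE_THRESHOLD = 0.10  # low-risk content is published immediately

def violation_probability(content: str) -> float:
    """Placeholder for a real AI moderation model; returns a value in [0, 1]."""
    return 0.0  # stubbed out for the sketch

def triage(content: str) -> str:
    p = violation_probability(content)
    if p >= REJECT_THRESHOLD:
        return "reject"                  # blatantly objectionable content
    if p <= APPROVE_THRESHOLD:
        return "publish"                 # goes live right away
    return "hold_for_human_review"       # the uncertain middle band
```

The two thresholds carve the score range into auto-reject, auto-publish, and a middle band for human review; tuning them is where each platform balances safety against posting speed.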