Facebook has been using AI in its content moderation efforts for some time. The company recently announced a new approach that uses machine learning to streamline the moderation process.
Previously, reported posts were reviewed largely in chronological order, which often delayed responses to the most urgent cases. The new system uses multiple machine learning models to prioritize flagged content by its virality, its severity, and the likelihood that it violates the platform's rules.
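Facebook has not published how its models weigh these signals, so the following is only a minimal sketch of the general idea: score each reported post on the three signals, then serve moderators a priority queue ordered by that score rather than by report time. The weights, signal ranges, and function names here are illustrative assumptions, not Facebook's actual implementation.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedPost:
    # Negated score, because heapq is a min-heap and we want the
    # highest-priority post to come out first.
    neg_score: float
    post_id: str = field(compare=False)

def priority_score(virality: float, severity: float, violation_prob: float) -> float:
    """Blend the three signals (each assumed to be in [0, 1]) into one score.

    These weights are made up for illustration; the real system's
    blending is not public.
    """
    return 0.3 * virality + 0.5 * severity + 0.2 * violation_prob

def build_review_queue(reports):
    """Order reported posts by estimated harm instead of report time."""
    queue = []
    for post_id, virality, severity, violation_prob in reports:
        score = priority_score(virality, severity, violation_prob)
        heapq.heappush(queue, FlaggedPost(-score, post_id))
    return queue

# Hypothetical reports: (post_id, virality, severity, violation probability).
reports = [
    ("post_a", 0.9, 0.20, 0.4),   # spreading fast, but probably mild
    ("post_b", 0.1, 0.95, 0.9),   # low reach, likely a severe violation
    ("post_c", 0.5, 0.50, 0.5),
]

queue = build_review_queue(reports)
while queue:
    item = heapq.heappop(queue)
    print(item.post_id, round(-item.neg_score, 2))
# post_b is reviewed first despite being reported with the others,
# because severity outweighs raw reach in this toy weighting.
```

Under chronological ordering, post_b would have waited its turn; under score ordering it jumps to the front, which is the behavioral change the announcement describes.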
In the first quarter of 2020 alone, Facebook took action against 9.6 million pieces of content for violating its hate speech rules, up from 5.7 million in the previous quarter. While some violations can be blocked or removed automatically, others enter a queue for evaluation by human moderators. Reviewing this material can take a serious toll on moderators' mental health, which led earlier this year to a $52 million settlement with approximately 11,000 current and former moderators. As part of that response, Facebook has committed to improving its moderation tools, including muting audio by default and displaying videos in black and white to reduce moderators' exposure to disturbing content.
As Facebook remains one of the primary platforms for global communication, its ability to manage fake news and hateful content effectively is essential to maintaining a safe environment for users.