Covid-19 slowed Facebook’s moderation for suicide, self-injury and child exploitation content

Facebook co-founder, Chairman and CEO Mark Zuckerberg testifies before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill April 11, 2018 in Washington, DC.

Yasin Ozturk | Anadolu Agency | Getty Images

Facebook on Tuesday disclosed that its ability to moderate content involving suicide, self-injury and child exploitation was hampered by the coronavirus pandemic from April through June.

Facebook said it was also unable to measure how prevalent violent and graphic content, and adult nudity and sexual activity, were on its services during this period. The number of content appeals Facebook was able to review was “also much lower.”

The company, which relies on a combination of artificial intelligence and human reviewers for content moderation, operated with fewer human moderators throughout the early months of quarantine. That reduced staffing limited the amount of content it was able to take action on, the company said in the latest version of its Community Standards Enforcement Report.

“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram,” the company said in a blog post. “Despite these decreases, we prioritized and took action on the most harmful content within these categories. Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible.  

“Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology.”

Despite the pandemic’s constraints on its human moderators, Facebook said it was able to improve in other areas through its AI technology. Specifically, the company said it raised its proactive detection rates for hate speech, terrorism content, and bullying and harassment.

The company said many of its human reviewers are now back online, moderating content from their homes.

“As the COVID-19 pandemic evolves, we’ll continue adapting our content review process and working to improve our technology and bring more reviewers back online,” the company said in a statement. 

Facebook CEO Mark Zuckerberg had warned in May that the company’s ability to properly moderate content had been affected by Covid-19.
