What’s happening
Facebook parent company Meta revealed in a quarterly report that its media-matching technology had a bug that was later fixed.
Why it matters
The social network said the bug led to content that didn't violate its rules being mistakenly pulled down.
Facebook parent company Meta said Tuesday that a bug resulted in content getting mistakenly pulled down in the first three months of this year. The social media giant said it fixed the problem and restored posts that were incorrectly flagged for violating its rules, including those against terrorism and organized hate.
Facebook took action against 2.5 million pieces of content flagged for organized hate in the first quarter, up from 1.6 million in the fourth quarter of 2021. The social network also took action against 16 million pieces of terrorism content in the first quarter, more than double the 7.7 million in the fourth quarter. Meta attributed the spike to a bug in its media-matching technology. A graph in the company's quarterly standards enforcement report showed that the social network restored more than 400,000 pieces of content mistakenly flagged for terrorism.
Meta's photo-and-video service Instagram also took action against more terrorism and organized hate content because of the bug. The error also affected content flagged for suicide and self-injury: because of the issue, Facebook restored 345,600 pieces of such content in the first quarter, up from 95,300 in the fourth quarter, the report said.
The errors raise questions about the accuracy of the data Meta provides quarterly about content moderation and how well its automated technology works. The company said it has been taking more steps to prevent content moderation errors from happening. Meta is testing new AI technology that learns from appeals and from content that's restored. It's also experimenting with giving people more advance warning before the social network penalizes them for rule violations, Meta Vice President of Integrity Guy Rosen said in a press call Tuesday.
Rosen said that when a false positive gets fed into the company's media-matching technology, it will "fan out" and pull down a large amount of content that doesn't violate the platform's rules.
“We have to be very diligent about the so-called seeds that go into the system before that fan out occurs. What we had in this case is introduction of some new technology, which introduced some false positives into the system,” he said, adding that content was later restored.
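Meta hasn't published the internals of its media-matching systems, but a minimal sketch (in Python, with invented names and toy data, not Meta's actual code) can illustrate the fan-out Rosen described: once a benign clip's fingerprint lands in the bank of "seed" hashes, every re-share that matches it gets pulled down automatically.

import hashlib

def fingerprint(media_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; real systems use fuzzier matching.
    return hashlib.sha256(media_bytes).hexdigest()

# Seed bank of fingerprints believed to belong to violating content.
seed_bank = {fingerprint(b"known violating video")}

# A benign clip mistakenly added as a seed (the false positive).
benign_clip = b"harmless news clip"
seed_bank.add(fingerprint(benign_clip))

# The same benign clip re-shared by many users across the platform.
feed = [{"id": i, "media": benign_clip} for i in range(10_000)]

# Every matching post is removed, so one bad seed fans out widely.
removed = [post["id"] for post in feed if fingerprint(post["media"]) in seed_bank]
print(f"Posts pulled down from one bad seed: {len(removed)}")  # 10000

The point of the sketch is only the multiplier effect: a single bad seed, once matched against the whole feed, takes down every copy of content that never violated the rules, which is why the seeds themselves require so much diligence.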
At the same time, Facebook is facing scrutiny for not removing terrorism content before it goes viral. Over the weekend, video livestreamed on Twitch by the white gunman accused of killing 10 Black people at a Buffalo grocery store spread to social networks such as Facebook and Twitter. The Washington Post reported that one copy surfaced on Facebook, where it was shared more than 46,000 times and received more than 500 comments. Facebook didn't remove the video for more than 10 hours.
Rosen said that once the company became aware of the shooting, employees quickly designated the event as a terrorist attack and removed copies of the video and the shooter's 180-page hate-filled rant.
One of the challenges, he said, is that people create new versions of the video, or links to it, to try to evade enforcement by social media platforms. As with any such incident, the company will refine its systems to detect violating content more quickly in the future, he said. Rosen added that he didn't have more details to share about what specific steps Facebook is considering.