
Internal bug ‘promoted problematic content’ on Facebook

But according to parent company Meta, the bug affected 'only a very small number of views' of content.

AFP
Under its fact checking programme, Facebook pays to use fact checks from around 80 organisations, including media outlets and specialised fact checkers, on its platform, WhatsApp and on Instagram. Photo: Pexels

Content identified as misleading or problematic was mistakenly prioritised in users’ Facebook feeds recently, thanks to a software bug that took six months to fix, according to tech site The Verge.

Facebook disputed the report, which was published Thursday, saying it “vastly overstated what this bug was because ultimately it had no meaningful, long-term impact on problematic content,” according to Joe Osborne, a spokesman for parent company Meta.

But the bug was serious enough for a group of Facebook employees to draft an internal report referring to a “massive ranking failure” of content, The Verge reported.

In October, the employees noticed that some content which had been marked as questionable by external media – members of Facebook’s third-party fact-checking programme – was nevertheless being favoured by the algorithm to be widely distributed in users’ News Feeds.

“Unable to find the root cause, the engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11,” The Verge reported.

But according to Osborne, the bug affected “only a very small number of views” of content.

That’s because “the overwhelming majority of posts in Feed are not eligible to be down-ranked in the first place,” Osborne explained, adding that other mechanisms designed to limit views of “harmful” content remained in place, “including other demotions, fact-checking labels and violating content removals.”

AFP currently works with Facebook’s fact checking programme in more than 80 countries and 24 languages. Under the programme, which started in December 2016, Facebook pays to use fact checks from around 80 organisations, including media outlets and specialised fact checkers, on its platform, WhatsApp and on Instagram.

Content rated “false” is downgraded in news feeds so fewer people will see it. If someone tries to share that post, they are presented with an article explaining why it is misleading.

Those who still choose to share the post receive a notification with a link to the article. No posts are taken down. Fact checkers are free to choose how and what they wish to investigate.