Facebook Inc. said it labeled 167 million user posts this year for including information about COVID-19 that was “debunked” by the social network’s fact-checkers.
The warning labels have been applied since March to posts containing coronavirus falsehoods that violated the social network’s misinformation policies, executives said Thursday. An additional 12 million posts were removed entirely from Facebook and Instagram for COVID-19 misinformation that the company believed could lead to imminent physical harm.
The numbers were disclosed Thursday alongside Facebook’s quarterly Community Standards report, which details how much content the company removes for rule violations. Misinformation appears to be Facebook’s most widespread problem, though the company removed or labeled millions of posts in other categories as well, including bullying, harassment and spam.
Facebook also disclosed the prevalence of hate speech on its service for the first time, estimating that hate speech accounts for 0.11 per cent of all content views. The company is often criticized for its handling of hate speech and other offensive content, but it said it removed or labeled 22.1 million posts for hate speech in the third quarter, and detected roughly 95 per cent of those posts before users reported them.