Facebook expands efforts to halt spread of false information

Apr 10, 2019


Facebook Inc. (FB.O) said it is rolling out a series of new and expanded measures to rein in the spread of misinformation across its websites and apps, amid heightened global scrutiny of social networks’ efforts to remove false and violent content.

The company said Wednesday that the Associated Press will expand its role as part of Facebook’s third-party fact-checking program. Facebook also will reduce the reach of groups that repeatedly share misinformation, such as anti-vaccine views; make group administrators more accountable for violating content standards; and allow people to remove posts and comments from Facebook groups even after they’re no longer members.

Facebook executives have said for years that they are uncomfortable deciding what is true and what is false. Under pressure from critics and lawmakers in the U.S. and elsewhere, especially since the flood of misinformation during the 2016 U.S. presidential campaign, the social-media company, with its two billion users, has been altering its algorithms and adding human moderators to combat false, extreme and violent content.

“There simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time,” Guy Rosen, Facebook’s vice-president of integrity, and Tessa Lyons, head of news feed integrity, wrote in a blog post. “We’re going to build on those explorations, continuing to consult a wide range of academics, fact-checking experts, journalists, survey researchers and civil society organizations to understand the benefits and risks of ideas like this.”

While Facebook has updated its policies and enforcement efforts, content that violates the company’s standards persists. Most recently, the social network was criticized for failing to quickly remove the live-streamed video of the mass shooting in Christchurch, New Zealand.

The 2020 U.S. elections will be a test of the new efforts, which come after Kremlin-linked trolls used the platform in the run-up to voting in 2016 and 2018. The scope of election-integrity problems is “vast,” ranging from misinformation designed to suppress voter turnout to sophisticated activity “trying to strategically manipulate discourse on our platforms,” said Samidh Chakrabarti, a product management director at Facebook.

Facebook is also looking to crack down on fake accounts run by humans. “The biggest change since 2016 is that we’ve been tuning our machine learning systems to be able to detect these manually created fake accounts,” Chakrabarti said, adding that the platform removes millions of accounts — run by both bots and humans — each day.

The Menlo Park, California-based company has made progress in detecting and removing misinformation designed to suppress the vote — content ranging from false claims that U.S. Immigration and Customs Enforcement agents were monitoring the polls to the common tactic of misleading voters about the date of an election. Facebook removed 45,000 pieces of voter-suppression content in the month leading up to the 2018 U.S. midterm elections, 90 percent of which was detected before users reported it.

"We continue to see that the vast majority of misinformation around elections is financially motivated," said Chakrabarti. As a result, efforts to remove clickbait benefit election integrity, he said.