
May 23, 2019

Facebook removes a record 2.2 billion fake accounts



Facebook Inc. said it removed 2.2 billion fake accounts in the first quarter, a record that shows how the company is battling an avalanche of bad actors trying to undermine the authenticity of the world’s largest social network.

In the final quarter of 2018, Facebook disabled just over 1 billion fake accounts; in the first quarter of last year, it disabled 583 million. The vast majority are removed within minutes of being created, the company said, so they're not counted in Facebook's closely watched monthly and daily active user metrics.

“The larger quantities of fake accounts are driven by spammers who are constantly trying to evade our systems,” Guy Rosen, Facebook’s vice president of integrity, told reporters Thursday. Rosen didn’t attribute the spam accounts to any specific group or entity.

Facebook also shared a new metric in Thursday’s report: the number of removed posts that promoted or engaged in drug and firearm sales. Facebook pulled more than 1.5 million posts in these categories in the first three months of this year, and said it would eventually like to expand the report to include other types of illegal activity.



Facebook Chief Executive Officer Mark Zuckerberg said the company has vastly increased its spending to police its products.

“The amount of capital that we are able to invest in all of the safety systems that go into what we are talking about today -- our budget in 2019 is greater than the whole revenue of our company in the year before we went public in 2012,” he said.

Thursday’s report is a striking reminder of the scale at which Facebook operates -- and the size of its problems. The company has been under constant criticism over its content policies and its efforts to detect fake accounts since the 2016 U.S. presidential election, when Russia used the social network to try to sway voters.

Facebook has promised repeatedly to be better at detecting and removing posts that violate its policies, and has pledged that artificial intelligence programs would be at the center of those efforts. Facebook said it’s simply too big to monitor everything on its service with humans alone. It may soon run into some trouble with this approach. Zuckerberg has said the company will increase the privacy and encryption of its products, a move that will make it harder for Facebook to find and remove content.

Zuckerberg admitted the privacy push will have “tradeoffs.”

“We recognize that it’s going to be harder to find all of the different types of harmful content,” he said Thursday. “We’ll be fighting that battle without one of the very important tools, which is, of course, being able to look at the content itself. It’s not clear on a lot of these fronts that we’re going to be able to do as good of a job on identifying harmful content as we can today with that tool.”

The new statistics come via the social-media giant’s third content transparency report, a twice-yearly document that outlines Facebook’s efforts to remove posts and accounts that violate its policies.

Facebook’s AI algorithms work well for some categories, such as graphic and violent content: the company detects almost 99 per cent of the graphic and violent posts it removes before a user reports them. But the algorithms are far from perfect. Facebook still can’t consistently detect graphic or violent content in live videos, for example, a blind spot that allowed a shooter to broadcast his killing spree at a New Zealand mosque earlier this year.

The software also hasn’t worked as well for more nuanced categories, like hate speech, where context around user relationships and language can be a big factor. 

Still, Facebook says it’s getting better. Over the past six months, 65 per cent of the posts Facebook removed for pushing hate speech were automatically detected. A year ago, that number was just 38 per cent.

Zuckerberg also dismissed repeated calls to break up Facebook, which have come from numerous presidential hopefuls and even the company’s co-founder.

Facebook, he said, repeating an argument he has made many times before, has a lot of competition when it comes to communication products. Zuckerberg said Facebook’s success enables the company to spend more on safety and security efforts than almost all of its peers, many of which are dealing with similar issues.

“I think the amount of our budget that goes toward our safety systems – I believe is greater than Twitter’s whole revenue this year,” he said. “So we’re able to do things that I think are just not possible for other folks to do.”