(Bloomberg) -- Facebook Inc. removed tens of millions of user posts in the past six months for violating its terms of service regarding child pornography, drug sales and terrorism. Millions more were removed from Instagram.

That’s according to a twice-yearly report released Wednesday by Facebook that details how the social media company enforces its own content policies. The report, which for the first time includes data from Instagram, said that Facebook automatically identifies most of the content it removes using its own software algorithms.

The numbers provide a reminder of the scale at which Facebook operates. Some of the highlights from the report:

  • Facebook removed 11.6 million pieces of content related to child pornography in the quarter that ended in September. Facebook says its algorithms identified 99% of that content. Instagram removed another 754,000 pieces of content, with an automatic detection rate of just under 95%. By comparison, Facebook removed just 5.8 million pieces of content related to child pornography or exploitation in the first quarter.
  • Facebook removed 4.4 million pieces of content related to drug sales in the third quarter, and another 2.3 million related to firearm sales. That was up from 841,000 and 609,000 pieces, respectively, six months earlier.
  • Facebook said that 80% of the hate speech it removed from the service in the third quarter was detected by its software systems, up from 68% of the hate speech removed in the first three months of the year.
  • Terrorism content is slightly harder to identify on Instagram than on Facebook. Facebook proactively identified 98.5% of all terrorism content, including 99% of content related to Al Qaeda and ISIS. Instagram removed 92.2% of terrorism content using software algorithms.

To contact the reporter on this story: Kurt Wagner in San Francisco at kwagner71@bloomberg.net

To contact the editors responsible for this story: Jillian Ward at jward56@bloomberg.net, Molly Schuetz

©2019 Bloomberg L.P.