(Bloomberg) -- Big technology platforms are calling on the European Union to design new rules that would protect them from legal liabilities for actively removing hate speech and other illegal or harmful content.

Such a safeguard would result in “better quality content moderation” by incentivizing platforms to remove bad content while protecting free expression, Edima, an association representing Facebook Inc., ByteDance Ltd.-owned TikTok, Alphabet Inc.’s Google and others, said in a paper due to be published Monday. The association said it would send the document to officials in the European Commission, Parliament and Council.

The call comes as the European Commission, the bloc’s executive body, prepares digital policy measures aimed at giving platforms greater responsibility for what users post on their sites. The aim is to curtail the spread of harmful content and illegal products, such as unsafe toys.

Platforms like Facebook and YouTube have come under intense scrutiny in recent years for failing to do enough to police activity such as hate speech that’s blamed for inciting violence in places like Myanmar, or for letting Russians spread disinformation to influence the 2016 U.S. presidential election and the U.K.’s Brexit vote.

“All of our members take their responsibility very seriously and want to do more to tackle illegal content and activity online,” said Siada El Ramly, director general of Edima. “A European legal safeguard for service providers would give them the leeway to use their resources and technology in creative ways in order to do so.”

The new rules would update longstanding legal protections for internet firms, which shield platforms from liability for what’s posted on their sites unless they have actual knowledge of its presence, for instance if a user flags it as harmful. Once platforms are made aware of illegal content, they’re obliged to act fast to remove it.

Actual Knowledge

Varied case law and a lack of clarity over what counts as “actual knowledge” have prevented platforms from dealing more proactively with bad content, for fear of facing legal repercussions for hosting it, El Ramly said.

Tech firms fear that by voluntarily removing content, such as by using algorithms or other systems to detect infringements, they could be deemed to have actual knowledge of it, and therefore be liable for hosting the bad posts.

Even under the new rules, providers should still be held accountable for inaction if they receive a substantiated notification of a specific illegality, the association said.

©2020 Bloomberg L.P.