The chief executive officers of Facebook Inc., Twitter Inc. and Google parent Alphabet Inc. faced a fresh round of questioning in the U.S. House over how they police falsehoods on their internet services, with lawmakers focusing on misleading information on COVID-19, vaccines and the election.

Facebook’s Mark Zuckerberg, Twitter’s Jack Dorsey and Google’s Sundar Pichai appeared on Thursday to answer renewed inquiries about content-moderation policies from members of two House Energy and Commerce subcommittees during a virtual hearing examining social media’s role in promoting extremism and disinformation. Some lawmakers also focused on the impact of the companies’ products on children.

“The witnesses here today have demonstrated time and again that promises to self-regulate don’t work,” said Jan Schakowsky, chair of the Consumer Protection and Commerce Subcommittee, in an opening statement. “They must be held accountable for allowing disinformation and misinformation to spread across their platforms, infect our public discourse, and threaten our democracy.”

While some lawmakers have been seeking tighter regulations of online content for years, pressure is increasing on tech companies to more aggressively curtail violent and misleading material on their platforms following the Jan. 6 riot at the U.S. Capitol, which left five people dead and dozens more injured.

“People died that day, and hundreds were seriously injured,” said Representative Mike Doyle, a Pennsylvania Democrat. “That attack and the movement that motivated it started and was nourished on your platforms. Your platforms suggested groups people should join, videos they should view, and posts they should like.”

Supporters of President Donald Trump used social media sites, particularly alternative platforms such as Parler and Gab, to organize the riot, which was staged in protest of Trump’s loss to Joe Biden in the November election.

In recent months, Democrats have also been pushing the tech giants to do more to rid their websites of conspiracy theories about COVID-19 and the vaccines against it.

Thursday’s hearing is likely to spark renewed debate in Washington over whether Congress should weaken or even revoke Section 230 of the Communications Decency Act of 1996, the decades-old legal shield that protects social media companies from liability for user-generated content posted on their sites.

While both parties have proposed bills to reform the law, they have sparred over how tech companies should change their content-moderation practices. Democrats want internet companies to do more to curb the spread of misinformation, hate speech and offensive content. Republicans have threatened to weaken the legal protection over unfounded accusations that social media firms systematically censor conservative viewpoints.

Representative Cathy McMorris Rodgers, a Washington Republican, criticized the power of tech companies’ algorithms to determine what children see online.

“Over 20 years ago, before we knew what Big Tech would become, Congress gave you liability protections. I want to know, why do you think you still deserve those protections today?” said McMorris Rodgers, the committee’s top Republican. “What will it take for your business model to stop harming children?”

The tech executives also differ in their support for making changes to the legal shield. In his prepared opening remarks, Zuckerberg said he supports making the liability protection conditional on having systems in place for identifying and removing unlawful material. Under Zuckerberg’s proposal, a third party would determine whether a company’s systems are adequate.

The legal shield “would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing -- sometimes for contradictory reasons -- that the law is doing more harm than good,” Zuckerberg, who runs the world’s largest social network, said in his written testimony.

He added that platforms “should not be held liable if a particular piece of content evades its detection -- that would be impractical for platforms with billions of posts per day.”

Google’s Pichai, whose company owns the most popular internet search engine, signaled that he is opposed to any changes to the law. Reforming it or repealing it altogether “would have unintended consequences -- harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges,” he said in prepared testimony.

Instead, Pichai wants companies to focus on “developing content policies that are clear and accessible,” such as notifying users if their work is removed and giving them ways to appeal such decisions.

Several bills being considered by Congress seek to weaken the legal shield in an effort to encourage the platforms to bolster their content moderation practices. Democratic senators, led by Mark Warner of Virginia, introduced the SAFE TECH Act, which would hold companies liable for content violating laws pertaining to civil rights, international human rights, stalking, harassment or intimidation.

And a bipartisan bill -- the PACT Act -- from Democratic Senator Brian Schatz of Hawaii and Republican Senator John Thune of South Dakota would require large tech companies to remove content within four days if notified by a court order that the content is illegal.