The leaders of big U.S. technology companies came to testify about legislation that shaped the modern internet. They ended up being accused by senators of abusing their power over political speech six days before the election.

“Who the hell elected you and put you in charge of what the media are allowed to report?” Senator Ted Cruz, a Republican from Texas, asked Twitter Inc. Chief Executive Officer Jack Dorsey in a congressional hearing on Wednesday.

While the fiercest attacks came from Republicans, Democrats on the Senate Commerce Committee also questioned Dorsey, Google’s Sundar Pichai and Facebook Inc.’s Mark Zuckerberg to determine whether Section 230 of the Communications Decency Act needs to be updated.

The rule shields online platforms from legal liability for most content their users post. It has come under intense scrutiny after Facebook and Twitter recently limited the online reach of a New York Post story about the family of former Vice President and Democratic presidential candidate Joe Biden, prompting claims of bias and censorship. Twitter backtracked, but the episode fueled tense exchanges during Wednesday’s hearing.

“The time has come for that free pass to end,” said Senator Roger Wicker, a Mississippi Republican who chairs the panel, referring to Section 230. The questions devolved into partisan bickering. Republicans criticized tech companies’ moderation of U.S. President Donald Trump’s posts, while Democrats said they feared the hearing was a Republican attempt to influence the CEOs days before the election.

“They seem to want to bully and browbeat the platforms here to try and tilt them in President Trump’s favor,” Senator Richard Blumenthal, a Democrat from Connecticut, said. “The timing seems inexplicable except to try and game the refs.”

The tech executives, who all appeared remotely, began their testimony by explaining the importance of the rule for building their businesses. Dorsey called Section 230 “the internet’s most important law for free speech and safety,” and argued that repealing it would lead to more policing of content, not less.

Dorsey was questioned about what Wicker called Twitter’s “double standard” for labeling posts from different world leaders; the senator said he has collected dozens of examples of the company applying its policies unequally. The CEO said real-world harm is one of the factors Twitter considers when deciding whether to put a warning on a specific tweet.

Section 230, passed into law as part of the CDA in 1996, lets internet companies wait to moderate users’ speech until after it is posted online, helping their platforms to grow unencumbered by constant legal challenges. Google’s YouTube video-sharing site doesn’t have to pre-screen the millions of videos uploaded daily, and Facebook doesn’t have to read every comment; they can let user posts flow freely and clean them up later if something bad happens.

“Our ability to provide access to a wide range of information is only possible because of existing legal frameworks,” Pichai said.

The hearing eventually turned to the actual language of Section 230, which grants platforms the ability to remove content they deem lewd, harassing or “otherwise objectionable,” among other criteria, from their services as long as they act “in good faith.” GOP lawmakers have said the language is too vague and protects the removal of political discourse.

Senator Shelley Moore Capito, a Republican from West Virginia, asked the executives how they define the phrase “otherwise objectionable.” Multiple Republican lawmakers have introduced bills seeking to narrow the phrase so that companies could remove only particular categories of content, such as posts promoting terrorism or self-harm.

Zuckerberg argued that the current language enables companies like Facebook to capture more content that might be bullying or harassment, and Pichai said companies need flexibility. Many of Facebook’s and Twitter’s rules, for example, are worded in ways that give the companies more leeway to address new and unexpected issues.

Capito also questioned Dorsey and Zuckerberg’s argument that repealing Section 230 would hurt startups. “How many small innovators and what kind of market share could they possibly have when we see the dominance of the three of you?” she challenged.

Zuckerberg said that Section 230 was instrumental when he started Facebook. “If we were subject to a larger number of content lawsuits because 230 didn’t exist, that would have likely made it prohibitive for me as a college student in a dorm room to get started with this enterprise,” he said.

Next week’s U.S. presidential election was a thread throughout the hearing, often invoked in examples of good and bad content moderation. Senator Tom Udall, a Democrat from New Mexico, asked all three executives whether Russia and other foreign countries continue to try to use their platforms to influence the election. All three CEOs said yes.

Twitter and Facebook have both recently removed networks of accounts originating in Iran and Russia. Facebook now makes monthly announcements about the networks it removes, with its latest update on Tuesday.

One idea that has been floated is to create a single system for moderating all the platforms together, ensuring they all play by the same rules. But the three tech companies have no incentive to create a shared system for identifying and moderating harmful information, said Sridhar Ramaswamy, the former head of the advertising business at Alphabet Inc.-owned Google, in a recent interview.

He argued that any new system shouldn’t be restricted to the largest tech companies, but should be deployed for the internet more broadly. “An outcome that gives any more responsibility to small teams is not a great outcome,” said Ramaswamy, who now runs the startup Neeva. “Because that then becomes a massive moat that no one else can cross.”