Despite efforts by major social media platforms like Facebook Inc. and Twitter Inc. to combat the spread of false information during the U.S. election campaign, some experts say these technology companies could have done more earlier on, and that they may still be ill-equipped for what might happen on Nov. 3.  

“The major social media firms are being shown up once again to not have done as much as they can,” says Dipayan Ghosh, co-director of the Digital Platforms & Democracy Project at the Harvard Kennedy School.

“We have heard reports of millions of people coming across misinformation and conspiracy theories over the major social media platforms that we all use – and that is just not good enough. The prevalence of political disinformation affects election results.”

Four years after the 2016 presidential election, social media companies have a better sense of how malicious actors can hijack their platforms, but they have not yet brought the problem under control, in part because far more information is circulating today, according to Ghosh.

“I think 2020 is even worse in terms of the volume and spread of misinformation than 2016 was,” he says.

One expert says social media companies haven't made the changes needed to stop the spread of misinformation (inaccurate information that is spread regardless of intent to mislead) and disinformation (information that is deliberately deceptive).

“There is little evidence that internet platforms have learned the right lessons. They appear to view criticism as a public relations problem, rather than a real issue that requires changes to their business practices,” says Roger McNamee, founding partner of the venture capital firm Elevation Partners and an early investor in Facebook.

Earlier this month, Facebook banned all QAnon accounts from its platform as the movement picked up steam on social media. In July, Twitter began removing thousands of accounts associated with the far-right conspiracy theory. In September, Twitter also removed approximately 130 accounts originating from Iran that were attempting to disrupt the public conversation during the first U.S. presidential debate. While these are substantial moves, McNamee believes they are only a temporary fix for a broader problem.

“The changes they have made are largely cosmetic,” says McNamee. “Harmful content such as hate speech, disinformation, and conspiracy theories is the lubricant for the business model of internet platforms. They do not want to change their business model for fear of losing profits and power.”

It is worth noting, however, that platforms like Facebook have made some key changes since 2016. These improvements include the launch of a third-party fact-checking program, stronger labeling of false content, strengthened voter-suppression policies, tighter restrictions on inflammatory content in ads, and a ban on ads that prematurely claim victory or attempt to delegitimize the 2020 election.

Philip Mai, co-director of the Social Media Lab at Ryerson University’s Ted Rogers School of Management, points to the power social media influencers can have when it comes to strengthening and spreading conspiracy theories.

A study Mai and a colleague conducted over the spring, examining a conspiracy theory that cast the COVID-19 pandemic as a hoax, determined that partisan influencers — particularly those on the extreme right — enabled the theory to pick up steam.

“Social media influencers might prove to be the misinformation Trojan horses in this election cycle, helping to launder and amplify dis- and misinformation,” he says.

So what can social media users expect in the lead-up to the election, on election day itself, and in the days that follow?

Mai notes that because COVID-19 is driving more Americans to vote by mail, one of the biggest concerns is disruption of the normal business of voting, with bad actors turning any clerical or mechanical mishap into supposed evidence of a widespread conspiracy.

“For example, I expect to see posts from anonymous accounts and from supporters of both candidates with supposed proof — pictures, videos — of how people’s votes are not being counted or are being stolen,” says Mai.

Additionally, he expects a flood of false information posted simply to sow confusion and chaos.

One way to deal with this is to roll out retroactive corrections for electoral misinformation, Mai suggests. Facebook and Twitter are already doing this for content related to COVID-19.

“With the election, speed is the enemy. A post from politicians or influencers can go viral in mere seconds, so labeling the post hours or even days after the fact and restricting sharing cannot undo the damage already done,” Mai warns.

Amid ongoing concerns about the content created or disseminated on these social media platforms, regulating big technology companies remains top of mind. This was evident during the U.S. Senate Commerce Committee hearing last Wednesday when the CEOs of Facebook, Twitter and Google were grilled about their content moderation practices.

“The litigation will take years, but ultimately I would not be surprised to see a settlement that involves larger tech companies being split into smaller components,” says Jim Anderson, CEO of social media optimization firm SocialFlow.

However, Anderson notes that while regulation may sound good in theory, it could prove much harder to get right in practice.

“Many of the proposed remedies – such as making social media platforms legally responsible for the content on their platforms – could actually escalate the conflict,” he says.