President Donald Trump has long insisted that he and his allies are treated unfairly on social media. He repeated that claim ahead of the Social Media Summit at the White House, where he led a discussion on suppression and bias. But there's still not much evidence to prove that accusation — or much independent research on which way social media bias goes in the first place.
The White House did not release a list of attendees prior to the event, but various reports show most guests were popular conservative figures and Trump supporters on social media, or representatives of conservative groups like Turning Point USA. It's worth noting that representatives from Facebook, Google and Twitter said they weren't invited.
There is some evidence to support allegations that social media giants lean more left than right, and therefore put more scrutiny on conservative content than liberal content. According to a survey of nearly 2,000 U.S. tech workers, commissioned by the conservative nonprofit Lincoln Network, about 28% of respondents identified as conservative or very conservative, while 33% identified as liberal or very liberal. Another 33% identified as moderate.
But representation alone isn't enough to prove systematic bias, and in most cases, it's hard to prove that content or accounts are taken down solely for their political skew. In an April Congressional hearing on Big Tech's bias, Sen. Ted Cruz noted that "much of the argument in this topic is anecdotal [...] nobody knows what the raw data is in terms of bias."
And there is also research from the progressive nonprofit Media Matters showing that conservatives have as much of a presence on social media as liberals, if not more. In a study of 463 Facebook pages with at least 500,000 likes that post about American politics, Media Matters found right-leaning and left-leaning pages had almost identical interaction rates.
In other words, getting a nonpartisan look at bias or censorship can still be difficult. Jessica J. Gonzalez, Vice President of Strategy and Senior Counsel with Free Press, says companies like Facebook, Twitter and Google could help clear this up if they shared more about who gets taken off their platforms, or why.
"Part of our critique is that it’s really hard to track what’s happening with content moderation because the companies aren’t being transparent enough with their users; that they’re not providing any data on what’s being taken down; why it’s being taken down; how many takedown requests are they receiving, how many are they granting, how many are they rejecting," Gonzalez said. "The purpose of which is not just so individual users can understand why their stuff is being taken down, but so that activists can understand."