In the video, several of Google's top leaders voiced their disappointment at the election of President Donald Trump. Google co-founder Sergey Brin said he was "deeply offended" by Trump's win; Kent Walker, Google's senior vice president of global affairs, chalked up the rise of Trump-style populism to fear and xenophobia; Eileen Naughton, vice president of people operations, joked about employees moving to Canada and said that she had heard from conservative employees who felt uncomfortable voicing their views.
For years, tech companies have tried to avoid politics, claiming that they are merely platforms of communication that allow information to flow. Now, hardly a day goes by without a tech company becoming the subject of a political bias allegation. Conservatives see liberal leanings in how tech companies are policing misinformation, and liberals argue that the companies are over-correcting for such assertions.
Google confirmed the authenticity of the video, but said in an emailed statement that the views expressed were personal and that political biases do not influence how it builds products.
That claim is unlikely to convince conservatives, who in recent months have elevated their accusations of bias at tech companies from quiet murmurs to congressional hearings — and even a warning from the Justice Department.
"Nobody trusts anyone in 21st century America to be neutral," said Eli Pariser, author of "The Filter Bubble: What the Internet Is Hiding from You" and the former executive director of MoveOn.org, a liberal activist group. "I think we all know that every system has biases. They may not be intentional ones, but it's hard to come up with any ranking of content that doesn't favor some things over other things."
Tech companies have gradually adopted a more hands-on approach with their platforms, and those efforts ramped up quickly after the platforms were found to have been used to spread misinformation and propaganda during the 2016 U.S. election. Before that, they had been developing systems to moderate the posting of malicious content.
Kate Klonick, an assistant professor at the St. John's University law school in New York City who has studied content moderation at tech companies, said pushing Google, Facebook and Twitter to take a more hands-off approach could have the unintended consequence of opening the floodgates for the internet's worst content, including porn, spam and illegal distribution of copyrighted material.
"It didn't seem like [conservatives] understood the long-term ramifications of what they were arguing for," Klonick said. "If you decided to hold these platforms to First Amendment standards, everything would stay up. A lot of content that conservatives and Republicans would not want would stay up."
Great power, great responsibility
Twitter CEO Jack Dorsey last Wednesday faced both extremes of the growing scrutiny. In front of the Senate Intelligence Committee, Dorsey, alongside Facebook Chief Operating Officer Sheryl Sandberg, faced questions about whether his company had done enough to crack down on misinformation and propaganda campaigns.
Hours later, without Sandberg, Dorsey testified before the House Energy and Commerce Committee, which questioned him about whether his company was biased against conservative media and personalities. Rep. Joe Barton, R-Texas, said the company's stance was clear.
"We wouldn't be having this discussion if there wasn't a general agreement that your company has discriminated against conservatives, most of whom happen to be Republican," Barton said during the hearing.
The accusations have now reached the highest levels of the U.S. government. Trump in late August tweeted that Google had "rigged" its search results to show negative news about him.
The bias claims come as the power of tech platforms in the news industry has become widely recognized — as has growing mistrust in how they choose what news to show people.
More than two-thirds of Americans now say they get news from social media, though they tend not to trust the news they see. Seventy-two percent of Republicans said they expect the news they see on social media to be "largely inaccurate," compared with 46 percent of Democrats, according to a recent study conducted by the Pew Research Center.
With just a few companies controlling what hundreds of millions of people read on social media, bans and algorithm biases can have major impacts. After Facebook, Google, Twitter and Apple banned conspiracy theorist and Infowars founder Alex Jones, his online audience was found by a New York Times analysis to have been cut in half.
Concerns about political bias at tech companies have percolated for years, brought into focus by a 2016 report from the digital tech publication Gizmodo that said some Facebook employees who worked on its Trending Topics news product had suppressed conservative-leaning news websites. Facebook has since shut down the product.
Accusations of bias remained relatively limited until tech platforms came under attack for not doing enough to stop foreign propaganda campaigns from using their systems to spread disinformation and divisive political content in the run-up to the 2016 U.S. election.
At the same time, tech companies had also been working to address criticism that they did not do enough to protect users from harassment and hate speech. Since then, Facebook, Google and Twitter have made promises, hired moderators and rolled out products, like artificial intelligence that can scan through thousands of posts in search of problematic content, in response to those issues.
Some tech companies have taken action. Dorsey reportedly met quietly with high-profile Republicans to address claims of conservative bias. Facebook has brought in outside advisors to conduct an audit in search of bias against conservative voices and determine its impact on underrepresented communities.
Some of those efforts are causing concern among liberals that the companies are now doing too much to assuage conservatives.
ThinkProgress, a liberal site, on Tuesday leveled an accusation at Facebook and one of its third-party fact checkers: A ThinkProgress article had been inappropriately labeled as misinformation, limiting the article's audience.
The article in question, headlined "Brett Kavanaugh said he would kill Roe v. Wade last week and almost no one noticed," had been flagged by The Weekly Standard, a conservative news outlet that Facebook tapped in December to aid in its fact-checking efforts. Articles tagged as false by fact checkers are suppressed by Facebook's system.
The article contained an analysis of statements from Supreme Court nominee Brett Kavanaugh, linking his support for a judicial precedent to his statements on the right to an abortion, which have been a flashpoint in Kavanaugh's nomination process.
Ian Millhiser, who wrote the ThinkProgress article, said that Facebook's use of The Weekly Standard was troublesome.
"I think they brought in The Weekly Standard because they wanted to pander to conservatives," Millhiser said.
Facebook maintains that it does not have any political bias, and that it would be against its interest to do so.
"We do not suppress content on the basis of political viewpoint or prevent people from seeing what matters to them," said Andy Stone, a spokesperson for Facebook. "Doing so would be directly contrary to Facebook's mission and our business objectives."
The bias accusations come as tech platforms are still working to crack down on foreign influence campaigns, which often traffic in politically divisive stories. Facebook and Twitter announced in August that they had thwarted a network connected to Iranian state media.
Facebook CEO Mark Zuckerberg posted a lengthy update on Thursday night about the company's efforts to prepare for the midterms while also nodding to some of the controversy the company has encountered.
"When it comes to free expression, thoughtful people come to different conclusions about the right balances," Zuckerberg wrote.
Sen. Mark Warner, D-Va., said in a statement that claims of bias are well worth discussing, but tech companies should be focused on making sure their platforms are not being weaponized by other countries.
"Some loud voices would have us believe that blocking hateful and disgusting content online should be equivalent to the real and latent threat of foreign intervention in our democratic process. It is not," he said in an emailed statement. "There is a debate to be had about the role social media companies play in our public discourse, but focusing on alleged content bias only helps distract and diminish the need to continue these important conversations."
Pariser, who is currently a fellow at the New America Foundation, a left-leaning think tank, and who co-founded the viral media website Upworthy, said he didn't see bias claims dying down anytime soon, in part because tech companies continue to withhold details of how their systems work.
"What you see is a lot of people looking at a black box and looking at what they perceive to be the politics and intentions of that black box and coming to their own conclusions," Pariser said. "People will do that when they can't see inside the box."