Facebook's battle against disinformation adds a new weapon — the 'click gap'

Image: Facebook CEO Mark Zuckerberg speaks during a press conference in Paris on May 23, 2018. (Bertrand Guay / AFP - Getty Images file)
By David Ingram and Ben Collins with NBC News Tech and Science News

The new "click-gap" signal will down-rank links to purported news articles that are receiving large amounts of traffic from Facebook but aren't linked to other parts of the web.


MENLO PARK, Calif. — Facebook on Wednesday announced a series of initiatives to limit the spread of "problematic" content such as disinformation and extremism, but said the onus would remain on algorithms and users to discover that content.

Most notable among the initiatives is the use of what the company calls a "click-gap" signal, which will down-rank links to purported news articles that are receiving large amounts of traffic from Facebook but aren't linked to other parts of the web.

Facebook says it hopes that the new signal will decrease the prevalence of "low-quality content," like disinformation and clickbait.

Tessa Lyons, head of news feed integrity at Facebook, said the company's research has shown that where a site's web traffic comes from is a signal of how authoritative the site is. Search engines use similar signals to determine the quality of websites.

"We think that it will help us to fight low-quality content that people don't want to see," she said of the change.

Facebook convened about two dozen journalists at its headquarters to explain the changes and issued a press release outlining the larger initiatives.

Facebook's initiative to tailor the service around smaller communities, such as its Groups and messaging features, announced in March, has drawn scrutiny because of the spread of disinformation on topics including vaccines and various conspiracy theories. Groups that spread disinformation were found to have expanded their reach out of sight of public scrutiny.

The company said its changes would also help in "reducing the reach of Groups that repeatedly share misinformation."

Facebook did not announce a fundamental change in how it finds violations of its policies. The social network's strategy has two parts: violations reported by users, and violations found by the company's artificial intelligence-driven software.

There are some exceptions in which Facebook employees proactively look for violations, Andrea Saul, a company spokeswoman, said. Those exceptions include when security staff are investigating a terrorist network and want to see how wide its reach is, and when Facebook wants to measure how much harmful content it is missing.

Karen Courington, a member of Facebook's product support operations team, said it would be impractical to have people proactively hunt for content violations considering the network's worldwide reach. The company said it now employs 30,000 people on its content moderation team.

"The challenge is the scale at which we're operating," she said.

Courington said it's also hard for company employees to moderate content from countries they are unfamiliar with.

"There are areas where we need the cultural context," she said.

Still, Facebook said the many small changes it has made to its algorithms add up to a big difference in the content that shows up in its feeds.

"The last few years at Facebook have been a philosophical shift in how we think about responsibility," Guy Rosen, Facebook's VP of site integrity, said.

The company also said it had made headway in reducing the overall amount of misinformation on its platform.

"Since the United States presidential elections in 2016, there's been a marked decrease in the amount of misinformation on the Facebook platform," Henry Silverman, a Facebook operations specialist who works on reducing the spread of false news stories, said.


The company also announced that it will deploy a "clear history" feature, which will allow users to wipe their accounts clean of both the content they have posted on the service and the ad preferences the company has accumulated on them over the lifetime of the account.

The feature was first announced in May 2018 but has been delayed. Rosen said launching it has taken so long because the company has been re-engineering how data is processed.

"We're working toward launching this in the fall," he said. "We want to be sure we're launching this right."

Facebook said it is getting better at using software to automatically detect hate speech, finding 52 percent of it proactively in the third quarter of 2018, up from 24 percent in the fourth quarter of 2017.

