Facebook to restrict livestream feature after Christchurch attack

A Facebook logo displayed at a gathering of start-up companies at Paris' Station F on Jan. 17, 2017. Copyright Thibault Camus/AP file
By Ben Collins with NBC News Tech and Science News



Facebook announced Tuesday night that it will restrict who can use the company's livestreaming tool, part of an effort to prevent its use to "cause harm or spread hate," a company executive said.

Facebook said that users who break particular rules will be barred from using the livestream feature, called Facebook Live, and that its "Dangerous Organizations and Individuals" rules would be included in the new policy.

The company said it would prevent users who violate "our most serious policies" from using Facebook Live for "set periods of time — for example 30 days — starting on their first offense."

"We plan on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook," Guy Rosen, Facebook's vice president of integrity, wrote in a press release.

The announcement comes two months after the mosque attacks in Christchurch, New Zealand, that left 51 people dead. The shooter, a white supremacist from Australia, livestreamed the terror attack on Facebook and led followers to the video with a post on the far-right message board 8chan.

It is unclear whether the new restrictions would have prevented the Christchurch shooter from starting a livestream, and Facebook did not say whether they would have.

Facebook sent out its press release just hours before New Zealand's Prime Minister Jacinda Ardern is set to meet with French President Emmanuel Macron about stopping the spread of extremist content online.

Facebook also announced a $7.5 million partnership with three universities to "improve image and video analysis technology" used to detect terrorist videos.

The research will be focused on detecting "manipulated media across images, video and audio" and "distinguishing between unwitting posters and adversaries who intentionally manipulate videos and photographs." It will be conducted with the help of Cornell University, the University of Maryland, and the University of California, Berkeley.

Facebook's actions come as the platform's role in spreading hate speech remains under intense scrutiny. Weeks after the Christchurch attacks, New Zealand's Privacy Commissioner John Edwards called Facebook "morally bankrupt pathological liars" who "facilitate foreign undermining of democratic institutions" in later-deleted tweets.

"Facebook cannot be trusted," he added.
