New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron will meet next month to try to eliminate violent extremist content online in the wake of the March 15 terrorist attacks in Christchurch, New Zealand, according to a press release from Ardern's office.
The meeting, co-chaired by Ardern and Macron, will aim to have world leaders and CEOs of tech companies agree to a pledge, called the Christchurch Call, to eliminate terrorist and violent extremist content online.
A lone gunman killed 50 people at two mosques in Christchurch while live-streaming the attack on Facebook.
“The March 15 terrorist attacks saw social media used in an unprecedented way as a tool to promote an act of terrorism and hate. We are asking for a show of leadership to ensure social media cannot be used again the way it was in the March 15 terrorist attack,” said Ardern.
“It’s critical that technology platforms like Facebook are not perverted as a tool for terrorism, and instead become part of a global solution to countering extremism. This meeting presents an opportunity for an act of unity between governments and tech companies.
“Social media platforms can connect people in many very positive ways, and we all want this to continue.
“But for too long, it has also been possible to use these platforms to incite extremist violence and even to distribute images of that violence, as happened in Christchurch. This is what needs to change.”
The meeting is being organised alongside the "Tech for Humanity" meeting of G7 digital ministers, which France is chairing, and France's "Tech for Good" summit, both on May 15.
The Elysee Palace said in a statement that "the purpose of this call is to stop the promotion of terrorist and extremist content on social networks."
How did tech giants respond?
In response to the meeting, a Twitter spokesperson said they "welcomed" the opportunity to work with "peers" on finding a global solution to this problem.
A Google spokesperson did not specify whether the tech giant would be attending the meeting but said:
"We have zero tolerance for terrorist content on our platforms. Over the last few years, we have invested heavily in human review teams and smart technology that helps us quickly detect, review, and remove this type of content. We are committed to leading the way in developing new technologies and standards for identifying and removing terrorist content. We are working with government agencies, law enforcement and across the industry, including as a founding member of the Global Internet Forum To Counter Terrorism, to keep this type of content off our platforms. We will continue to engage on this crucial issue.”
"It was time for a radical shift in regulation"
In an interview with Euronews' The Cube team, Dr Hans-Jakob Schindler of the Counter Extremism Project welcomed the talks but argued it was time for a radical shift in regulation.
“It is now time to simply look at the tech industry the same way we look at the banking industry,” Dr Schindler said.
“If you don’t find it acceptable that terrorists have bank accounts, there is really no clear argument why we should find it acceptable that a terrorist should use a Skype account, or a WhatsApp account, or a Facebook account to propagate, organise, finance, transfer capabilities, or distribute something as harmful as bomb-making instruction manuals.”