EU strikes deal to force tech giants to tackle disinformation
European institutions on Saturday morning reached a landmark agreement on the Digital Services Act (DSA) to impose rules on large tech companies.
Brussels hopes the new piece of legislation, which aims to hold large tech multinationals accountable for what is published on their platforms, will set a global benchmark for how to regulate big tech.
It now needs to be formally approved by both the parliament and the council later this year.
The law primarily targets the companies collectively known as GAFAM — Google, Apple, Facebook (now Meta), Amazon and Microsoft — although it would also likely impact a handful of other groups such as the social network TikTok.
It forces platforms such as Twitter, Facebook and YouTube to moderate the content they host, whether in e-commerce or disinformation.
It also forces them to create or improve mechanisms for users to flag problems and challenge content moderation decisions, and to provide greater transparency, including on "the algorithms used for recommending content or products to users."
The legislation also includes "mechanisms to adapt swiftly and efficiently" during critical moments such as pandemics or a war, as well as new safeguards to protect children from targeted advertising.
"With the DSA, the time of big online platforms behaving like they are 'too big to care' is coming to an end," Commissioner for the Internal Market Thierry Breton said in a statement. "The DSA is setting clear, harmonised obligations for platforms – proportionate to size, impact and risk."
Failure to adhere to the legislation exposes companies to fines of up to 6% of their global turnover or "even a ban on operating in the EU single market in case of repeated serious breaches," Breton stressed.
DSA 'needs teeth'
As negotiators convened on Friday to fine-tune the final points, Hillary Clinton, a former US Secretary of State and presidential candidate, praised the EU for its work on the DSA, writing on Twitter: "For too long, tech platforms have amplified disinformation and extremism with no accountability. The EU is poised to do something about it."
"I urge our transatlantic allies to push the Digital Services Act across the finish line and bolster global democracy before it's too late," she also said.
Alexandra Geese, a Green MEP and shadow rapporteur in the Committee on the Internal Market and Consumer Protection, stressed ahead of the negotiations that the DSA "needs teeth, it needs to put surveillance advertising and manipulative practices of online platforms in their place."
"The chances are good that the Digital Services Act will become a constitution for the internet, curbing hate, polarisation and disinformation, strengthening the rights of users and holding online platforms to account as never before. We are starting the big tech revolution with a strong law on digital services in the EU," she added.
"Sensitive personal data such as religion, skin colour or sexual orientation, as well as data of children and young people, should no longer be allowed to be tracked and used for advertising purposes. The Digital Services Act can be the beginning of a digital spring and the first, decisive step towards more democracy and freedom on the internet," she argued.
But some fear the DSA could have negative impacts.
Concerns over freedom of expression
"In negotiating the Digital Services Act, EU law-makers balanced tackling disinformation with protecting free speech. The Commission’s last-minute proposal for stricter regulation of tech platforms during crises undermines this balance," said Zach Meyers, a senior research fellow at the Centre for European Reform (CER) think tank.
Platforms have up to now been able to come up with their own strategies to fight disinformation, with varying degrees of success, and the debate has centred mostly on how to mitigate the spread of "lawful but awful" content such as Russian propaganda. Most have started to flag whether information comes from a verified source or whether the author is linked in any way to a government.
But now, Meyers explained, "the Commission argues it must be able to direct how platforms respond to crises like the Russian invasion of Ukraine, and the Commission wants the power to determine whether there is such a ‘crisis’ itself."
If it had these powers, the Commission would undoubtedly feel pressured to force large platforms to simply remove pro-Russian "fake news" – similarly to how it banned Russia Today and Sputnik.
However, requiring systematic removal of such information would inevitably have to rely on machine-learning tools, which are notoriously inaccurate, fail to take context into account, and therefore often catch important, genuine content – such as parody and legitimate reporting.
"An emphasis on large-scale removal of harmful material is also likely to prompt users to flee to smaller and less scrupulous platforms. This explains why some online platforms are selective about the types of harmful content they disallow," he added.