Euroviews. 'Regulation stifles innovation' is a misguided myth

Lawmakers vote on the Artificial Intelligence act at the European Parliament in Strasbourg, June 2023. AP Photo/Euronews
By Michael Bąk
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Governments and tech companies should not fear regulation, as it does not need to come at the cost of innovation. In fact, any policy development process founded on this principle is ultimately more likely to succeed, Michael Bąk writes.


We've heard the argument that regulation stifles innovation too many times. It's not only tiresome; it's a misguided myth.

In reality, regulation is less about stifling innovation and more about channelling it responsibly, before it's too late and irreversible harm is caused. Regulation keeps big tech in check; without it, we end up with endless apologies from tech executives who ask us to simply trust them.

Two years ago, the European Commission proposed the first EU regulatory framework for AI, designed to ensure that AI systems can be analysed and classified according to the risk they pose to users. Now, with the EU AI Act in play, progress has been made, but there is still much to be done to prevent those irreversible harms.

Upcoming regulation needs to move further upstream: rather than classifying and regulating end outputs, it should weigh in earlier, identifying and addressing the root challenges that might result in harmful outputs, so that innovation can happen responsibly.

However, not everyone shares this view, and that age-old argument still needs to be countered.

There are three clear arguments for moving the regulation-versus-innovation debate forward, all centred on shifting the power dynamics at play in setting the agenda. With that in mind, it's first worth briefly touching on the 'Brussels effect' and the EU's role in it.

Understanding ‘The Brussels Effect’

'The Brussels Effect' refers to the phenomenon whereby the EU ends up de facto regulating global markets by setting rules and standards with which foreign companies must comply if they want access to the European market.

We see this in areas like environmental regulation, data privacy and competition law. Regulations like the GDPR have largely become global standards, with other markets "copy-pasting" these policies for their own use.

European Commission President Ursula von der Leyen and European Council President Charles Michel address a media conference in Brussels, March 2024. AP Photo/Geert Vanden Wijngaert

This effect can be positive in that much of the advanced regulation created by the EU influences other markets and spurs regulatory enforcement across the globe.

That said, the EU is not perfect, and taking the lead in regulating big macro areas carries an added responsibility, as these rules will likely set the tone for the rest of the world too.

Policymakers across the globe also have an added responsibility to scrutinise how these policies apply to their unique markets and contexts, and what adjustments or additional considerations are needed.

Which leads neatly to how the regulatory landscape needs to change.

Regulation is not a single-source action

From dangerous cheapfakes to sophisticated election deepfakes to AI bias, platforms and systems have become weaponised in ways that erode the integrity of information and democratic values worldwide. 

The dangers of allowing private companies to self-regulate are now readily apparent thanks to social media. 

We also see the subsequent challenges of retrospective policymaking and trying to fix ubiquitous technologies that are already in the hands of users and a part of daily life.

An advertising banner with a slogan about AI is fixed at a building at the Davos Promenade, alongside the World Economic Forum in Davos, January 2024. AP Photo/Markus Schreiber

With AI, the responsibility for regulation falls on multiple shoulders: tech companies, civil society, academics, and governments and policymakers.

The reality is that these problems were first and foremost created by the tech companies and platforms themselves – either unintentionally or through benign neglect. 

However, citizens and governments must now also be mindful of the role they play in maintaining the integrity of the information space and the demands they must place on the titans of technology. 


Considering its major impact on the whole of society, passively deferring to tech companies to dictate and shape narratives around regulation is not a solution. Collaboration on a level playing field is essential.

Balance and diversify perspectives

To develop regulation that channels innovation in a better direction, we need collaboration to be complemented by more balanced and diverse representation at the table whenever agendas are being set and policies are being discussed.

Private tech companies proactively engage and dominate in a broad spectrum of regulatory processes. 

Their goal is simple: to shape policy discourse in ways that benefit shareholders, market power, and profits. When it comes to shaping regulatory guidelines, these private interests represent only one perspective and must not be mistaken for being broadly representational.

A plenary session at the AI Safety Summit at Bletchley Park in Milton Keynes, November 2023. AP Photo/Alastair Grant

We've seen this time and time again, most recently with the AI Safety Summit in the UK, which was dominated by big (largely US) tech, with the private sector overall holding one-third of the seats at the agenda-setting table.


There was barely a handful of civil society participants, and no human rights, journalism, or media watchdog organisations were invited.

Whenever there are critical agenda-setting moments, where the red carpet is inevitably being rolled out for CEOs or tech executives, we should also be prepared to have an equal number of seats at the table for everyone else. 

Namely, independent civic experts and Global Majority representation to bring forward the voices and lived experiences of their constituents.

Remember: There is no opt-out for AI

Unlike other technologies, there is no opt-out for AI, meaning regulation must treat it as something from which everyone needs protection.

Often, that means looking out for those unable to fully participate in the discussion.


Broad consultations around the world show that in places where little local philanthropy is willing to fund political work, policy regulation around AI is inevitably about protecting fundamental freedoms and human rights.

Women learn to use a chatbot powered by artificial intelligence at the local women's organization's office in Mumbai, February 2024. AP Photo/Rafiq Maqbool

For fragile democracies, the "wait and see" approach we have seen in the past with GDPR will likely continue with AI regulation. 

We can only expect their policies to follow the decisions of larger organisations such as the OECD and the EU, largely cut and pasted, with only some local needs taken into account. We need to help.

Innovating policymaking

Technology can benefit all citizens through transparency, accountability, and democratic participation. The desire for innovation is by no means a bad thing.

In fact, for the global information space, innovation is key to prioritising access to reliable information.


However, the issues we face are complex and largely stem from a lack of globally established and inclusive regulation. Fair regulation can be as simple as involving civil society groups, citizens, human rights defenders and elected officials in the discussions that shape our regulatory regimes.

With its network of civil society organisations and direct links to 52 countries through the Partnership on Information and Democracy, the Forum on Information and Democracy continues to play a critical part in this direction.

Governments and tech companies should not fear regulation as it does not need to come at the cost of innovation. 

In fact, any policy development process founded on this principle is ultimately more likely to succeed.

The outcome is technology that serves our societies and supports prosperity, peace and democracy. 


Michael Bąk is Executive Director of the Forum on Information and Democracy.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.
