
Europe will be left behind if it focuses on ethics and not keeping pace in AI development ǀ View
Copyright REUTERS/Fabrizio Bensch
By Daniel Castro
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

AI will be important to Europe’s future, but ethics alone will not be enough for the EU to pull ahead of its competitors.


President-elect of the European Commission Ursula von der Leyen made clear in her recently unveiled policy agenda that not only will artificial intelligence (AI) be a key component of European digital strategy, but that the cornerstone of the European AI plan will be to develop “AI made in Europe” that is more ethical than AI made anywhere else in the world.

What this means is not always clear, since there is no universal consensus on ethics. However, most European policymakers are less concerned about the “what” and more about the “why.” As explained by former Vice-President for the Digital Single Market, Andrus Ansip, “Ethical AI is a win-win proposition that can become a competitive advantage for Europe.” This idea that Europe can become the global leader in AI simply by creating the most ethical AI systems, rather than by competing to build the best-performing ones, has become the conventional wisdom in Brussels, repeated ad nauseam by those tasked with charting a course for Europe’s AI future. But it is a delusion built on three fallacies: that there is a market for AI that is ethical-by-design, that other countries are not interested in AI ethics, and that Europeans have a competitive advantage in producing AI systems that are more ethical than those produced elsewhere.


The first problem with the EU’s plan to distinguish itself in AI by focusing on building trustworthy systems is that there is virtually no evidence suggesting that consumers are demanding more ethical AI systems. What evidence exists suggests such a market would be small. For example, a survey from the Center for Data Innovation found that only 19% of Americans agreed with the statement, “If I am buying a smart toaster (i.e. a toaster controllable by a mobile app), I am willing to pay more for one that is certified as ‘ethical by design.’” While some consumers may pay lip service to ethical design, this does not match their behaviour.

For example, even among consumers who say they distrust the Internet, only 12% report using technological tools to protect their personal data, such as virtual private networks, according to a worldwide Ipsos survey. Instead, price and quality will likely continue to be the most important factors for consumers as they make purchasing decisions. Most patients, for instance, are more concerned about whether the AI system diagnosing their symptoms is reliable and accurate than about whether it can offer an explanation.

The second problem is that, even if consumers demand more trustworthy AI systems, the EU is not alone in considering the ethics of AI. Many countries have policies to address these issues. Indeed, Europe’s two main competitors in AI—the United States and China—have both made multiple commitments to ensuring AI is ethical. President Trump, for instance, signed an executive order on AI that supports a vision of developing “reliable, robust, and trustworthy systems that use AI technologies” and has launched a plan for developing technical standards for AI, including to support its ethical use.

Moreover, US agencies such as the Department of Defense and the Department of Transportation have launched their own initiatives to ensure ethical use of AI within their respective fields. In China, the government-backed Beijing Academy of Artificial Intelligence has developed the Beijing AI Principles, which rival those of any other country, and the Chinese Association for Artificial Intelligence has also developed its own ethics guidelines. Many non-European countries, including the United States, have signed on to the Organisation for Economic Co-operation and Development’s (OECD) AI Principles focused on “responsible stewardship of trustworthy AI.”

The last - and perhaps most glaring - problem is that Europeans do not have a competitive advantage in producing ethical AI systems. Indeed, the opposite appears to be true: EU researchers are lagging behind their US counterparts in addressing ethical AI. If you look at all the research papers that were accepted at the 2018 FAT/ML (Fairness, Accountability and Transparency in Machine Learning) conference—the premier research conference on the topic—you find that only 5% of the authors were European, despite the fact that the conference was hosted in Stockholm. In contrast, 85% of the authors were American. If the EU is serious about AI ethics, then it needs to walk the walk, not just talk the talk.

Developing ethical AI systems is important, but the notion that the EU will be able to dominate the market solely by developing the most ethical AI systems is not supported by evidence. After all, no country is advocating for AI systems that will be unfair, harmful or coercive. One risk of Europe putting too much focus on ethics is that it will do so at the expense of other goals, such as prioritising research on explainable algorithms rather than accurate ones.

The real market differentiators in AI are likely to be around hard performance metrics, such as accuracy, safety, and reliability, or even softer metrics such as usability and customer satisfaction, not more nebulous concepts like ethics. Another risk of Europe’s fixation on AI ethics is that it will be used as a smoke screen to push for more regulation of the technology or for technology import barriers, which would stall development and adoption.

AI will be important to Europe’s future, but ethics alone will not be enough for the EU to pull ahead of its competitors. The future global leaders in AI will ultimately shape its direction, and Europe will be left behind if it is not able to keep pace in the areas that matter most.

Daniel Castro is the director of the Center for Data Innovation

_____

Are you a recognised expert in your field? At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.

