Brussels is drafting legislation that will affect every sector of the EU economy, and the new rules are set to be the most restrictive on artificial intelligence to date.
Receiving a film recommendation on your favourite video-on-demand platform, unlocking your phone with your face, using autocorrect, and chatting with a chatbot: all of these are everyday examples of Artificial Intelligence (AI).
Despite sounding futuristic, AI is already part of European citizens' daily lives. Its opportunities can seem endless, but there are also real risks on the table.
"The potential of using AI in beneficial ways is enormous: less pollution, improved medical care, enhanced opportunities, better education and more ways to enable citizens to engage in their society," said Margrethe Vestager, Europe's competition commissioner who is also in charge of digital.
It can also be used "to fight terrorism and crime and enhance cybersecurity," Vestager underlined in a debate at the European Parliament's Special Committee on Artificial Intelligence.
Here is a look at five aspects of artificial intelligence and Europe's endeavour to regulate it.
1. What is Artificial Intelligence and why should we worry?
Artificial Intelligence is a technology that allows machines to perceive, analyse and learn from the environment.
With this information, they can make predictions and take decisions of their own to reach specific goals. AI has many applications: it can be used for medical purposes such as identifying the risk of developing Alzheimer's, for agriculture, or for tracking employees while they work remotely.
AI systems collect enormous amounts of data and learn from information that already replicates some of society's biases, so they can carry many risks.
For instance, it can endanger privacy, put fundamental rights at risk, or increase discrimination against minorities.
2. What is the EU's current stance on Artificial Intelligence?
"The higher the risk that a specific use of AI may cause to our lives, the stricter the rule," Vestager said last April, summing up the basis of the European Commission's proposal on artificial intelligence.
The text divides AI into four categories based on the risk that it might pose for citizens.
Technologies that pose minimal or no risk to citizens, such as spam filters, fall into the minimal-risk category: they will remain free to use, and the new rules won't apply to them.
The limited-risk category carries transparency obligations: citizens must be told that they are interacting with a machine so they can make informed decisions. Chatbots are an example.
The high-risk category could include using AI for facial recognition, legal procedures and CV-sorting applications. These uses are controversial as they can potentially be harmful or have damaging implications for users. Therefore, the Commission expects that those systems will be "carefully assessed before being put on the market and throughout their lifecycle".
The last category is "unacceptable risk". It bans all AI systems that are "a clear threat to the safety, livelihoods and rights of people". Examples include social scoring by governments and the use of subliminal techniques.
For Sarah Chander, senior policy advisor at European Digital Rights (EDRi), the proposal of the European Commission "runs the risk of enabling really invasive surveilling and discriminatory AI systems that instead should be outright banned".
She said they could include technologies to deploy drones at borders or applications to evaluate social security benefits.
3. Is facial recognition going to be a reality in the European Union?
Facial recognition for security purposes is one of the hot topics in AI regulation.
A recent study commissioned by The Greens in the European Parliament shows that 11 of the 27 member states, including Germany, France, Hungary, the Netherlands, and Finland, already use artificial intelligence of this kind.
The Commission would consider this a high-risk use, but the European Parliament is divided on the topic: while some MEPs call for a total ban, others prefer a different approach.
"We are calling for a moratorium on the deployment of facial recognition systems for law enforcement purposes, as the technology has proven to be ineffective and often leads to discriminatory results," said MEP Petar Vitanov in mid-October.
But at a press conference on Tuesday, MEP Axel Voss, rapporteur of another text on AI, stated that "even for facial recognition you can set safeguards in order that it can be used but not misused".
Facial recognition won't be left out of the legislation, but much of its regulation may happen at the national level, experts say.
While some government ministries are "very positive" about the "need for more human rights-based regulation", Chander says, the situation is different for interior ministries.
Those ministries, which are in charge of policing, "tend to be more sceptical against bans on facial recognition and predictive policing".
4. Can the risks of reproducing biases be avoided?
The European Commission says its goal is for AI systems to "not create or reproduce bias", which is why the requirements for high-risk AI systems need to be "robust".
Chander insists that many AI systems "will inherently increase discrimination because of the very nature of how it works".
The systems use "data and information from the past and tries to apply it to the future," Chander said, which risks reproducing the biases and discrimination that already exist in society.
Chander says the Commission has overlooked one key problem: citizens have no clear avenue to complain. There needs to be a way, she argues, for potentially affected citizens to turn to an authority and make sure their rights are respected.
5. And what about overregulating or underregulating?
Caught between the ambition to set the world standard and the risk that overregulation could hold the technology back, the European Union faces a great challenge in regulating AI.
Brussels hopes to set a global standard, as it did with the GDPR, its data protection regulation.
For Vestager, problems may arise if the EU is unable to make the technology safe.
"My concern is that we risk that the technology won't be developed and used as widely as it could if we are not good at mitigating the risks connected to it," she said.
At the same time, the EU has to strike a balance between China's state-driven use of AI and the US approach of voluntary guidelines developed together with big tech companies.