Euronews spoke to Chris Marsden, the co-author of a study into how we can monitor efforts by social networks such as Facebook and YouTube to regulate disinformation on their own platforms.
Fake news, false reporting, disinformation, propaganda, lies. Call it what you want, but the phenomenon is as old as the concept of communication itself. In our digital age, however, the problem has become far more acute; social networks such as Facebook, YouTube and WhatsApp allow any information, authentic or otherwise, to spread instantly to people all over the world. The digitisation of disinformation has been blamed for skewing the results of elections and referenda and leading public opinion down the wrong path.
So what needs to be done in Europe to minimise the problem?
Euronews spoke with Chris Marsden, a lawyer and professor of media law at the University of Sussex in the UK. He, along with political scientist Trisha Meyer, presented the findings of a study they co-authored to the Panel for the Future of Science and Technology (STOA), a branch of the European Parliament that organises discussions and forums on emerging and tech-led topics of political relevance (see more below). The study looked at how governments can regulate the companies (Facebook, YouTube, etc.) that themselves regulate disinformation spread on their own platforms. The study was commissioned by STOA because, in the words of the panel's First Vice-Chair Paul Rübig, "disinformation caused by global troll industries is against our democratically elected political parties."
Who should regulate fake news shared online?
For Mr. Marsden, the job of regulating fake news should not fall solely on the shoulders of national governments or supranational bodies like the EU. Neither, he believes, should the companies themselves be fully responsible for regulating themselves. Instead, he favours "co-regulation."
"Co-regulation means that you don't trust the companies to regulate themselves," he told Euronews. "It doesn't mean that you will impose a state law that says: 'you will do X, Y and Z' because everybody knows the internet is moving fast. But co-regulation says: 'you will do X or we will do Y'. In other words, 'you will demonstrate your own ability to regulate fake news or we will do it for you'. So it's threatening the companies with action if they do not engage in proper regulation themselves."
Can Artificial Intelligence solve the fake news problem?
One argument being put forward by the owners of online platforms is that new technologies can solve the very problems they create. Chief among those technologies is machine learning, or Artificial Intelligence. However, the notion that AI is a 'miracle cure', a panacea for fake news, is optimistic at best. Mr. Marsden believes that while AI can be useful for removing disinformation once it has been spotted, identifying it in the first place requires the human touch. This is especially true, he says, when national and cultural subtleties are involved.
What Europe, whether as a bloc or among its constituent national governments, needs to do "is to make sure that what companies do is actually engage European fact-checkers, European citizens to work on appeals, Europeans to work with their Artificial Intelligence programmes to actually resource properly their own attempts to stop fake news.
"You can't simply have people in California, or a bunch of people you've hired off the internet, from the Philippines or in India, to regulate European fake news. It has to be Europeans. They must have some kind of training in journalism and human rights law because they're being asked to make judgements on journalistic opinion and about freedom of expression.
"Unless we actually engage Europeans to work for these companies - and this will be unpopular because this is expensive - unless we do that we cannot solve this problem in Europe."
Fighting fake news does have a cost
While it would appear to be in the platform owners' best interests to reduce the dissemination of disinformation, the means of doing so proposed by Mr. Marsden and his co-author Trisha Meyer could prove to be a sticking point. As ever, it comes down to a question of money.
"The companies are going to claim wonderful, marvellous results from Artificial Intelligence because it is much cheaper to employ Artificial Intelligence to solve the fake news problem than it is to employ enough humans to solve the problem in addition to machine learning," says Marsden. "So the companies are going to say 'machine learning is fantastic, we can use AI in order to solve this problem, you don't need to ask us how many people we're employing'. The reality is, the only accurate way to deal with fake news is to have a hybrid where you have lots of human beings working on problems that AI has identified. But the human beings have to make the value judgements. That's expensive for [the likes of] Facebook and YouTube but it's absolutely essential. They will only make those investments in qualified Europeans, fact-checkers and fake news spotters, if they're forced to do so by governments."
STOA: the European Parliament's efforts to advance the teaching of science and technology
Mr. Marsden and Mrs. Meyer presented their study at the meeting of the Panel for the Future of Science and Technology (STOA) on December 13 in Strasbourg.
STOA is a political and administrative body of the European Parliament, governed by the parliament's Panel for the Future of Science and Technology, which comprises members from various parliamentary committees. It seeks to provide independent, objective analysis of science and technology issues and policy options for dealing with them.
One of its tasks is to organise public events in which politicians and representatives of scientific communities, and of society as a whole, discuss technological developments of political relevance to civil society.
The Annual Lecture is the high point of STOA's calendar. It brings together eminent speakers to talk about prominent topics of political relevance in the field of science and new technologies and to raise public awareness of science and technology issues.
This year's Annual Lecture, entitled 'Quantum technologies, Artificial intelligence, cybersecurity: Catching up with the future', was, says a Parliament spokesperson, a logical culmination of a series of STOA events and scientific projects linked to the development of Artificial Intelligence in recent years. Eva Kaili, a Greek MEP and chair of STOA, told Euronews:
"Artificial intelligence is becoming more and more part of our everyday lives. For a second year STOA is dealing with how AI and the development of quantum physics will affect and shape our lives in the near future: how they will make it easier but also how many dangers AI will unveil for our societies if we don’t start building a strong ethical code."