Fake News and Disinformation in the EU – The need for a stronger and more efficient approach

By Luciano MORGANTI

Luciano MORGANTI, Visiting Professor at the College of Europe and trainer of the executive workshops ‘EU Fact Checking’ and ‘EU Fact Finding’, reflects in this column on the approach the European Commission has taken to fight online disinformation and fake news. After reviewing the Commission’s approach, he suggests that a soft approach might not be the right one to fight disinformation and push forward European integration. His reflection is the result of research work done in the framework of Mediaroad.eu, a Coordination and Support Project funded by Horizon 2020.

The European Commission seems to take the fight against disinformation and fake news in the EU seriously.

Recent years have seen a number of initiatives and policy documents going in this direction. Since 2018, the European Commission has launched a Eurobarometer public opinion survey and a public consultation, established a High-Level Expert Group and, later, published a Communication and a Code of Practice.

Eurobarometer 464,[1] published in February 2018, showed that most respondents see fake news as a problem but remain unclear about who is responsible for stopping its spread. The public consultation, for its part, revealed a consensus among respondents that online platforms and social media are responsible for the spread of fake news and could play a bigger role in limiting it, rather than relying only on their users. Something should be done to reduce the spread of disinformation online; it should, however, never lead to censorship.

The Report of the High-Level Expert Group (HLEG) set up by the European Commission to better grasp the problem and potential solutions, released in March 2018,[2] emphasized the need to broaden the discussion beyond fake news to the wider issue of disinformation. The Group stresses the need for a multi-dimensional approach with “stakeholders collaborating in a manner that protects and promotes freedom of expression, media freedom and media pluralism”, while avoiding fragmentation of the Internet and harmful consequences for its technical functioning. The proposed approach aimed to (i) enhance transparency, (ii) promote media and information literacy, (iii) develop tools for empowering users and journalists, (iv) safeguard the diversity and sustainability of the European news ecosystem, and (v) promote continuous research on the impact of disinformation in Europe. In the short term, the HLEG suggested a self-regulatory approach based on a multi-stakeholder engagement process, with all relevant stakeholders adhering to a Code of Practice. While playing a facilitating role, public authorities should support the development of a network of independent European Centres for research on disinformation, managed by a Centre of Excellence established by the European Commission. In the longer term, the Group recommended a set of complementary measures designed to support the diversity and sustainability of the news media ecosystem, together with appropriate initiatives in the field of media and information literacy to foster a critical approach and responsible behaviour among European citizens.

Following the HLEG Report, the Commission Communication Tackling online disinformation: a European Approach, of April 2018,[3] sees disinformation as a symptom of a wider phenomenon that can erode trust in political institutions and the media and hence harm our democracies. The Communication also recognizes that policy institutions should refrain from interference and censorship with regard to freedom of expression and media freedom, and should ensure a favourable environment for an inclusive and pluralistic public debate. At the same time, the Commission states that online platforms play a key role in spreading and amplifying online disinformation, as do users of social media themselves. The Communication identifies transparency, diversity, credibility and inclusiveness as the overarching principles and objectives guiding its actions. Building on the input gathered through the initiatives described above, and complementing the General Data Protection Regulation, the Commission intends to take five specific actions to tackle the dissemination of online disinformation: create a more transparent, trustworthy and accountable online ecosystem; secure resilient election processes; foster education and media literacy; support quality journalism; and use strategic communication to counter internal and external disinformation threats.

Finally, in autumn 2018, representatives of online platforms, leading social networks, advertisers and the advertising industry agreed on a self-regulatory Code of Practice[4] to address the spread of online disinformation and fake news. The Code aims to achieve the objectives set out in the Commission Communication through a wide range of commitments, from transparency in political advertising to the closure of fake accounts and the demonetization of disinformation. The Code includes an Annex identifying best practices that signatories should apply to implement its commitments. In the following months, the signatories presented their roadmaps for implementing the Code. As can be read on the Commission’s website, the online platforms and trade associations representing the advertising sector that agreed to the Code of Practice submitted a report in January 2019 setting out the measures taken to comply with their commitments under the Code. Facebook, Google and Twitter then reported on a monthly basis to the Commission about the actions they undertook to improve scrutiny of ad placements, ensure transparency of political and issue-based advertising, and tackle fake accounts and the malicious use of bots.

While the Communication provides many interesting ideas that could have a positive impact in the long term, and adherence to the Code of Practice is an encouraging and unprecedented novelty in our current digital world, the impression is that the Communication and the related Code foster a soft approach. This is understandable given the sensitive nature of any regulation of content. But can a soft approach be effective in the fight against disinformation and help the EU integration process at a time when citizens need more than ever to access and process factual information in order to make responsible choices? Can it really help to fight disinformation and fake news? The question remains: why has a stronger approach (such as a legal one) been excluded from the discussion altogether? This “stronger” path was suggested, amongst others, by the European Parliament in its resolution on Online Platforms and the Digital Single Market of June 2017.[5] In that resolution, the European Parliament called on the Commission “to verify the possibility of legislative intervention to limit the dissemination and spreading of fake content.” The HLEG itself suggested adhering to the “follow the money” principle: online platforms (notably social media) should be obliged to make visible who pays for what information. If strict regulatory intervention is to be excluded, why not do something about the commercial mechanisms sustaining the spread of disinformation? A stronger approach could (should?) have been proposed, and a standardised way of enforcing compliance with transparency requirements could have been put in place, especially for social media platforms active in the EU. To put it more simply, it is legitimate to doubt the rather voluntary approach chosen by the European Commission, especially considering that no concrete solution has been proposed in case social platforms do not keep their promises.