EU’s “strengthened” Code of Practice on Disinformation is signed by 34 tech firms, platforms and civil groups to beef up battle against fake news
Big name tech firms have signed up to the European Union’s “strengthened” Code of Practice on Disinformation.
The EU says that its “new stronger and more comprehensive” Code of Practice takes into account lessons learnt from the Covid-19 pandemic and Russia’s illegal war in Ukraine.
Old Code of Practice
Signatories to the original 2018 code were asked to report monthly on their actions against disinformation, but in February 2019 the Commission rebuked Facebook, Google and Twitter over their efforts to crack down on fake news.
Then in October 2019 the European Commission said that tech giants needed to do more to tackle fake news, after they had submitted self-assessment reports.
Signatories to the original code were also asked to tighten up on ad placement, transparency of political advertising, closure of fake accounts, and identification of automated bots.
The fact the EU has revised its Code of Practice on Disinformation should come as no surprise.
The Commission said the original 2018 code had brought positive outcomes by increasing platform accountability and public scrutiny of disinformation countermeasures, but it remained frustrated by certain aspects of it.
Notably the European Commission previously said that the quality of the information disclosed by the Code’s signatories was still insufficient, and shortcomings limited the effectiveness of the Code.
These shortcomings included:
- the absence of relevant key performance indicators (KPIs) to assess the effectiveness of platforms’ policies to counter the phenomenon;
- the lack of clearer procedures, commonly shared definitions and more precise commitments;
- the lack of access to data allowing for an independent evaluation of emerging trends and threats posed by online disinformation;
- the absence of structured cooperation between platforms and the research community;
- and the need to involve other relevant stakeholders, in particular from the advertising sector.
Revised Code of Practice
The code comes after years of concern that online platforms were being used to spread disinformation.
Recent examples included Covid misinformation on social media platforms, and Russia’s ongoing propaganda in the lead-up to its second invasion of the sovereign nation of Ukraine on 24 February 2022.
The EC said the “reinforced Code builds on the first Code of Practice of 2018”, and “sets out extensive and precise commitments by platforms and industry to fight disinformation and marks another important step for a more transparent, safe and trustworthy online environment.”
“This new anti-disinformation Code comes at a time when Russia is weaponising disinformation as part of its military aggression against Ukraine, but also when we see attacks on democracy more broadly,” said Věra Jourová, VP for Values and Transparency.
“We now have very significant commitments to reduce the impact of disinformation online and much more robust tools to measure how these are implemented across the EU in all countries and in all its languages,” said Jourová.
“Users will also have better tools to flag disinformation and understand what they are seeing,” said Jourová. “The new Code will also reduce financial incentives for disseminating disinformation and allow researchers to access platforms’ data more easily.”
Those platforms breaking the EU Code risk fines of up to 6 percent of their global turnover.
Together with the recently agreed Digital Services Act and the upcoming legislation on transparency and targeting of political advertising, the strengthened Code of Practice is an essential part of the Commission’s toolbox for fighting the spread of disinformation in the EU.
The 34 signatories include major online platforms such as Meta, Google, Twitter, TikTok, and Microsoft, as well as a variety of other players: smaller or specialised platforms, the online ad industry, ad-tech companies, fact-checkers, and civil-society organisations that offer specific expertise and solutions to fight disinformation.
The EU said the strengthened Code aims to “address the shortcomings of the previous Code, with stronger and more granular commitments and measures, which build on the operational lessons learnt in the past years.”
Specifically, the new Code contains commitments to:
- Broaden participation: the Code is not just for big platforms, but also involves a variety of diverse players with a role in mitigating the spread of disinformation, and more signatories are welcome to join;
- Cut financial incentives for spreading disinformation by ensuring that purveyors of disinformation do not benefit from advertising revenues;
- Cover new manipulative behaviours such as fake accounts, bots or malicious deep fakes spreading disinformation;
- Empower users with better tools to recognise, understand and flag disinformation;
- Expand fact-checking in all EU countries and all its languages, while making sure fact-checkers are fairly rewarded for their work;
- Ensure transparent political advertising by allowing users to easily recognise political ads thanks to better labelling and information on sponsors, spend and display period;
- Better support researchers by giving them better access to platforms’ data;
- Evaluate its own impact through a strong monitoring framework and regular reporting from platforms on how they’re implementing their commitments;
- Set up a Transparency Centre and Task Force for an easy and transparent overview of the implementation of the Code, keeping it future-proof and fit for purpose.
Finally, the Code aims to become recognised as a Code of Conduct under the Digital Services Act to mitigate the risks stemming from disinformation for Very Large Online Platforms.
Signatories will have 6 months to implement the commitments and measures to which they have signed up.
At the beginning of 2023, they will provide the Commission with their first implementation reports.