EU AI Act and Deep Fakes: Towards a future Code of Practice on AI Output Transparency

The policy debate around deep fakes largely revolves around their malicious uses. There is a discernible global trend toward legislation addressing AI-generated or manipulated images, particularly where such content is used for criminal purposes (e.g. pornographic deep fakes). The EU AI Act aligns with this broader regulatory movement. However, the implications for creative companies in the entertainment sector should also be considered. The current consultation process may provide an opportunity to ensure that the legal framework appropriately distinguishes between harmful manipulation and creative storytelling by the content sector.

Under the EU AI Act, any AI-generated or manipulated content that appears to be authentic must be clearly disclosed as such. The Regulation refers to any such imagery as ‘deep fakes’. The intention is to protect the public from deception. But for the creative industries, where simulated reality and artistic transformation are the essence of the craft, compliance with these obligations is like fitting a square peg into a round hole.

The EU legislator included a derogation for evidently artistic, creative, satirical, fictional or similar works or programmes, which permits the disclosure to be made in a way that does not hamper the display or enjoyment of the work. However, it is already generally evident to cinemagoers and videogame players that the content they are engaging with is fictional. Did the EU legislator intend disclosure to be required even in these obvious cases, albeit in a way that does not hamper the viewers’ enjoyment?

The dilemmas triggered by the AI Act’s labelling obligation are myriad. Consider, for instance, a film studio that would like to use AI to recreate a deceased actor’s likeness, employ synthetic voiceovers, or digitally rejuvenate ageing talent; or videogame developers deploying generative AI for hyper-realistic avatars in sports or combat simulations.

The AI Act does not provide practical guidance on how such disclosures should be presented or what qualifies as evidently artistic use of deep fakes. This is why the future Code of Practice is so important for the sector. Under the AI Act, voluntary codes of practice are ‘central tools’ for compliance with the Regulation: providers may rely on them to demonstrate compliance to the regulator, the EU AI Office. In time, the EU Commission may approve these codes by way of implementing acts, giving them general validity within the Union.

The new public consultation follows in the wake of the adoption of the first voluntary Code of Practice for General-Purpose AI model providers in the summer. Apart from deep fakes, other transparency obligations include informing users when they are interacting with an AI system (e.g. conversing with a chatbot) and marking AI-generated synthetic content in a machine-readable and detectable format. However, the obligation to disclose deep fakes is likely to have the most far-reaching impact, affecting film, television, video games, and advertising alike. All these sectors should begin preparing.
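To make the ‘machine-readable and detectable’ marking requirement more concrete, the sketch below shows one possible approach: embedding a disclosure label in an image’s metadata using the Python imaging library Pillow. This is purely illustrative; the AI Act does not prescribe any particular format, and the field names used here (ai_generated, disclosure) are our own assumptions, not an established standard.

```python
# Illustrative only: one way to attach a machine-readable AI disclosure to a
# PNG image via text chunks. The field names are assumptions, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save a PNG with metadata flagging it as AI-generated or manipulated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("disclosure", "This content has been generated or manipulated by AI.")
    img.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Read the text chunks back, e.g. to verify a label survived processing."""
    return Image.open(path).text  # Pillow exposes PNG text chunks as a dict
```

A label of this kind is trivially stripped when content is re-encoded or uploaded to platforms, which is one reason emerging practice also looks to watermarking and content-provenance standards.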

The public consultation is a stock-taking exercise with high-level questions designed to gather evidence and examples of the use of deep fakes and of emerging labelling practices (visual tags, metadata, watermarks). Crucially, the public consultation included a whole segment on the interaction between these obligations and the artistic use of AI. For the audiovisual, gaming and advertising sectors, clarity on these points will determine the practical burden of compliance and the design of future production workflows.
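As for the ‘visual tags’ the consultation refers to, a disclosure could in principle be rendered on screen in a way that does not hamper enjoyment of the work. The sketch below, again purely illustrative and resting on our own assumptions about placement and wording, stamps a small semi-transparent label into the corner of an image.

```python
# Illustrative only: a small, unobtrusive "visual tag" disclosing AI use.
# Placement, wording and styling are assumptions; neither the Act nor the
# consultation prescribes them.
from PIL import Image, ImageDraw

def add_visual_tag(src_path: str, dst_path: str, text: str = "AI-generated content") -> None:
    img = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Measure the label and place it in the bottom-right corner with padding.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    w, h = right - left, bottom - top
    x, y = img.width - w - 12, img.height - h - 12
    # A semi-transparent backing box keeps the tag legible on any background.
    draw.rectangle((x - 6, y - 4, x + w + 6, y + h + 4), fill=(0, 0, 0, 128))
    draw.text((x, y), text, fill=(255, 255, 255, 220))
    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path)
```

Whether such an unobtrusive tag would satisfy the derogation for evidently artistic works is exactly the sort of question the future Code of Practice will need to answer.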

At Wiggin, we are closely following this process and advising clients across the media, entertainment, and technology sectors on the impact and application of the EU AI Act.