Source: Ofcom. Published on this website on Friday 11 July 2025 by Jill Powell.
Deepfakes are AI-generated videos, images and audio deliberately created to look real. They pose a significant threat to online safety: we have seen them used to carry out financial scams, to depict people in non-consensual sexual imagery and to spread disinformation about politicians.
In July last year, we published our first Deepfake Defences paper, and today’s follow-up dives deeper into the merits of four ‘attribution measures’: watermarking, provenance metadata, AI labels, and context annotations. These measures are designed to provide information about how AI-generated content has been created and, in some cases, can indicate whether the content is accurate or misleading.
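To make provenance metadata concrete, here is a minimal sketch of the principle: a machine-readable record attached to a file describing how it was made. This is an illustration only, not the C2PA standard that real provenance systems use, and the field names ("ai-generated", "generator") are assumptions invented for the example. It uses Pillow to store and read the record in a PNG's text chunks.

```python
# Illustration of the provenance-metadata principle: a hypothetical record
# stored in PNG text chunks via Pillow. Real deployments use standards such
# as C2PA; the field names below are invented for this sketch.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def write_provenance(src_path: str, dst_path: str, generator: str) -> None:
    """Attach a simple provenance record to an image as PNG text chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")   # hypothetical field: AI was involved
    meta.add_text("generator", generator)   # hypothetical field: which tool
    img.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    """Return any text-chunk metadata found on a PNG (empty dict if none)."""
    return dict(getattr(Image.open(path), "text", {}) or {})

if __name__ == "__main__":
    Image.new("RGB", (32, 32)).save("demo.png")
    write_provenance("demo.png", "demo_tagged.png", generator="example-model-v1")
    print(read_provenance("demo_tagged.png"))
    # {'ai-generated': 'true', 'generator': 'example-model-v1'}
```

One caveat worth noting: metadata of this kind typically does not survive screenshots or platforms that strip metadata on upload, which is one reason the paper treats attribution measures as complementary rather than sufficient on their own.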
This comes as our new research reveals that 85% of adults support online platforms attaching AI labels to content, although only one in three (34%) have ever seen one.
Drawing on our new user research, interviews with experts, a literature review, and three technical evaluations of open-source watermarking tools, this latest discussion paper assesses the merits and limitations of these measures in identifying deepfakes.
Our analysis reveals eight key takeaways which should guide industry, government and researchers:
- Evidence shows that, when deployed with care and properly tested, attribution measures can help users engage with content more critically.
- Users should not be left to identify deepfakes on their own, and platforms should avoid placing the full burden on individuals to detect misleading content.
- Striking the right balance between simplicity and detail is crucial when communicating information about AI to users.
- Attribution measures need to accommodate content that is neither wholly real nor entirely synthetic, communicating how AI has been used to create content and not just whether it has been used.
- Attribution measures can be susceptible to removal and manipulation. Our technical tests show that watermarks can often be stripped from content by basic edits; the sketch after this list illustrates why naive schemes are so fragile.
- Greater standardisation across individual attribution measures could boost their efficacy and take-up.
- The pace of change means it would be unwise to make sweeping claims about attribution measures.
- Attribution measures should be used in combination with other interventions, such as AI classifiers and reporting mechanisms, to tackle the widest range of deepfakes.
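The fragility finding is easy to demonstrate with a deliberately naive scheme (this is not one of the open-source tools evaluated in the paper): hide a payload in the least significant bits of an image, then apply a routine edit such as a lossy re-save and watch the payload corrupt.

```python
# A deliberately naive watermark: one payload bit per pixel in the least
# significant bit of the red channel. A routine edit (saving as JPEG, whose
# quantisation discards tiny pixel differences) is usually enough to destroy it.
from PIL import Image

PAYLOAD = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit watermark

def embed_lsb(img: Image.Image, bits: list[int]) -> Image.Image:
    """Write each bit into the red-channel LSB of successive pixels."""
    out = img.convert("RGB").copy()
    px = out.load()
    for i, bit in enumerate(bits):
        r, g, b = px[i, 0]
        px[i, 0] = ((r & ~1) | bit, g, b)
    return out

def extract_lsb(img: Image.Image, n: int) -> list[int]:
    """Read back n bits from the red-channel LSBs of successive pixels."""
    px = img.convert("RGB").load()
    return [px[i, 0][0] & 1 for i in range(n)]

if __name__ == "__main__":
    marked = embed_lsb(Image.new("RGB", (64, 64), (120, 130, 140)), PAYLOAD)
    print("fresh copy:   ", extract_lsb(marked, 8))  # matches PAYLOAD exactly

    marked.save("marked.jpg", quality=85)            # a single "basic edit"
    print("after re-save:", extract_lsb(Image.open("marked.jpg"), 8))
    # LSBs are usually scrambled, so the watermark no longer reads back
```

Production watermarking schemes spread the signal across many pixels or frequency coefficients to resist exactly this kind of edit, but, as our technical tests show, even those can often be stripped by more determined editing.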
The attribution measures explored in this paper are not new rules or expectations for tech firms; instead, the findings can guide those deploying these tools to help identify deepfake content. This research will also inform Ofcom’s policy development and supervision of regulated services under the Online Safety Act.