Mrs Lockhart said,
“This guidance amounts to little more than symbolic gestures, lacking the necessary enforcement mechanisms to drive real change.
While Ofcom acknowledges the escalating threat posed by online harassment, revenge porn, and AI-generated deepfakes to women and girls, its proposals rely entirely on voluntary compliance from social media giants that have consistently failed to act.
Instead of implementing robust regulatory penalties, Ofcom’s approach hinges on merely ‘naming and shaming’ platforms that do not meet its standards, an inadequate response given the scale of the problem.
To illustrate, major social media companies generate substantial revenues and have extensive user bases in the UK:
• Meta Platforms Inc. (Facebook and Instagram): In 2023, Meta reported global revenues of approximately $135 billion. In the UK, Facebook alone had 37.1 million users as of January 2024.
• TikTok: Despite reporting a loss of $1.5 billion in its UK and European operations in 2023, TikTok's revenue increased by 75% to $4.6 billion, with a significant portion attributed to its UK user base.
• YouTube (Alphabet Inc.): Reached over 44 million UK adults in May 2024, making it the most popular social media platform in the country.
These figures underscore the vast influence and financial power these platforms wield. Big Tech has long demonstrated that it will not act decisively unless compelled to do so. The absence of meaningful enforcement mechanisms, such as legally binding obligations and financial penalties substantial enough to deter Big Tech, renders Ofcom’s recommendations toothless.
The guidance fails to introduce substantial protections for victims. Women, girls, and other vulnerable users continue to face persistent harassment, exploitation, and harmful online behaviour with little recourse. Without clear accountability measures or consequences for non-compliant platforms, victims remain at the mercy of tech companies’ inconsistent and often ineffective policies.
Asking platforms to consider interventions like AI moderation, reporting tools, and content removal without any legal compulsion will not address the systemic failures that allow abuse to flourish.”