Instagram faces the prospect of an Ofcom investigation after being accused of turning a blind eye to predators advertising AI-generated child sexual abuse material.
A children's charity has complained to Ofcom after launching a legal challenge to social media giant Meta, which owns Instagram, Facebook and WhatsApp, for 'facilitating the sharing of and profiting from illegal child sexual abuse material'.
It follows an undercover police investigation which revealed that offenders are 'operating in plain sight' using Instagram as a gateway to promote their sites selling artificial intelligence-generated abuse, which is available just two clicks away from the app.
In some instances, paedophiles are gaining as many as 150,000 followers by using AI software to generate thousands of sexualised images of children which they brazenly market on Instagram, telling users to click on their sites to buy more explicit material.
Six months after the scandal was uncovered by the 5Rights Foundation, the charity campaigning for a safe digital environment for children has accused Meta of continuing to host the material, with its algorithms directing paedophiles to replacement accounts with the same images the moment one is shut down.
In the first test of the Online Safety Act, which comes into force next year, Ofcom has been asked to step in to 'combat the ongoing harm caused by the prevalence of child sexual abuse material on Meta's platforms'.
From March the regulator will have the power to fine tech companies up to £18m or 10 per cent of their qualifying worldwide revenue if they do not tackle criminal activity.
The charity says Meta has failed to reform its ineffective monitoring processes to protect children, has 'disregarded' police and legal requests, and has ignored its own in-app reports about harmful material.
In a letter to Ofcom and the Information Commissioner's Office, the charity's lawyers Schillings state: 'Meta is failing to take meaningful steps to suitably address the spread of illegal content on its platform.
'This is particularly egregious in the circumstances where Instagram is a platform that is so widely used by millions of children every day.
'The scale of victimisation, in terms of the number of children at risk of direct and indirect exploitation, is vast.'
Ofcom has been sent 12 undercover police reports from December 2023 to October 2024 after officers found dozens of accounts with names such as 'pervy kinks' featuring AI-generated sexualised images of young children.
Alongside the partially naked pictures were links to pay-per-view websites and Telegram channels featuring footage of real children being raped.
During the investigation, Instagram's algorithms recommended many similar accounts to officers, leading to fears that children stumbling on the sites may be directed into the clutches of child abuse gangs.
Some of the profiles had been brazenly marketing child pornography for over a year.
Far from being a victimless crime, AI technology has enabled offenders to manipulate photographs of real children in the most sickening way as innocent family pictures on social media can be scraped and transformed into child sexual abuse material.
Founder of the charity Baroness Beeban Kidron said: 'It is appalling that a company of Meta's size and resources continues to fail in its duty to protect children. AI-generated child sexual abuse material is readily available now through Instagram. Meta has the tools and means necessary to address this but has chosen not to act effectively.
'This negligence simply cannot be allowed to continue which is why we have asked the ICO and Ofcom to use their authority to force Meta to act. 5Rights will continue to campaign for better online safety for children.
'5Rights Foundation calls for swift regulatory action to ensure Meta upholds its responsibilities under the law, protecting children from exploitation and abuse in the digital space.'
Jenny Afia, Partner at Schillings LLP, added: 'Protecting children's safety and privacy is non-negotiable. Meta's continued failure to remove AI-generated child abuse material is indefensible. We demand Ofcom and the ICO step in to enforce the law and ensure Meta fulfils its obligations.'
Meta did not respond to requests for comment.