The Internet Watch Foundation (IWF), which finds and helps remove abuse imagery online, said it received 291,273 reports of child sexual abuse imagery in 2024.
In its annual report, the organisation said case numbers were rising, driven by threats including AI-generated sexual abuse content, sextortion and the malicious sharing of nude or sexual imagery.
It said under-18s were now facing a crisis of sexual exploitation and risk online.
In response, the IWF announced it was making a new safety tool available to smaller websites for free, to help them spot and prevent the spread of abuse material on their platforms.
The tool, known as Image Intercept, can spot and block images in the IWF’s database of more than 2.8 million images which have been digitally marked as criminal imagery.
The IWF said the tool would give wide swathes of the internet new, 24-hour protection, and help smaller firms comply with the Online Safety Act.
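In broad terms, tools of this kind work by comparing a fingerprint (hash) of each uploaded file against a list of fingerprints of known criminal imagery. The sketch below is a minimal illustration of that idea only; it is not the IWF's actual API or method. The file names and functions are hypothetical, and real deployments typically use perceptual hashes that survive resizing and re-encoding, rather than the exact-match SHA-256 used here for simplicity.

```python
import hashlib

def load_blocklist(path: str) -> set[str]:
    """Load one hex-encoded hash per line into a set for O(1) lookups."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(upload_path: str, blocklist: set[str]) -> bool:
    """Return True if the uploaded file matches a known-bad hash."""
    return sha256_of_file(upload_path) in blocklist

# Hypothetical usage on an upload pipeline:
# blocklist = load_blocklist("known_hashes.txt")
# if should_block("upload.jpg", blocklist):
#     ...  # reject the upload and report it
```

Exact-match hashing only catches byte-identical copies; the appeal of perceptual hashing in production systems is that a single database entry can block many altered versions of the same image.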
The Online Safety Act began coming into effect last month, and requires platforms to follow new codes of practice, set by the regulator Ofcom, in order to keep users safe online.
Derek Ray-Hill, interim chief executive at the IWF, said: “Young people are facing rising threats online where they risk sexual exploitation, and where images and videos of that exploitation can spread like wildfire.
“New threats like AI and sexually coerced extortion are only making things more dangerous.
“Many well intentioned and responsible platforms do not have the resources to protect their sites against people who deliberately upload child sexual abuse material.
“That is why we h...