Thorn and Hive, which first partnered in April 2024, are expanding their efforts to combat child sexual abuse material (CSAM) with a new artificial intelligence (AI) tool. Announced yesterday, November 21, the new API integrates Thorn’s Safer technology with Hive’s content moderation solutions to make digital platforms safer for children.
Safer has flagged over six million CSAM files since its inception in 2019. The new tool expands its reach from known to previously unseen CSAM by pairing hashing techniques, which match content against verified files, with machine-learning classifiers that can detect new material.
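Thorn has not published Safer’s internals, but the two-stage idea described here, hash matching for known files plus a machine-learning model for new material, can be sketched roughly as follows. Everything in this sketch (the hash set, the `classify` placeholder, the use of SHA-256) is illustrative and is not Safer’s actual design; production systems use perceptual hashes that survive re-encoding rather than cryptographic ones.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class DetectionResult:
    matched_known: bool  # exact match against a hash of verified material
    risk_score: float    # model score for previously unseen content

# Hypothetical hash set of previously verified material.
KNOWN_HASHES: set[str] = set()

def classify(content: bytes) -> float:
    """Placeholder for a trained ML model scoring unseen content (0 to 1)."""
    return 0.0  # a real system would run model inference here

def scan(content: bytes) -> DetectionResult:
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_HASHES:
        # Known material: flag immediately, no model inference needed.
        return DetectionResult(matched_known=True, risk_score=1.0)
    # Unmatched content falls through to the machine-learning classifier.
    return DetectionResult(matched_known=False, risk_score=classify(content))
```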
AI-Powered Content Moderation
The Safer tool is designed to shield human moderators from harmful content while ensuring that abuse material is flagged and removed quickly. It generates a risk score for each piece of flagged content, making it easier for human moderators to review reports and make informed decisions. Thorn’s AI model uses trusted data from the National Center for Missing and Exploited Children (NCMEC) to enhance accuracy and reduce false positives.
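Thorn has not said how platforms act on these scores, but a minimal triage rule built on a risk score might look like the sketch below. The queue names and thresholds are invented for illustration only.

```python
def route_flag(risk_score: float) -> str:
    """Route flagged content to a review queue by risk score.

    Thresholds and queue names are hypothetical, not Safer's policy.
    """
    if risk_score >= 0.9:
        return "urgent"        # near-certain match: prioritize removal and reporting
    if risk_score >= 0.5:
        return "standard"      # a human moderator reviews before any action
    return "low_priority"      # likely false positive: periodic spot-checks
```

Sorting reports this way would let moderators spend their time on the highest-risk items first while limiting their exposure to material the model is less certain about.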
Kevin Guo, Hive’s CEO, said that many kinds of platforms, including social media, e-commerce, and dating apps, can benefit from the tool. Both companies emphasize that CSAM is a platform-agnostic problem that demands proactive, technology-led solutions. Neither company has disclosed its initial platform partners, but Thorn says it is willing to work with any platform.
“Protecting children in the digital age requires innovative solutions that evolve as quickly as the threats they face,” said Julie Cordua, CEO of Thorn. “Together, we’re making it possible for platforms of any size to implement robust protection measures and contribute to building a safer digital world for children.”
Challenges with AI-Generated Content
AI holds incredible promise for combating online threats to children, but the technology also poses risks. Earlier this year, the Internet Watch Foundation (IWF) reported an increase in AI-generated CSAM, finding more than 20,000 images on a single dark web forum in one month. Most of the images depicted extreme abuse, an indication of how the technology is being misused to create realistic CSAM.
Thorn’s Vice President of Data Science, Rebecca Portnoff, acknowledges this issue and advocates a holistic approach: tools like Safer should be paired with efforts from AI companies to “prevent the creation of this material to begin with.”
Hive and Thorn are actively working to enhance Safer’s capabilities to detect AI-generated CSAM. “You can just imagine the intersection of AI-generated content and CSAM presents a whole new host of issues, and we really need to get ahead of it,” said Guo.
The companies are also developing a text classifier tool to “flag conversations that may indicate child exploitation.” This feature has been highly requested by platforms and is expected to play a key role in future efforts to combat online child abuse.
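Neither company has described how the text classifier will work. Purely as a conceptual sketch, a conversation-level flag built on top of any per-message scoring model might look like this; the function names and threshold are hypothetical.

```python
from typing import Callable

def flag_conversation(
    messages: list[str],
    score_message: Callable[[str], float],  # any trained text model returning 0-1
    threshold: float = 0.8,                 # illustrative cutoff, not a real setting
) -> bool:
    """Flag a conversation if any message scores above the threshold.

    A production classifier would likely score messages in context
    (sequences, metadata) rather than independently; this is a sketch.
    """
    return any(score_message(m) >= threshold for m in messages)
```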
This partnership is an important step in tackling both current and emerging CSAM threats. Portnoff stated that the impact of AI solutions will grow as more platforms adopt them, allowing continuous refinement to improve their performance.
For digital platforms, adopting advanced AI tools is crucial for protecting the next generation. Thorn and Hive’s collaboration shows that while AI still has challenges to overcome, it is already making significant strides in defending children from harm.