At DoubleVerify (DV), we are unwavering in our commitment to keeping digital advertising free from harmful and illegal content. The presence of child sexual abuse material (CSAM) or other forms of criminal content online is abhorrent, and even one impression running alongside this material is unacceptable. We all can be, and should be, part of the solution.

We share the collective outrage and disgust that this content exists on the internet. Helping to protect the integrity of the digital ecosystem in that context is a responsibility we take seriously. To that end, we have mobilized with maximum urgency and intensity to fortify our customer tools, collaborate with industry and law enforcement partners, and implement new dynamic solutions that provide advertisers with stronger, more proactive safeguards. Furthermore, we will endeavor to make these available to all industry partners, whether or not they are DV customers. We know that we can do more together.

This is an ongoing investment by DV to ensure we and the industry never play a role in supporting this content.

Our immediate actions include the following:

New Industry Safeguards for Advertisers

1. CSAM & High-Risk Illicit Content Avoidance Category

On top of our existing brand safety protection tools, today DV launched a “Highly Illicit: Do Not Monetize” Content Category, a new industry-wide safeguard designed to help open web advertisers avoid domains flagged by trusted third-party experts, including the National Center for Missing & Exploited Children (NCMEC), as at risk of containing or facilitating the distribution of CSAM or other forms of illegal content. This measure further enhances the existing tools that already protect customers from high-risk content. We are also making this category available to any platform, whether or not it is a DV partner.

To develop this category:

  • DV reviewed three years’ worth of publicly available NCMEC data as a starting point, analyzing over 300 domains and electronic service providers, ad-supported and non-ad-supported, that had received CSAM-related notices.
  • We cross-referenced these lists with DV’s existing brand safety classifications, finding that roughly 100 of these domains were already covered by our classifications, including those within our brand safety floor (an illustrative sketch of this cross-referencing step follows this list).
  • For the remaining sites, our classification team conducted a detailed site-by-site analysis to determine risk levels, excluding large, well-moderated social media and search platforms that appear on the list.
  • DV’s “Highly Illicit: Do Not Monetize” Content Category, automatically provided to all DV advertisers and distributed to over 100 platform partners, includes dozens of illicit and high-risk sites. It will continue to evolve and be updated on an ongoing basis, incorporating additional inputs from law enforcement agencies and expert third parties to drive the best protection.
  • This category will be enabled by default.
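
To make the cross-referencing step above more concrete, here is a minimal, hypothetical Python sketch. The domain names and the normalize and cross_reference helpers are illustrative assumptions for this post, not DV’s actual classification pipeline.

```python
# Hypothetical sketch: cross-referencing externally flagged domains against
# existing brand safety classifications. All names and data are illustrative.

def normalize(domain: str) -> str:
    """Lowercase a domain and strip a leading 'www.' so variants compare equal."""
    domain = domain.strip().lower()
    return domain[4:] if domain.startswith("www.") else domain

def cross_reference(flagged: set[str], classified: set[str]) -> tuple[set[str], set[str]]:
    """Split flagged domains into those already classified and those needing manual review."""
    flagged_norm = {normalize(d) for d in flagged}
    classified_norm = {normalize(d) for d in classified}
    return flagged_norm & classified_norm, flagged_norm - classified_norm

# Example usage with made-up domains
flagged_domains = {"example-filehost.net", "www.example-forum.org"}
existing_classifications = {"example-filehost.net"}

already_covered, needs_review = cross_reference(flagged_domains, existing_classifications)
print(f"Already covered: {sorted(already_covered)}")
print(f"Queued for site-by-site review: {sorted(needs_review)}")
```

In this sketch, domains that fall outside existing classifications would then proceed to the site-by-site review described above.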

This category is living and dynamic: new sites will be added based on trusted third-party insights, industry collaboration, and law enforcement guidance. DV is also actively working to make this list freely accessible across the industry for integration into other platforms’ reporting and brand safety tools. For more information or to discuss how this category can be enabled on your platform, please contact us here.

2. P2P Sharing and Streaming Avoidance Category

Peer-to-peer (P2P) sharing and streaming domains and apps enable the distribution of digital media such as software, videos, music, and images. Though useful for distributing digital content, these platforms can be exploited to host illegal content, violating site policies and exposing advertisers to risk.

To better protect advertisers, DV has launched a dedicated “P2P Sharing and Streaming” avoidance category, enabling brands to block ads from appearing on P2P sharing and streaming sites and apps that could be misused to host illegal or exploitative content, and helping advertisers navigate these potentially high-risk environments more effectively. The release supports pre-bid protection and post-bid monitoring across open web and mobile app inventory, allowing advertisers to both block and measure ad delivery on P2P sharing and streaming platforms. This category will be enabled by default.
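
As a rough illustration of how a buying platform might act on an avoidance category like this at pre-bid time, here is a minimal Python sketch. The category contents and the should_block_pre_bid function are hypothetical assumptions for illustration and do not represent DV’s implementation or APIs.

```python
# Hypothetical sketch of a pre-bid check against an avoidance category.
# The domains and function below are illustrative assumptions only.

P2P_SHARING_AND_STREAMING = {"example-p2p-stream.net", "example-torrent-mirror.org"}

def should_block_pre_bid(domain: str, avoidance_list: set = P2P_SHARING_AND_STREAMING) -> bool:
    """Return True if a bid request's domain falls within the avoidance category."""
    return domain.strip().lower() in avoidance_list

# Example: skip bidding on flagged inventory, evaluate everything else normally
for domain in ["example-news-site.com", "example-p2p-stream.net"]:
    action = "skip bid" if should_block_pre_bid(domain) else "evaluate bid"
    print(f"{domain}: {action}")
```

Post-bid monitoring would, in the same spirit, compare where ads actually served against the category after the fact; the sketch above covers only the pre-bid side.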

3. Ongoing Collaboration with Law Enforcement and Child Safety Organizations

While DV does not specialize in CSAM detection, we recognize our role in helping prevent ad-supported monetization of illegal content. As part of our commitment:

  • We have formally reached out to the FBI to offer our full assistance on this issue and future related investigations.
  • We are engaging with additional child safety organizations to support the ongoing identification and classification of high-risk domains.
  • We will release research on the ad-supported monetization of high-risk or illegal content, highlighting potential new threats to keep the industry, third-party experts, and law enforcement informed.

Any newly identified sites that are not already covered will be added immediately, either to our “Highly Illicit: Do Not Monetize” category or to our “P2P Sharing and Streaming” category.

Looking Ahead

Our work is not done. The fight against illegal and harmful content online requires ongoing, collective action. We encourage all advertisers to apply both the “P2P Sharing and Streaming” category and the “Highly Illicit: Do Not Monetize” category to their media buys.

By working together — across advertisers, ad tech platforms, publishers, law enforcement, and third-party experts — we can make a meaningful impact in keeping digital advertising safe, responsible, and free from the monetization of harmful content. It is a constantly evolving challenge, but we understand the importance of our responsibility and the trust the marketplace has placed in us.

For more updates on the steps we’ve taken and will continue to take, please see posts to DV’s Transparency Center.