At DoubleVerify (DV), we maintain the highest standards of accuracy in content classification. A new report by Adalytics misrepresents our classifications and demands correction.
No Full Insight
This latest report is part of a concerning trend where third-party research lacks the information, knowledge and understanding necessary to evaluate the nuances of media verification. We have previously noted that third-party firms do not have full insight into what actually gets flagged, why certain media is flagged, or the scale of any incidents with our customers. With that in mind, reports like this are misleading as they lack critical details about the featured advertisers’ campaign strategy and setup.
For example, depending on the brand and strategy, advertisers can choose to feature publishers, like Fandom, in campaigns regardless of how their content is classified. A publisher placed on an exceptions list supersedes content avoidance settings. These decisions are made by the agency and brand. Campaigns can also selectively minimize certain content classifications, criteria or controls in one-off scenarios. The report provides none of this detail because its authors do not have access to this information.
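To illustrate the precedence conceptually, here is a minimal sketch assuming a simplified campaign model; the names (Campaign, avoidance_categories, publisher_exceptions) are hypothetical and do not reflect DV's actual products or APIs.

```python
# Illustrative sketch only: a simplified model of how an exceptions list can
# supersede content-avoidance settings in a campaign setup. All names here are
# hypothetical and do not represent DV's actual systems.

from dataclasses import dataclass, field


@dataclass
class Campaign:
    # Content categories the advertiser has chosen to avoid.
    avoidance_categories: set[str] = field(default_factory=set)
    # Publishers the advertiser has chosen to include regardless of
    # how their content is classified.
    publisher_exceptions: set[str] = field(default_factory=set)

    def allows(self, publisher: str, content_categories: set[str]) -> bool:
        """Return True if an impression would be eligible for this campaign."""
        # An exceptions-list entry supersedes content avoidance.
        if publisher in self.publisher_exceptions:
            return True
        # Otherwise, any overlap with avoided categories filters the impression.
        return not (content_categories & self.avoidance_categories)


# Example: a campaign that avoids a category but makes an exception for a publisher.
campaign = Campaign(
    avoidance_categories={"hate_speech"},
    publisher_exceptions={"fandom.com"},
)
print(campaign.allows("fandom.com", {"hate_speech"}))   # True: exception applies
print(campaign.allows("example.com", {"hate_speech"}))  # False: category avoided
```

In this simplified view, the same flagged content is eligible for one campaign and filtered from another purely because of decisions made by the agency and brand, which is exactly the setup information outside researchers cannot see.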
Pre-Bid Avoidance and Post-Bid Monitoring
Most of our advertisers use pre-bid avoidance across the majority of their campaigns, but not all do. Even content identified as negative may not be filtered out pre-bid, for various reasons. Some advertisers avoid or block negative content, while others choose to monitor incident rates post-bid and react depending on the severity. For example, one of the advertisers (among others inaccurately cited in the report) chose a monitor-only approach for the majority of its campaigns, opting not to block content. This is yet another instance of intentionally misleading information, as the third party lacks full knowledge of the specific pre- or post-bid settings for each of our customers.
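As a conceptual illustration of the difference between these settings, the sketch below contrasts pre-bid avoidance with a monitor-only post-bid approach; the mode names and function are hypothetical and are not DV's actual configuration options.

```python
# Illustrative sketch only: a simplified contrast between pre-bid avoidance
# (filtering an impression before the bid) and a monitor-only post-bid setup
# (serving the ad but recording an incident for later review). The names are
# hypothetical and do not represent DV's actual configuration model.

from enum import Enum, auto


class BrandSafetyMode(Enum):
    PRE_BID_AVOIDANCE = auto()    # filter flagged inventory before bidding
    POST_BID_MONITORING = auto()  # serve, then measure and report incidents


def handle_impression(mode: BrandSafetyMode, flagged: bool) -> str:
    if flagged and mode is BrandSafetyMode.PRE_BID_AVOIDANCE:
        return "skip bid"                # flagged content never serves
    if flagged and mode is BrandSafetyMode.POST_BID_MONITORING:
        return "serve and log incident"  # advertiser reviews incident rates later
    return "serve"


# The same flagged impression is handled differently depending on the
# advertiser's chosen setting.
print(handle_impression(BrandSafetyMode.PRE_BID_AVOIDANCE, flagged=True))    # skip bid
print(handle_impression(BrandSafetyMode.POST_BID_MONITORING, flagged=True))  # serve and log incident
```

The point is that an ad appearing next to flagged content is not, on its own, evidence of a classification failure; it may reflect a deliberate monitor-only setting chosen by the advertiser.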
Errors Differentiating DV Code
Another significant error in the report is its failure to recognize that DV code appearing in an ad call can relate to either our publisher services or our advertiser services. DV has confirmed that all of the tags observed in the screenshots provided to us are associated with our publisher services and have nothing to do with the advertiser. The third party conducting the analysis either does not understand this distinction or is intentionally ignoring it, and it never reached out to DV for clarification. As a result, a number of the report's examples correlating ads to DV code are simply incorrect.
Manufactured Results
This speaks to a broader issue: the results in this report are entirely manufactured, from the omission of client campaign setup information to the methodology itself, where the researcher arbitrarily searched for racist terms. The outcomes are manipulated and lack any organic validity or scale.
To be clear, no customer has expressed concerns about the accuracy of our content categories. DV’s own preliminary analysis confirms that all content shared with us was classified accurately for customers and partners.
In short, this report is yet another example of flawed and misleading analysis, engineered to achieve a specific outcome by a non-accredited vendor. Their research has been repeatedly debunked, and they are currently facing litigation over inaccuracies in their findings.
Here are some examples of recent inaccurate research by this vendor:
MFA Report: An analysis of 21 MFA websites suggested that verification companies, including DV, allowed ads on these platforms. Contrary to these claims, DV had already classified virtually all these sites as MFA.
Forbes Report: This report falsely claimed DV did not classify a Forbes subdomain as MFA. In reality, DV had already identified this inventory as low-tier MFA before the research was publicized.
Our Commitment to Transparency and Protection
DV continues to champion the accurate classification of content and the provision of robust protection and transparency to both advertisers and publishers. As the digital landscape evolves, so too does our commitment to setting industry benchmarks for verification and classification standards.
We welcome collaboration with industry leaders to help elevate the quality of all of our solutions. What we are not aligned with, however, is the misrepresentation of selective data points, provided with no accredited process and introduced as definitive fact in order to promote another vendor’s business.
This piece is part of DoubleVerify’s newly launched Transparency Center, a dedicated portal designed to educate the industry about DV technology and measurement. By providing detailed explanations, insights and timely statements on key issues, we aim to foster trust and transparency within the digital advertising ecosystem.