Below is a response to inaccurate claims raised in an article published by The Wall Street Journal on March 28, titled “Efforts to Weed Out Fake Users for Online Advertisers Fall Short.”
Helpful Background:
Adalytics’ latest “report” and The Wall Street Journal’s coverage of it are inaccurate and misleading. Both Adalytics and The WSJ are aware that the premise and conclusions of the report are unsupported by the facts. As we’ve seen in prior cases, Adalytics manipulates data and uses these reports as marketing tools to sell its own services—at the expense of transparency, accuracy, and trust.
Here are the facts: the report wrongly suggests that DV failed to detect general invalid traffic (GIVT) and more evasive bots, like URLScan. In every example shared with us prior to publication, including The Guardian example featured in their article, DV had correctly identified the bot traffic. When that occurs, the impressions are removed from billable counts reported to DV’s advertiser customers, per industry standards. The WSJ was informed of this before publication and was provided with evidence of numerous inaccuracies. Notably, none of this information is in The WSJ article, as its inclusion would have undermined the false claims being made.
Upon review of the article, The WSJ does not cite a single example of a GIVT impression that DV actually failed to detect or report to the advertiser. Meanwhile, an independent fraud expert has reviewed The WSJ’s information and has said it “really misses the mark on a few key and obvious points” and is based on “a manipulated test.”
While Adalytics never provided DV with the full report prior to The WSJ publishing the story, DV was first presented with details of the report by The WSJ in January. At that time, we outlined flaws and inaccuracies, which we shared transparently in a statement and blog post on GIVT. Since then, although Adalytics still has not given DV direct access to its report, we have received additional information regarding its claims, which continue to be based on misleading and misunderstood data. Despite our clarifications, The WSJ and Adalytics have chosen to misrepresent the facts while demonstrating a fundamental misunderstanding of invalid traffic (IVT), our technology, and the broader digital ad ecosystem.
As an example, much of the research cited in The WSJ’s story relies on the incorrect assumption that URLScan—a security analysis bot prioritized in the story’s methodology simply because Adalytics uses it—is a declared bot. While URLScan is “benign,” it is not self-declared by default. As URLScan’s CEO Johannes Gilger clarified via email—and it appears The WSJ did not contact him for comment—“Our scanner does not announce itself, that would defeat the purpose of the tool. Instead, it will look like a regular web-browser.”
As a result, DV might detect, identify, and manage URLScan in a variety of ways. You can see more information here. However, regardless of the method or process, DV accurately detects URLScan pre-bid (if permitted) and post-bid (if not permitted pre-bid)—in which case the impressions are removed from billable counts reported to DV customers.
Consistent with its prior reports, Adalytics’ methods and conclusions are fundamentally flawed and built upon a problematic and biased methodology. Their analysis demonstrates a serious misunderstanding of the ecosystem, and is solely intended to promote Adalytics’ own services, which lack any industry-supported accuracy testing or certifications, and any meaningful scale.
Claims & Corrections:
CLAIM #1:
“A new report from Adalytics, a firm that helps brands analyze where their ads appear, says the top three companies that advertisers pay to detect and filter out bots—DoubleVerify, Integral Ad Science and Human Security—regularly miss nonhuman traffic.”
CORRECTION:
It’s important to distinguish between filtration, used here to refer to pre-bid blocking, and detection, which includes both pre-bid blocking and post-bid identification. For GIVT, if traffic isn’t blocked pre-bid for a technical reason (DSP limitation, no pre-bid integration, the specific type of bot traffic — see here), it’s caught post-bid and removed from billable impressions, per industry standards. This data is fully available to advertisers in DV’s GIVT Disclosures Reports. Most importantly, advertisers would not pay for this traffic.
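The pre-bid/post-bid split described above can be sketched as a simple reconciliation step. This is an illustrative toy, not DV’s actual implementation: the field names (`is_givt`, `billable_count`) and the data shape are assumptions made for the example.

```python
# Illustrative sketch of post-bid GIVT reconciliation (not DV's actual code).
# Impressions flagged as GIVT are excluded from the billable total reported
# to the advertiser, even when they were served (i.e., not blocked pre-bid).

def reconcile_billable(impressions):
    """Split served impressions into billable and disclosed-GIVT buckets."""
    billable = [imp for imp in impressions if not imp.get("is_givt", False)]
    givt_disclosed = [imp for imp in impressions if imp.get("is_givt", False)]
    return {
        "billable_count": len(billable),
        "givt_disclosed_count": len(givt_disclosed),
    }

served = [
    {"id": 1, "is_givt": False},  # human traffic
    {"id": 2, "is_givt": True},   # declared crawler caught post-bid
    {"id": 3, "is_givt": False},
]
result = reconcile_billable(served)
# result == {"billable_count": 2, "givt_disclosed_count": 1}
```

The point the sketch makes is that “an ad was served to a bot” and “an advertiser was billed for a bot” are different outcomes: the GIVT impression above was served, but never reaches the billable count.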
When pre-bid avoidance is enabled, GIVT from known (self-declared) bots appears on less than 0.03% of programmatic impressions. Even then, DV identifies these bots post-bid in virtually all cases. This represents a 75% to 98% reduction—depending on client settings—compared with unfiltered programmatic impressions, which proves that GIVT pre-bid avoidance is highly effective, especially at reducing post-bid reconciliation. Adalytics is attempting to cast doubt on pre-bid avoidance because they do not offer this technology at the scale or depth that DV does.
Ultimately, ad fraud is a $70 billion issue in the U.S. alone. DV’s solutions are extremely effective, saving clients millions of dollars annually in wasted ad spend. The critical point is not whether one ad is served to an unidentified bot. It is that through our unique combination of pre- and post-bid technology, we prevent billions of impressions from reaching fraudulent inventory. This protects advertiser budgets while ensuring legitimate publishers benefit from that spend.
CLAIM #2:
“The report, shared exclusively with The Wall Street Journal, found tens of millions of instances over seven years in which ads for brands including Hershey’s, Tyson Foods, T-Mobile, Diageo, the U.S. Postal Service and the Journal were served to bots across thousands of websites. This occurred even in cases when bots identified themselves as such, because they were used for benign purposes like archiving websites and detecting security threats.”
CORRECTION:
Ads are served to known, self-declared GIVT bots daily for many reasons. Whether filtered pre-bid or detected post-bid, this traffic is removed from billable impressions for DV clients.
For example, based on conversations with The WSJ, we know that Adalytics’ report cites two impressions from DV clients that were flagged as bot traffic. In both cases, DV confirmed—off the record with The WSJ—that the impressions were correctly identified and removed from our billable impression count.
CLAIM #3:
“It’s like, can you tell the difference between a person and a person-shaped sock puppet that is holding up a sign saying, ‘I am a sock puppet’?” said Laura Edelson, a computer science professor at Northeastern University and former Justice Department technologist who reviewed the Adalytics report at the Journal’s request.
CORRECTION:
Self-declared bots are sometimes served ads, but those impressions are not billable. DV detects and removes this traffic from billable impressions post-bid.
It’s also important to note that many of the impressions that informed this report appear to involve URLScan. URLScan is not typically a self-declared bot. In fact, its default setting is to not declare itself.
According to URLScan’s own CEO, Johannes Gilger, whom DV contacted via a representative, “Our scanner does not announce itself, that would defeat the purpose of the tool. Instead, it will look like a regular web-browser.”
In the context of ad fraud and invalid traffic detection, self-declared bots identify themselves through clear signals, allowing systems to easily detect and filter them as GIVT.
URLScan, however, is a web analysis tool that uses headless browser technology and does not declare itself as a bot. It is designed to mimic real user behavior for website scanning and security testing purposes. Because it doesn’t self-identify—and often runs scripts that resemble legitimate user activity, such as scrolling and interaction—it can evade basic bot detection filters and is often classified as SIVT when detected by fraud prevention systems.
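The distinction between a self-declared bot and a scanner that masquerades as a browser can be shown with a minimal user-agent check. This is a simplified sketch: the token list is a stand-in for the full IAB/ABC International Spiders & Bots List, and real detection systems combine many more signals than the user-agent string alone.

```python
# Illustrative contrast: a self-declared bot announces itself in the
# user-agent; a headless scanner mimicking a browser does not, and so
# cannot be classified as GIVT from the user-agent string alone.

DECLARED_BOT_TOKENS = ("googlebot", "bingbot", "archive.org_bot", "crawler")

def classify_user_agent(ua: str) -> str:
    ua_lower = ua.lower()
    if any(token in ua_lower for token in DECLARED_BOT_TOKENS):
        return "GIVT (self-declared)"
    # A tool that looks like a regular browser is indistinguishable from a
    # human visitor here; catching it requires behavioral and IP signals,
    # and when caught it is typically classified as SIVT, not GIVT.
    return "undetermined (needs additional signals)"

print(classify_user_agent("Mozilla/5.0 (compatible; Googlebot/2.1)"))
# GIVT (self-declared)
print(classify_user_agent("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"))
# undetermined (needs additional signals)
```

This is why a tool like URLScan, which by its CEO’s own description “will look like a regular web-browser,” falls outside what user-agent-based GIVT filtering can be expected to catch.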
Still, DV detects and avoids URLScan in most instances pre-bid, based on a combination of signals—including IP address and user-agent data previously associated with SIVT. When it is not detected pre-bid, we detect URLScan post-bid. Either way, nothing was “missed.” We accurately identified every impression The WSJ shared with us prior to publication, and those impressions were not included in advertisers’ billable counts.
CLAIM #4:
“The biggest giveaways that a web visitor might be a bot are found in the internet-protocol address and what programmers call the “user-agent”—credentials that appear when a website is visited, including the user’s browser and device type, and whether the user announces itself as a bot.
Yet some of the leading software that is supposed to help companies filter out bots before buying ads have major blind spots. Services from DoubleVerify and Integral Ad Science that try to catch bots before an advertiser bids don’t receive those credentials from some of the largest ad-buying platforms, according to people familiar with the matter and a Wall Street Journal analysis of the platforms’ applied programming interfaces for developers.
That means they don’t always have the information needed to determine whether the user is a self-declared bot or if it is on industry-standard lists of known bots…
Executives at multiple ad agencies, publishers and brands told the Journal they were under the impression that DoubleVerify and Integral Ad Science could access the user-agent and IP addresses and relied on that information to filter out bots before bidding.”
CORRECTION:
This is inaccurate regarding DV. Not all DSPs enable user-agent lookups pre-bid, which DV has shared publicly. (DV strongly encourages all DSPs to support this functionality).
Regardless, these bots are still detected post-bid. DV removes those impressions from billable counts. Lack of pre-bid filtration does not mean missed detection.
Contrary to The WSJ’s claim, DV does have access to the relevant data post-bid. In fact, DV combines IP and user-agent signals to detect and manage countless SIVT falsification schemes, as well as to accurately detect more basic GIVT signals. As an example, see screenshots documenting IP and user-agent detection, focusing on GIVT IPs and GIVT User Agents. Clearly, DV has access to this data, and the claim is false.
Note that DV leverages hundreds of signals and data points to detect IVT, across multiple layers of protection. Detection post-bid enables advertisers to not pay for invalid traffic, even if pre-bid filtering isn’t available. (Additionally, these signals are used together, so user-agent bots can be filtered pre-bid if they are also associated with an evasive or problematic IP address.)
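The point that signals work together pre-bid can be illustrated with a hypothetical decision sketch. Everything here is an assumption made for the example: the IP addresses are documentation-reserved examples (RFC 5737), and the two-signal logic is a drastic simplification of a real multi-signal system.

```python
# Hypothetical sketch of combining signals: even when a DSP withholds the
# user-agent pre-bid, a bid can still be avoided if the IP is already
# associated with invalid traffic.

KNOWN_IVT_IPS = {"203.0.113.7", "198.51.100.42"}  # example addresses (RFC 5737)

def prebid_decision(ip, user_agent):
    if ip in KNOWN_IVT_IPS:
        return "avoid"   # the IP signal alone is enough pre-bid
    if user_agent and "bot" in user_agent.lower():
        return "avoid"   # declared bot, when the DSP passes the user-agent
    return "bid"         # unresolved cases fall through to post-bid review

# A DSP that withholds the user-agent can still avoid a known-bad IP:
print(prebid_decision("203.0.113.7", None))   # avoid
print(prebid_decision("192.0.2.10", None))    # bid -> checked again post-bid
```

The design point is that pre-bid and post-bid layers are complementary: the “bid” branch is not a miss, because those impressions are re-evaluated post-bid before billing.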
It’s unfortunate that The WSJ parroted Adalytics’ false claims to advertisers, presenting them with inaccurate or misleading information. To be clear, an ad shown to GIVT traffic does not mean an advertiser paid for that traffic. This is why post-bid fraud detection exists.
CLAIM #5:
“Kenvue, Tylenol’s parent company, said it works with an outside ad verification vendor but didn’t respond when asked to identify the company. IBM declined to comment. The Guardian said it takes steps to avoid having advertisements served to bots, and would make it up to brands if a significant portion of traffic for a campaign was misdirected.”
CORRECTION:
In every example shared with us prior to publication, including The Guardian example featured in their article, DV had correctly identified the bot traffic. When that occurs, the impressions are removed from billable counts reported to DV’s advertiser customers, as per industry standards.
We shared this with The WSJ off the record; however, they have chosen to imply that we “missed it.”
CLAIM #6:
“One publisher that pays DoubleVerify to identify bots on its websites tested how well those efforts were going in November and came away disappointed. The publisher cross-referenced its reports from DoubleVerify with data on bot visits from security-scanning company URLScan.
DoubleVerify missed 21% of the documented bot visits and allowed ads to be served to them, according to the publisher’s analysis and data reviewed by the Journal. In some cases, DoubleVerify’s software identified a bot but still let a brand buy an ad for that audience, the analysis showed.
DoubleVerify declined to comment on the publisher’s study.”
CORRECTION:
To be clear, DV did not receive any substantive detail on this study outside of an email from The WSJ received on February 7. In that email, these are the only details provided regarding the publisher’s methodology: “The publisher wrote code to generate copies of the bots used by web-analysis company URLScan” and considered these bots “easy to catch because they use what the industry calls a ‘headless browser.’” Additionally, “The publisher wrote code to have these bots visit its own websites thousands of times” to assess how DV classified them.
Given its weight in the story, we ask that The WSJ release the study for review by bot and fraud experts. In the meantime, note the facts:
- The publisher did not declare this traffic as a test. GIVT detection depends on bots self-declaring as non-human (per industry standards). By omitting this, the test was designed to evade detection, making it a poor proxy for GIVT and more aligned with SIVT.
- Relatedly, URLScan uses headless browser technology to mimic human behavior—clicks, scrolling—making it more evasive. The way this test was constructed pushed the traffic into SIVT territory, not GIVT.
- Even under these manipulated conditions—and without any visibility into critical factors like which media-buying platform was used, which impacts pre-bid avoidance—DV still detected and classified the bot traffic 80% of the time, per The WSJ. That result demonstrates the strength of DV’s fraud detection systems. (For the remaining percentage, The WSJ does not clarify whether DV identified the bots post-bid, which would mean they were removed from our clients’ billable impressions.)
We also consulted independent ad fraud experts to validate this assessment:
- Shailin Dhar, for example, noted that “URLScan is not using a declared bot. It solves captchas,” and is wrong to “suggest that headless browsers are not designed to emulate human appearance.”
- Meanwhile, Antoine Vastel says, “We should consider it a very good result to see 80% coverage when avoiding evasive IVT, as described by The WSJ. Otherwise, we risk hurting scale due to over-avoidance of genuine traffic” for the publisher.
Finally, a DV representative reached out to URLScan and posed the following:
“Do you see [URLScan] as something that should be self-declared when interacting with websites or ad systems? Is URLScan ‘easy-to-catch’ or is it more complex?”
URLScan’s CEO, Johannes Gilger, responded, “Our scanner does not announce itself, that would defeat the purpose of the tool. Instead it will look like a regular web-browser.”
Note that we have no visibility into which DSP was used for this test, what settings were applied, or other key variables typically disclosed in a controlled test. While independent validation of our technology is valuable, the intentionally obscured and manipulated methodology used here is fundamentally different from standard GIVT behavior and raises concerns.
Additionally, Dhar asks, “Did this publisher also track which advertisers it sold invalid ads to? Will they give that money back?”
DV is working to identify the publisher to ensure advertisers and their DSPs are aware of these undeclared tests and can seek reimbursement if necessary. However, we have no visibility into how the publisher handled billable impressions in this case. Typically, DV removes GIVT bot traffic from advertiser impressions—but since this was a publisher client, it was the publisher’s responsibility to manage reconciliation.
CLAIM #7:
“Fighting fraud by bad actors—which don’t flag themselves as bots—is even harder. Some scammers create bots to visit news sites, building up a browsing history that makes them look human to ad auctions. Then the bots visit websites the scammers own themselves, where advertisers bid on them because they look like promising shoppers. The scammers pocket the ad revenue.
Brands are supposed to be reimbursed if, after the fact, a verification company finds that a digital ad campaign had a significant audience of bots. Major ad-buying platforms that facilitate digital-ad auctions say they have refund processes for this situation. Yet ad buyers told the Journal they rarely seek refunds, since verification vendors report such low bot rates.”
CORRECTION:
When pre-bid avoidance is enabled, GIVT from known (self-declared) bots appears on less than 0.03% of programmatic impressions. Even then, DV identifies these bots post-bid in virtually all cases. This represents a 75% to 98% reduction—depending on client settings—compared to unfiltered programmatic impressions, which proves that GIVT pre-bid avoidance is highly effective, especially at reducing post-bid reconciliation.
For GIVT traffic managed post-bid, DV removes invalid impressions from billable totals before they reach the advertiser. The advertiser then reconciles this list of impressions with their DSP or media-buying platform of choice. Note that GIVT disclosure reporting is available to any advertiser through our platform, DV Pinnacle.
Unfortunately, The WSJ, through Adalytics, presented these advertisers with inaccurate information. We are happy to walk through the Adalytics report’s flaws directly with clients and clarify how DV’s pre-bid and post-bid systems work.