An increasingly popular question in attribution management and advertising measurement is what effect ads that were viewed but not clicked on have. These ads are known in the lingo as ‘view-thrus’ or ‘view-throughs.’ The related question people usually have is what percentage of credit to give view-thrus for conversions that came via multiple touchpoints (e.g. the customer saw and/or clicked several ads before buying).
I’ve come up with an idea for an experiment to measure the effect of such “view-thru” ads on conversion, as well as on acquisition costs. (An acquisition cost is the cost-per-action of acquiring the lead/customer.)
1. First, get a control group and an experimental group.
2. Second, instruct the groups to read and browse a few websites you control, and provide you with feedback on some [non-ad-related] issues with the sites.
The control group will view ads unrelated to what they’re about to search for. The experimental group will view ads by companies also advertising on the terms they’re about to look up.
Why are they browsing around and giving feedback?
The point of asking the groups to read and browse a few websites is twofold. One purpose – for the experimental group alone – is for them to ‘see’ the ads – defined here as causing an impression to be registered by loading the ad along with the rest of the page.
The second purpose – the request for feedback – is to mislead all the users about what the purpose of the experiment is. You don’t want to accidentally induce either group to look more or less closely at the ads or otherwise behave strangely; they should just browse around normally. Scientists have a name for this effect, but I forget what it’s called.
At this point, the only difference between the groups should be the display ads they’ve viewed. So any differences between the groups subsequently should be attributable to the ads they’ve seen (even and especially if they haven’t clicked them).
Next, have the groups search for a few specific items and click whatever suits their fancy. Some of these searches are decoys, again to mislead the test subjects about the purpose of the experiment. The rest are for products carried by the companies whose display ads they just viewed.
Results and Interpretation
Q1: Is there a difference in conversion between the groups? That difference is attributable to the view-thrus. For attribution-management purposes, you can credit the display ads with the difference in percentage.
So if you convert the control group at 10% and the experimental group at 15%, then display gets 1/3 or 33% credit for the experimental group’s conversions.
(Not ½ or 50%, because the denominator is 15, not 10: the 5-point lift is 1/3 of the experimental group’s 15% conversion rate (CR). Saying display deserves 50% credit would mean display drove 7.5 points of conversion and the other marketing drove 7.5 points. That’s false, since we know the other marketing, without display, converted the control group at 10%.)
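As a sanity check, the credit arithmetic above can be sketched in a few lines of Python (the 10% and 15% figures are the made-up example rates from above):

```python
# Example rates from the text: control converts at 10%,
# the display-exposed experimental group at 15%.
control_cr = 0.10
experimental_cr = 0.15

# The lift attributable to view-thrus is the difference in conversion rates.
lift = experimental_cr - control_cr  # 0.05, i.e. 5 points

# Credit display with lift over the EXPERIMENTAL rate (denominator 15, not 10).
display_credit = lift / experimental_cr

print(f"Display credit: {display_credit:.1%}")  # Display credit: 33.3%
```

Dividing by the control rate instead would give the mistaken 50% figure the parenthetical warns against.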
Note: This higher conversion rate again lowers your cost-per-action, reducing your costs for PPC.
Q2: Is there a difference in click-through rate (CTR) between the groups? Supposing view-thrus raise CTR, the higher CTR should lift your Quality Score (QS) and lower your cost per click (CPC), effectively earning a lower cost-per-action.
So as a fun side-effect of such an experiment, you might find yourself with cost savings in your search budget from both the higher conversion rate and the lower CPC – though those savings came at the cost of a display ad campaign. (You might also find the opposite: display ads lowered your CTR and CR, in which case you need to ask whether the branding is worth the lower immediate cash flow.)
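To see how the two effects compound, here is a rough sketch using cost-per-action as CPC divided by conversion rate. The dollar figures are illustrative assumptions, not numbers from the experiment:

```python
# Hypothetical illustration: CPA = cost per click / conversion rate.
def cpa(cpc, conversion_rate):
    """Cost per action: what you pay per click, scaled by how many clicks convert."""
    return cpc / conversion_rate

# Assumed numbers: display exposure nudges CPC down (via QS) and CR up.
baseline = cpa(cpc=2.00, conversion_rate=0.10)      # $20.00 per conversion
with_display = cpa(cpc=1.80, conversion_rate=0.15)  # $12.00 per conversion

print(f"CPA: ${baseline:.2f} -> ${with_display:.2f}, "
      f"saving ${baseline - with_display:.2f} per conversion")
```

Even modest movements in CPC and CR multiply together, which is why the search-budget savings can be larger than either effect suggests on its own.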
Next Steps After Attributing Credit
What should you do, then?
You need to see whether the PPC savings in the experimental group equaled or exceeded the display spend. If you saved more on PPC than you spent on display, you should obviously continue with display. If you saved an equal amount, you should probably still continue with display anyway, because you also get a branding benefit.
If you saved less than you spent on display, it’s a question of degree. Are you willing to incur the additional expense for branding benefits? Is the branding a shield for shady SEO tactics? How much higher of a conversion rate did you get?
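The decision rule above can be sketched as a small function (the dollar amounts in the call are made-up examples):

```python
# Hypothetical sketch of the savings-vs-spend decision described above.
def display_verdict(ppc_savings, display_spend):
    """Compare PPC savings against display spend and return a rough recommendation."""
    if ppc_savings > display_spend:
        return "continue display: it pays for itself outright"
    if ppc_savings == display_spend:  # exact break-even is rare in practice
        return "continue display: break-even on cash, plus branding upside"
    return "judgment call: is the branding worth paying the difference?"

print(display_verdict(ppc_savings=5000, display_spend=4000))
```

The third branch is where the questions above come in: only you can price the branding benefit.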