
November 5th, 2010

Applying Wilson Score to Ad Variations in Google AdWords

by Amrinder Arora

In Google AdWords (and other ads programs), you can create multiple ad variations, and then Google, in its infinite wisdom, will show the “better performing” one.  You can also use this to conduct simple A/B experiments and compare different versions of landing pages, etc.  However, every time you “tweak” an ad or a landing page, it is considered a new version, so its statistics get reset.  If you create a new ad or landing page, of course, you also start with a blank slate of statistics.  The question then becomes: how can you use the low volume of impressions and clicks for various ad variations/landing pages to estimate their performance?

Let us take an example:

Ad 1: 1900 impressions, 8 clicks (0.42% CTR)

Ad 2: 285 impressions, 1 click (0.35% CTR)

Obviously, at this point, Ad 1 seems to be dominating Ad 2, and chances are Google will continue to show that ad more.  However, comparing CTRs at low volumes is problematic due to the lack of statistical significance in the data involved (for example, one extra click on Ad 2 would propel its CTR to 2/285 ≈ 0.70%, higher than that of Ad 1).

I don’t know how Google handles it, but one reasonable way to handle this would be by using Wilson score, which in this context can be defined as:

CTR’ = (CTR · N + B · z^2) / (N + z^2)

where:

  • CTR’ is the Wilson score based CTR,
  • B is the base CTR that we would assume for an ad before showing it even once (a kind of default rate),
  • z^2 is a measure of how confident we want to be (for 90% confidence, z^2 is approximately 2.6), and
  • N is the number of impressions of the ad.
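
As a quick illustration, here is a minimal Python sketch of this formula (the function name and the default value of z^2 are my own choices for illustration, not anything AdWords exposes):

```python
def wilson_ctr(clicks, impressions, base_ctr, z_squared=2.6):
    """Wilson-score-adjusted CTR.

    Pulls the raw CTR toward base_ctr (B); the pull shrinks as
    impressions accumulate, so CTR' converges to the raw CTR.
    """
    return (clicks + base_ctr * z_squared) / (impressions + z_squared)
```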

Finding the base CTR rate

This aspect is a bit more interesting.  If we consider all online ads, then perhaps B = 0.1 may be a reasonable number (in any case, Google knows this number).  However, the average click-through rate is highly dependent on the category of ads.  So it is much more practical to simply take the sum of clicks across your own ads and divide it by the sum of impressions. (Although, do note a clear implication of doing this: the CTR’ of your ad with the highest CTR will by definition be pushed down, and the CTR’ of your ad with the lowest CTR will by definition be pushed up.)
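
In code, this pooled base rate is just total clicks over total impressions (again a hypothetical helper, named by me):

```python
def pooled_base_ctr(ads):
    """Pooled CTR across (impressions, clicks) pairs, used as the base rate B."""
    total_impressions = sum(imp for imp, _ in ads)
    total_clicks = sum(clk for _, clk in ads)
    return total_clicks / total_impressions
```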

Some Results

Using B = 0.1 (10% CTR being the norm) and z^2 = 2.6 in the Wilson score based CTR formula above, we obtain:

Ad 1: 1900 impressions, 8 clicks (0.42% CTR),  CTR’ = 0.43%

Ad 2: 285 impressions, 1 click (0.35% CTR), CTR’ = 0.44%

So, in this particular case, it does appear that Ad 2 is actually performing slightly better than Ad 1, or at the very least that there is no statistically significant difference between the performance of Ad 1 and Ad 2.

However, if we instead use the average CTR for these two ads as the value of B, then the difference between CTR and CTR’ vanishes almost entirely.

Using B = 0.004119 (total clicks divided by total impressions for these two ads) and z^2 = 2.6 in the same formula, we obtain:

Ad 1: 1900 impressions, 8 clicks (0.42% CTR),  CTR’ = 0.42%

Ad 2: 285 impressions, 1 click (0.35% CTR), CTR’ = 0.35%
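
Plugging the post’s numbers into the hypothetical helpers sketched above reproduces these figures:

```python
ads = [(1900, 8), (285, 1)]  # (impressions, clicks)

# Scenario 1: B = 0.1 (assumed 10% norm)
for imp, clk in ads:
    print(round(100 * wilson_ctr(clk, imp, base_ctr=0.1), 2))  # 0.43, then 0.44

# Scenario 2: B = pooled rate across these two ads (9 / 2185 ≈ 0.004119)
b = pooled_base_ctr(ads)
for imp, clk in ads:
    print(round(100 * wilson_ctr(clk, imp, base_ctr=b), 2))    # 0.42, then 0.35
```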

Further Observations

  1. If the value used for B is the average click-through rate, then the difference between CTR and CTR’ for most ads is very minor.
  2. Also, it is easy to see that after a few thousand impressions, CTR’ and CTR are virtually the same (as they should be); a quick numerical check follows below.  So, this calculation is of interest only at the beginning of an ad launch, or after an ad has been modified.
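
Holding the raw CTR near 0.4% while scaling up the volume (with B = 0.1 and z^2 = 2.6, using the illustrative wilson_ctr helper) shows the adjustment fading away:

```python
for impressions, clicks in [(250, 1), (1900, 8), (19000, 80), (190000, 800)]:
    raw = clicks / impressions
    adjusted = wilson_ctr(clicks, impressions, base_ctr=0.1)
    print(f"{impressions:>6} impressions: CTR = {100 * raw:.3f}%, CTR' = {100 * adjusted:.3f}%")
```

At 250 impressions the adjusted CTR is pulled noticeably toward B, but by a few thousand impressions CTR’ is essentially the raw CTR.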

So, after all this, we reach the perhaps anti-climactic conclusion that applying the Wilson score to ad variations in Google AdWords may be unnecessary, and that CTR may be as good a measure of ad performance as any.  The null hypothesis holds.
