Ads and Stats
Let’s suppose we have two ads running on the same ad network.
- Ad 1 has been shown 59,000 times, and has 37 clicks.
- Ad 2 has been shown 51,000 times, and has 39 clicks.
So it looks like Ad #2 is more effective: its click-through rate is 0.077% vs. 0.062%, and it's gotten two more clicks even though it's had 8,000 fewer impressions. But how sure are we that this isn't just a random fluctuation, with both ads actually equally effective?
What I think: If we scale both ads to 50,000 impressions, we have about 31 vs. 38 clicks. The standard deviation of a Poisson distribution with expected value λ is √λ. So with λ of roughly 36, random fluctuations might easily account for a difference of √36 = 6 clicks. We might even see a difference of 12 clicks: for comparison, I think the Giants winning the 2011 World Series was about as likely as seeing a 12-click difference if the ads were actually equally effective.
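To sanity-check that intuition, here's a quick simulation (my sketch, not part of the post: the λ of 34.5 comes from the pooled click rate at 50,000 impressions, and the seed and trial count are arbitrary). It draws pairs of Poisson counts for two equally effective ads and measures how often gaps of 6 and 12 clicks show up by pure chance:

```python
import math
import random

def poisson(lam: float, rng: random.Random) -> int:
    """Draw a Poisson(lam) variate with Knuth's algorithm: multiply
    uniforms together until the product drops below e^-lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
lam = 34.5          # pooled CTR of ~0.069% times 50,000 impressions
trials = 20_000

gap6 = gap12 = 0
for _ in range(trials):
    gap = abs(poisson(lam, rng) - poisson(lam, rng))
    gap6 += gap >= 6
    gap12 += gap >= 12

print(f"chance of a gap of 6 or more:  {gap6 / trials:.2f}")
print(f"chance of a gap of 12 or more: {gap12 / trials:.2f}")
```

On these assumptions a 6-click gap between equally effective ads is close to a coin flip, and even a 12-click gap is far from rare — the gap between two independent Poisson counts has standard deviation √(2λ) ≈ 8.3, not just √λ.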
Rule of thumb: Show two ads until each has 25 clicks, or conversions, or whatever metric you’re using. If one ad is clearly superior, it will have 10 more clicks than its rival. Or go to 100 clicks each; if one ad is clearly better, it will beat the other by 20 clicks.
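One way to stress-test that rule of thumb (under my reading of it: stop once the two ads have 50 clicks between them, and call a winner if the gap is 10 or more) is to simulate two *equally effective* ads and see how often the rule fires anyway. With equal click-through rates, each click lands on either ad with probability 1/2:

```python
import random

rng = random.Random(1)

def false_positive_rate(total_clicks: int, gap: int,
                        trials: int = 20_000) -> float:
    """How often one of two equally effective ads leads by `gap` or
    more clicks once `total_clicks` clicks have accumulated."""
    hits = 0
    for _ in range(trials):
        # Each of the total_clicks clicks is a fair coin flip
        # between the two ads.
        a = sum(rng.random() < 0.5 for _ in range(total_clicks))
        b = total_clicks - a
        hits += abs(a - b) >= gap
    return hits / trials

fp50 = false_positive_rate(50, 10)    # the 25-clicks-each version
fp100 = false_positive_rate(100, 20)  # the 100-clicks version
print(f"false alarms at 50 clicks, gap 10:  {fp50:.2f}")
print(f"false alarms at 100 clicks, gap 20: {fp100:.2f}")
```

In this sketch the 50-click rule flags equally effective ads roughly a fifth of the time, while the 100-click rule is much stricter; widening the required gap toward 2√(total clicks) would give roughly similar protection at the smaller sample.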
Am I right? The Poisson distribution is a long way back in my rear-view mirror. I’m surprised this isn’t discussed all the time, since (a) you’d think everyone who buys ads from Google would need to know this, and (b) my impression is that the Poisson distribution is even farther back in the rear-view mirror of the typical advertising agency.
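For what it’s worth, the original numbers can also be tested directly, without scaling to 50,000 impressions (again a sketch; the conditioning trick and trial count are my choices, not the post’s). If both ads were equally effective, each of the 76 clicks would land on Ad 2 with probability 51,000/110,000, so we can ask how often Ad 2 ends up at least as far from its expected share as the observed 39 clicks:

```python
import random

rng = random.Random(7)

clicks1, imps1 = 37, 59_000   # Ad 1, from the post
clicks2, imps2 = 39, 51_000   # Ad 2, from the post

total = clicks1 + clicks2             # 76 clicks in all
p2 = imps2 / (imps1 + imps2)          # Ad 2's share of impressions
expected = total * p2                 # ~35.2 clicks expected for Ad 2
observed_gap = abs(clicks2 - expected)

trials = 20_000
extreme = 0
for _ in range(trials):
    # Under the null, scatter all 76 clicks across the two ads in
    # proportion to impressions, and see how extreme Ad 2's count is.
    x = sum(rng.random() < p2 for _ in range(total))
    extreme += abs(x - expected) >= observed_gap

print(f"two-sided p-value ≈ {extreme / trials:.2f}")
```

A p-value this large says the data are entirely consistent with two equally effective ads — the same conclusion the back-of-the-envelope Poisson argument reaches.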
Let me know if you’ve got a better way. Email me.