In Part 1 of the AdWords Experiments Edition we discussed how to define and set up a Google AdWords Experiment. In this part we'll cover how to commit to a decision based on the data you've collected.
Making the Call
At this point you’ve set up your experiment and it should have run for long enough to collect sufficient data. Now you want to analyse the information and make a call.
To begin, go to the appropriate view in your campaign (keyword, placement, ad group level, etc.) and click segment by experiment (see the first screenshot under Bid Set Up in Part 1 if you've forgotten how to find this). If you'd like to see the complete picture, without the granular bid-by-bid information, you can view the experiment details at the campaign level regardless of the level at which you made your bid changes.
To recap the test in the screenshot below: we wanted to see what effect a bid boost would have on all metrics, but specifically on CTR, CPA and sales (bid changes in this case were made at the keyword level).
What you need to look at is how the experiment row compares to the control row across the metric columns you defined as essential when setting up your experiment. In the case of this test, the primary columns I'm looking at are CTR, conversions, CPA and conversion rate, with average CPC and average position as secondary deciding factors.
The data from this test is rather limited given the nature of this account, but you can see that CTR, average position, conversions, conversion rate and CPA have all improved while average CPC has increased. In this case the CPC increase was expected, since the test was a bid boost, and it's negligible as long as ROI stays the same or improves. This test has therefore been a success: we managed to improve all of the metrics mentioned above.
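If you prefer to work the numbers out yourself, the arithmetic behind these columns is straightforward. Here's a minimal Python sketch with made-up figures (purely illustrative, not the data from the screenshot) that computes the key metrics for a control and an experiment arm:

```python
# Minimal sketch: computing the key experiment metrics from raw
# control/experiment totals. The numbers below are made up purely
# for illustration; plug in the values from your own report.

def metrics(impressions, clicks, conversions, cost):
    return {
        "CTR": clicks / impressions,         # click-through rate
        "Avg CPC": cost / clicks,            # average cost per click
        "Conv. rate": conversions / clicks,  # conversion rate
        "CPA": cost / conversions,           # cost per acquisition
    }

control = metrics(impressions=20_000, clicks=400, conversions=20, cost=300.0)
experiment = metrics(impressions=20_000, clicks=520, conversions=29, cost=420.0)

for name in control:
    change = (experiment[name] - control[name]) / control[name] * 100
    print(f"{name:>10}: control {control[name]:.4f} | "
          f"experiment {experiment[name]:.4f} | {change:+.1f}%")
```

Notice how, as in our test, a higher average CPC can coexist with a lower CPA when the conversion rate improves enough to offset it.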
When looking at your own data, be sure to watch for the blue arrows beside your metrics: they indicate how significant a change is and in which direction it has gone. What you'd ideally like to see is three blue arrows (up or down) beside your key metrics, indicating that you've got enough data and that the differences are reliable enough to base a decision on. The fewer arrows you have, the less reliable the data; if you just have a pair of grey arrows (one pointing up and one down), the data is not statistically reliable at all. You can see both the three blue arrows and the double grey arrows in the screenshot above. Be sure to compare the metrics that are pertinent to your test and don't be scared off by less important changes, such as the higher CPC in my case. Finally, collect enough data before making the call (preferably three blue arrows up or down), especially on drastic changes (the screenshot above is for illustrative purposes; we recommend a large enough data sample before making a definitive call on an experiment).
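AdWords runs the significance testing for you and surfaces the result as those arrows, but if you want to sanity-check a metric like CTR yourself, a standard two-proportion z-test is one way to do it. A rough sketch (a generic statistical test, not necessarily the exact method AdWords uses, reusing the illustrative numbers from above):

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided z-test for a difference in CTR between two arms."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # Pooled proportion under the null hypothesis of equal CTRs.
    p = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p * (1 - p) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(clicks_a=400, imps_a=20_000,
                             clicks_b=520, imps_b=20_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the CTR difference is unlikely to be noise; the smaller it gets, the stronger the evidence.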
If you have statistically reliable data you can move on to the decision process; if you don't have enough data yet, you can let the experiment continue running to gather more for future analysis.
Once you've decided whether the experiment is a success or a failure, you can implement or delete the experimental changes in the settings tab where you originally set up the test.
Experiment goals and data analysis will vary depending on the needs of any specific account, therefore this experiment should not be taken as the one and only way to run an experiment – so get creative!
Once again, if you’ve had your own successes or failures with this feature feel free to share them, we’d love to hear about it!