How to Do Ad Copy Testing

Ad copy testing is easy enough that anyone can do it – but doing it right matters. So we put together this quick guide you can use to start conducting ad copy testing at your company.

First, let’s review why ad copy testing is so important:

Why Your Company Should Be Doing Ad Copy Testing

Ad copy, whether in search ads, Facebook ads, or any other digital advertisements, is the language you use to sell people on what you are offering. It is your opportunity to grab someone’s attention and convince them that they should click on your ad.

If your ad copy is not good – either because it’s not convincing or doesn’t give people a reason to click – your digital advertising campaigns are not going to be effective. What ad copy testing allows you to do is find out what language convinces the highest number of people to click through to your website. The more people you can get to click on your ads, the greater the opportunity to sell.

Done correctly, ad copy testing is simply the best way to fine-tune your messaging and improve your click-through rate (the percentage of people who click on your ads after seeing them).

Setting Up Your Ad Copy Tests

You may be testing with Google Ads, Facebook, or any number of other digital ad platforms, so the actual setup will vary. But this guide is meant to be generally applicable to all platforms.

The first thing you need to do is a full audit of your current ad copy and performance. Using either the platform or an external spreadsheet, identify all existing ads and record the headline, the description, and the call to action for each. Next to each one, list the impressions, clicks, click-through rate, and conversions for the last 90 days.

You will use this same spreadsheet to track your testing over time.

What to Test

Once you have your existing ad copy performance, you need to identify the ads most in need of help. One easy way to start is to sort your ads by click-through rate, lowest first. The ads with the lowest click-through rates are the ones that stand to benefit most from testing.

Another common starting point is to sort them by conversions, highest first. The ads that currently generate the most conversions might add the most value if you can improve their click-through rates.
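To make this concrete, here is a minimal sketch in Python of the audit spreadsheet described above, with both sort orders. The headlines and numbers are made up for illustration – your real audit will pull these from your ad platform or an export.

    # One row per ad, as in the audit spreadsheet described above.
    # Headlines and figures are made up for illustration.
    ads = [
        {"headline": "Free Shipping on All Orders", "impressions": 12000, "clicks": 180, "conversions": 9},
        {"headline": "Save 20% This Week Only", "impressions": 9500, "clicks": 310, "conversions": 22},
        {"headline": "Shop Our New Arrivals", "impressions": 15000, "clicks": 150, "conversions": 6},
    ]

    # Click-through rate = clicks / impressions
    for ad in ads:
        ad["ctr"] = ad["clicks"] / ad["impressions"]

    # Method 1: lowest click-through rate first (most in need of testing)
    by_ctr = sorted(ads, key=lambda ad: ad["ctr"])

    # Method 2: highest conversions first (most value if CTR improves)
    by_conversions = sorted(ads, key=lambda ad: ad["conversions"], reverse=True)

    for ad in by_ctr:
        print(f"{ad['headline']}: {ad['ctr']:.2%} CTR, {ad['conversions']} conversions")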

Whichever method you choose, the next step is writing new ad copy. There is no wrong way to do this, but the key is to cast a wide net. Even if there are copywriters on your team whose job it is to write your ad copy, it is a good idea to get several different people to write suggestions, because different perspectives usually lead to new insights.

A good rule of thumb – one that both Google and Facebook recommend – is having at least three different ads running in each campaign at any given time. So you should aim to test at least two new versions of your ad copy against the existing one.

Try different headlines. Different ways of phrasing the same idea. Test pricing and discounts. Test a new call to action. A different benefit that isn’t featured in the current ad. Whatever you think will get more people to click.

How to Measure Ad Copy Test Results

Measuring the results of your ad copy tests is just as important as choosing what to test. This is how you will determine which ad copy works best and how to proceed.

Using the spreadsheet you created at the start, list each test out individually. Write out each version of the ad being tested, along with the date the test began. Update it on a regular basis – weekly works well – with impressions, clicks, click-through rates, and conversions. (Ultimately, ad copy testing is about getting higher click-through rates, but we track conversions where possible because we don’t want to do anything that will negatively impact that critical metric.)

Give your tests enough time to reach statistical significance. How long that takes will vary depending on how many impressions and clicks your ads get. Here is a free online tool to measure statistical significance. Generally, once a result reaches 95% confidence or above, you have a completed test.
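If you would rather check significance yourself than rely on an online tool, one standard approach for comparing two click-through rates is a two-proportion z-test. Here is a minimal sketch in Python; the numbers in the example are made up.

    from math import sqrt, erf

    def test_confidence(clicks_a, impressions_a, clicks_b, impressions_b):
        """Two-proportion z-test: returns the two-sided confidence
        (e.g. 0.95) that two click-through rates really differ."""
        rate_a = clicks_a / impressions_a
        rate_b = clicks_b / impressions_b
        # Pooled rate under the assumption of no real difference
        pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
        std_err = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
        z = abs(rate_a - rate_b) / std_err
        # Convert the z-score to a confidence level via the normal distribution
        return erf(z / sqrt(2))

    # Made-up example: 50 vs. 80 clicks on 10,000 impressions each
    print(f"{test_confidence(50, 10000, 80, 10000):.1%}")  # ~99% – a completed test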

Again, the key is measuring which version of the ad achieves the highest click-through rate. Most platforms will allow you to spread impressions evenly across all ads. This is the preferred method for ad copy testing, because it ensures that each version gets a sufficient number of impressions during the test.

Track and Refine

Each time a test reaches statistical significance, it is time to take the next step: pause the ads that did not win, so the winning ad gets the widest possible audience going forward.

You may move right from one test to another, setting up two new ads to compete against the winner of the recently completed test. Or you may let the winner run for a while, during which time you move on to test other campaigns.

There is no rule about how many ad copy tests you should have running at one time. Essentially, do as many as you can actively manage. The more you do, the more likely you are to find opportunities to improve your results.

By tracking performance over time and refining your ad copy with each round of testing, you are likely to discover new versions of your ads that bring a lot more prospective customers to your website.

Is There Such a Thing as Too Much Testing?

If you are a frequent reader of this blog, then you know that I am a big proponent of testing. I use the phrase ‘Always Be Testing’ more than I probably should.

Recently, a reader sent in a question that caught my eye. She asked, “Is there such a thing as too much testing?”

Though my immediate reaction was to say “no”, I had to admit that would have been wrong. Of course there is such a thing as too much testing. Too much of anything is bad, right?

But instead of leaving it there, I decided to dive deeper into the question. If there is such a thing as too much testing, how much is too much? How do we know if we’re guilty of it?

I started with the objective – why we test. We test things in order to improve performance. Therefore we prioritize tests that are A) simple, meaning they require limited resources, and B) high in potential impact. From there, we look at tests that are not as simple but still have big impact potential.

For most companies, that is a lot of testing. There are probably enough tests in those two categories that you will never run out.

But let’s imagine you do. Next you start to look at tests that are simple but don’t have big impact potential. That is the point at which you should ask yourself whether a test is worth running. If the potential lift in performance is smaller than what some other activity would deliver, the test is not worth it.

So when you start to prioritize tests over other activities that have more potential to help the company, that is too much testing. As with anything else, you have to manage tests alongside all other uses of your time and your team’s time, and prioritize those things that have the greatest potential.
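One way to keep yourself honest about that prioritization is to score each candidate test on effort and potential impact, and rank accordingly. A rough sketch in Python – the test names and 1-to-5 scores here are made up for illustration:

    # Rank candidate tests: highest impact first, then lowest effort.
    candidates = [
        {"name": "New email subject lines", "effort": 1, "impact": 4},
        {"name": "Landing page redesign", "effort": 5, "impact": 4},
        {"name": "Button color change", "effort": 1, "impact": 1},
    ]

    ranked = sorted(candidates, key=lambda t: (-t["impact"], t["effort"]))

    for t in ranked:
        print(f"{t['name']} (impact {t['impact']}, effort {t['effort']})")

Anything that consistently sorts to the bottom of a list like this is a good candidate for “not worth it.”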

10 Email Test Ideas to Try Right Now

Email marketers don’t need me to tell them that they should be testing. Testing is an integral part of any successful email marketing strategy. Why?

  • There are so many things one can test in order to improve performance, and
  • The technology makes it very easy to create a split test and measure results

What if you agree that testing is important, but don’t know what to test? Then this post is for you.

Here are 10 A/B tests you can run in your email marketing campaigns today:

  1. Short vs. Long Subject Line
  2. Images vs. Text Only
  3. Text Links vs. Buttons
  4. Long Form Copy vs. Bulleted List
  5. Discounts as $ vs. %
  6. From Name = Company vs. Person’s Name
  7. Personalize the Subject Line vs. Not
  8. Show the Offer vs. Click to Find Out
  9. Send in the Morning vs. Send at Night
  10. Descriptive Subject Line vs. Clever

There are thousands of different tests you can run. Hopefully this list gets the ideas flowing. The more you test, the more likely you are to find a new formula that works better than what you are doing today.
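Most email platforms will set up the split for you, but the mechanics are simple if you ever need to do it by hand: randomly assign each subscriber to one of two groups. A minimal sketch in Python, with a made-up subscriber list:

    import random

    # A 50/50 split for test #1 (short vs. long subject line).
    # Subscriber addresses here are made up.
    subscribers = [f"user{i}@example.com" for i in range(1000)]

    random.shuffle(subscribers)        # randomize before splitting
    midpoint = len(subscribers) // 2
    group_a = subscribers[:midpoint]   # gets the short subject line
    group_b = subscribers[midpoint:]   # gets the long subject line

    print(len(group_a), len(group_b))  # 500 500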

Statistical Significance, Explained

What is statistical significance?

Wikipedia defines it this way: “In statistical hypothesis testing, statistical significance is attained whenever the observed p-value of a test statistic is less than the significance level defined for the study.”

In simpler terms, statistical significance is the point at which we can confidently conclude that the results of a test we are running are real, and not just a coincidence.

Why does statistical significance matter?

As marketers, we should love testing. We should test everything.

Some of the most common tests marketers do today include pricing, email subject lines, website (conversion rate optimization), and advertising copy.

Most tests are simple A/B tests. We test one version directly against another and compare the results. But if you don’t measure for statistical significance, those results might lie to you.

For example, if you don’t have enough visitors to your website to achieve a statistically significant result, the “winner” of your test may be the winner for any number of reasons and not necessarily because of the changes you made between that version and the other.
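To make that concrete, here is a worked example with made-up numbers. Say version A gets 50 clicks on 1,000 impressions (5.0% CTR) and version B gets 65 clicks on 1,000 impressions (6.5% CTR). That looks like a 30% lift for B, but a standard two-proportion z-test says the evidence is weak:

    pooled CTR = (50 + 65) / (1,000 + 1,000) = 0.0575
    standard error = sqrt(0.0575 × 0.9425 × (1/1,000 + 1/1,000)) ≈ 0.0104
    z = (0.065 − 0.050) / 0.0104 ≈ 1.44  →  roughly 85% confidence

That is below the usual 95% threshold, so at this sample size the “lift” could easily be a coincidence.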

When you do achieve a statistically significant result, you can be confident – at the level you chose, typically 95% – that the changes you made directly caused the improvement or decline in performance.

Here is a quick calculator for statistical significance that you can plug your test results into.

The Most Common Excuses for Not Testing

I talk to marketers all the time who tell me that they are hard at work trying to improve the quality of their marketing efforts. They’re working on strategy and pricing, lead generation and follow-up. They want to improve the ROI of their campaigns and grow the business.

Then when I ask them if they’re testing, they tell me “not right now”.

It’s infuriating. As a marketer, you are never done testing. There are so many things to test.

Here are some of the most common reasons they give me for not running any tests:

1. “Not enough time/Too busy with other things.”

You do have enough time to test. I know that because you have enough time to do other things. Your goal is to improve the ROI of your marketing campaigns, and the single best way to do that is by running constant tests. Tests help you find the winning combination of placements and offers along your conversion funnel. Testing should not be a side project or something on the back burner; it should be a core part of your process and planning.

2. “Don’t know what to test.”

This is a lazy excuse. There are an infinite number of things a marketer can test. Every word, every image, every ad, every email can and ultimately should be tested. I think the problem is that you don’t give yourself time to step back and think creatively about what you are doing. Book an hour or two with your team and brainstorm a list of every possible test you can run. Then prioritize them by how hard they are to set up and how much impact they might have.

3. “Don’t have the data to measure the results.”

This is one of the more legitimate excuses I’ve heard, but it should not stop you from testing. If you don’t have the data, make a plan to get it. Work with IT, finance, or sales to figure out what you need to measure and what is possible with the information you have. Find technologies and platforms that are built for testing and will gather that information and present it to you. If you have something worth testing, data should never hold you back.

4. “Testing has not worked in the past.”

Some people tell me they have stopped running tests because testing has not worked for them. Either there was no clear winner, the control version always won, or the results could not be trusted because of external factors. But I always say: just because a test didn’t yield the result you wanted doesn’t mean it didn’t work. Every test with no clear winner, or where the new version fails to improve performance, tells you something about your business. A losing test is just as important as a winning test for determining how to proceed and how to get better.

Conclusion: there’s no excuse for not testing. It’s too important.