Don’t Judge a Landing Page by its Source


Landing pages are some of the most important pages on our website. They are the pages that people will see when they first click through on an ad, and give us a chance to make the kind of first impression that piques their interest and gets them moving through the sales funnel.

And because landing pages have such an outsized influence on the marketing process, these are the pages we are most likely to test. We want to know which pages lead to higher conversion rates and lower acquisition costs.

But a word of warning for all the would-be optimizers out there. Do not judge a landing page by its source.

Let me explain.

While it can be tempting to open up your Google Analytics account, jump to the Landing Pages report, and compare the metrics of one page against another, here is why that is not a good idea.

To truly test whether one landing page works better than another, we need to make sure that all other factors are equal. There are a lot of things that might influence the performance of a landing page, such as whether visitors are seeing it on mobile or desktop, the action we are asking them to take when they land on the page, what the ad said that brought them there, and of course, the source.

Visitors from Facebook might perform differently from visitors from Google, Bing, Twitter, Yahoo, and so on.

So in order to truly test landing page performance, we must ensure that the types of visitors getting to the pages are the same. How do we do that? We test different pages in the same ad groups.

In AdWords, we duplicate ads and change the landing page. In Facebook, we create two ads in the same ad set and send them to two different landing pages. For our banners, we create campaigns that use the same ads but send traffic to two different pages.

Instead of using Google Analytics to determine what pages are working best, we use the platforms themselves to split the traffic between multiple pages and report back on performance.

Data and analytics are great for marketers. But unless you are wary of all the ways data can deceive you, you risk making poor decisions with it.

How to Judge the Results of a Price Test


So you are running a price test. But how do you determine which price wins?

It may seem like a dumb question, but it’s not. Sometimes the obvious answer is not the right one.

First, let’s establish the goal of the price test. In most cases, your goal will be to sell more of Product X. But you don’t care about just the raw volume of products sold. You care about revenue. And you care about profitability.

However, some companies use products as loss leaders, offering low prices to get customers in the door in hopes that they spend more money down the line. In that case, you might look at raw customers or sales to judge a winner.

And when launching a new product, your goal might be to drive as much revenue as possible, without caring as much about profitability. That would also change the metrics you use to judge the results of your price test.

But assuming your goal is profitability, you are going to measure total contribution. To do that, you will need to know your variable cost per unit sold (marketing cost + cost of goods). Your contribution margin is the difference between the revenue and the variable cost.

If you sell 20 X’s at $20 per, and the variable cost per unit is $5, then you made $300 in total contribution ($20 – $5 = $15 per unit; $15 × 20 units = $300).

And if you sell 25 X’s at $18 per, with a variable cost still at $5, then you made $325 in total contribution ($18 – $5 = $13 per unit; $13 × 25 units = $325).

So in that case, $18 is a better price.
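
If it helps to see that comparison spelled out, here is a minimal Python sketch of the same arithmetic. The prices, unit volumes, and $5 variable cost are simply the made-up numbers from the example above.

    # Compare two price points by total contribution, using the made-up numbers above.

    def total_contribution(price, units_sold, variable_cost_per_unit):
        """Contribution per unit multiplied by the number of units sold."""
        return (price - variable_cost_per_unit) * units_sold

    contribution_at_20 = total_contribution(price=20, units_sold=20, variable_cost_per_unit=5)  # 300
    contribution_at_18 = total_contribution(price=18, units_sold=25, variable_cost_per_unit=5)  # 325

    print(f"$20 price: ${contribution_at_20} total contribution")
    print(f"$18 price: ${contribution_at_18} total contribution")
    print("Winner:", "$18" if contribution_at_18 > contribution_at_20 else "$20")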

Is There Such a Thing as Too Much Testing?

If you are a frequent reader of this blog, then you know that I am a big proponent of testing. I use the phrase ‘Always Be Testing’ more than I probably should.

Recently, a reader sent in a question that caught my eye. She asked, “Is there such a thing as too much testing?”

Though my immediate reaction was to say “no”, I had to admit that that would have been wrong. Of course there is such a thing as too much testing. Too much of anything is bad, right?

But instead of leaving it there, I decided to dive deeper into the question. If there is such a thing as too much testing, how much is it? How do we know if we’re guilty of too much testing?

I started with the objective: why we test. We test things in order to improve performance. Therefore we prioritize tests that are A) simple, meaning they require limited resources, and B) likely to have the greatest impact. From there, we look at tests that are not as simple but still have big impact potential.

For most companies, that is a lot of testing. There are probably enough tests to run in those two categories that you will never run out of tests.

But let’s imagine you do. Next you start to look at things that are simple but don’t have big impact potential. That is the point at which you should ask yourself whether a test is worth running. If the potential lift in performance from the test is smaller than what some other activity could deliver, that test is not worth it.
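
To make that prioritization concrete, here is a rough Python sketch that ranks a hypothetical test backlog by impact relative to effort. The test names and the 1–5 scores are invented for illustration; they are not a formula from this post.

    # Hypothetical test backlog, scored on simple 1-5 scales for potential impact and effort.
    backlog = [
        {"test": "New landing page headline", "impact": 4, "effort": 1},
        {"test": "Price test on Product X", "impact": 5, "effort": 2},
        {"test": "Button color tweak", "impact": 1, "effort": 1},
        {"test": "Rebuild checkout flow", "impact": 5, "effort": 5},
    ]

    # Simple, high-impact tests float to the top; low-impact ideas sink toward the
    # bottom, which is where "too much testing" starts.
    for idea in sorted(backlog, key=lambda t: t["impact"] / t["effort"], reverse=True):
        print(f"{idea['test']}: impact {idea['impact']}, effort {idea['effort']}")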

So when you start to prioritize tests over other activities that have more potential to help the company, that is too much testing. As with anything else, you have to manage tests alongside all other uses of your time and your team’s time, and prioritize those things that have the greatest potential.

Statistical Significance, Explained

What is statistical significance?

Wikipedia defines it this way: “In statistical hypothesis testing, statistical significance is attained whenever the observed p-value of a test statistic is less than the significance level defined for the study.”

In simpler terms, statistical significance is the point at which we can confidently conclude that the results of a test we are running are real, and not just a coincidence.

Why does statistical significance matter?

As marketers, we should love testing. We should test everything.

Some of the most common tests marketers run today include pricing, email subject lines, website changes (conversion rate optimization), and advertising copy.

Most tests are simple A/B tests. We test one version directly against another, and we compare the results. But if you don’t measure for statistical significance, those results might lie to you.

For example, if you don’t have enough visitors to your website to achieve a statistically significant result, the “winner” of your test may be the winner for any number of reasons and not necessarily because of the changes you made between that version and the other.

When you do achieve a statistically significant result, you can be confident, at the significance level you chose, that the changes you made drove the improvement or decline in performance.

Here is a quick calculator for statistical significance that you can plug your test results into.
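
If you would rather compute it yourself than rely on a calculator, here is a minimal Python sketch of a two-proportion z-test, a standard way to check significance for a simple A/B conversion comparison. The visitor and conversion counts are made up for illustration.

    from math import sqrt, erf

    def normal_cdf(z):
        """Standard normal cumulative distribution function, via the error function."""
        return 0.5 * (1 + erf(z / sqrt(2)))

    def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
        """Two-sided two-proportion z-test for a simple A/B conversion test."""
        rate_a = conversions_a / visitors_a
        rate_b = conversions_b / visitors_b
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        std_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (rate_b - rate_a) / std_error
        return 2 * (1 - normal_cdf(abs(z)))

    # Hypothetical results: 200 conversions from 5,000 visitors on version A
    # versus 250 conversions from 5,000 visitors on version B.
    p_value = ab_test_p_value(200, 5000, 250, 5000)
    print(f"p-value: {p_value:.4f}")
    print("Significant at the 0.05 level" if p_value < 0.05 else "Not significant yet")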

How to Test Something Halfway and Always Fail

As marketers, we must test. Testing is how we tell if something works or not. It’s how we make effective changes to our campaigns, our websites, our pricing, and more.

Testing is also how we find new markets for growth. We test new audiences for existing products, new products for existing audiences, and more.

We test because it is expensive to fail. And whenever there is a way to try something quickly, for less money, we must do it.

HOWEVER….

And this is a big however. Sometimes, the way we test something is wrong. It’s wrong because it doesn’t tell us anything. It tells us the test failed, and we take that to mean the whole idea was invalid.

To explain what I mean, take a look at this example that I just made up:

Company X has a successful consumer drone business. They make several models and sell them in the US. Looking for growth, Company X believes that they can repeat their success in other countries. But the investment to set up local business units (to “do it right”, their CEO says) is very high. So they decide to “test” a few markets by simply doing the same thing that they’re doing in the US, but expanding the advertising to a few other countries. It’s quick and easy and should indicate whether or not there is demand outside the US.

The problem comes when this fails. Company X now has to ask themselves a tough question. Did it fail because there is no demand outside of the US and we were wrong about our growth prospects? Or did it fail because we didn’t do it right and never put up the initial investment required to make it work?

Make sense? The point is a simple one: we have to understand the limits of testing. It is absolutely the right strategy, most of the time. But in order to get the benefits of testing, we must learn to acknowledge its shortfalls, lest we fall into the trap of believing false results.