Interpreting data is hard – even for experts.
Figuring out how to interpret marketing data is not easy.
There has been a lot in the news lately about whether classic scientific studies are as replicable and accurate as people think. Even the p < .05 threshold, a hallmark of statistical credibility, is being reconsidered as a potential source of too many false positives.
If even the experts struggle to interpret their data, what chance do marketers have?
Think like an epidemiologist.
Before I entered the world of marketing, my field was epidemiology. Epidemiologists study the incidence and distribution of disease; we help construct and interpret scientific studies. Obviously, a background in study design and analysis is helpful when you look at marketing data. So, let me teach you some of the basics.
First and foremost, approach data interpretation as a process, not a one-off.
You are not likely to get a clear answer to a big question with a single test. More likely, you will run a series of tests to home in on an answer and to verify that the results you first saw hold up.
Remember, too, that there is a big difference between analyzing click-through rate trends on your PPC ads and testing whether a new landing page design will increase demo requests.
The former requires time-series data and basic statistics such as means and differences. You will bring your knowledge of your business's seasonality and competitors to your interpretation.
Hypothesis testing, on the other hand, means you think you know what will help your landing page or email perform better, and you are setting up a test to see if you are correct. This is what marketers do when they conduct A/B testing.
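To make that concrete, here is a minimal sketch in Python of one common way to check an A/B result: a two-proportion z-test on conversion rates. The visitor and conversion counts are made up, and the function name is mine, not from any particular testing tool.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is the difference between two conversion rates
    bigger than random chance would plausibly produce?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that the pages perform the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 1 - math.erf(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical counts: variant A converted 40 of 1,000 visitors, B 62 of 1,000
z, p = two_proportion_z_test(40, 1000, 62, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 2.24, p ≈ 0.025 with these made-up numbers
```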
Here are the essentials for how to interpret marketing data …
Sample Size
The biggest problem marketers have is too small a sample size. Remember: you are never testing for "truth" with these kinds of tests. All you can gauge is how likely it would be to see a result like this by random chance alone. That's it. And the smaller your sample size, the less reliable your results.
For healthcare technology marketers, this is a big problem because your websites do not generate traffic the way Zappos or H&M does. With very little traffic, you do not have a large enough sample to test your hypothesis.
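You can estimate how much traffic a test actually needs before you run it. The sketch below uses a standard sample-size approximation for comparing two proportions, assuming the conventional 95% confidence and 80% power; the 4%-to-5% lift is hypothetical.

```python
import math

def visitors_needed_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per variant to detect a lift from rate p1
    to rate p2 at 95% confidence (z_alpha) with 80% power (z_beta)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: baseline page converts at 4%; you hope the new one hits 5%
print(visitors_needed_per_variant(0.04, 0.05))  # about 6,700 visitors per variant
```

For a site getting a few hundred visits a month, that is years of traffic for a single test, which is why small-but-real effects are so hard to confirm on low-traffic sites.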
Bias
In epidemiology, bias refers to an error in the way you design or implement your study. It can prevent you from seeing any real effect, and it can give false positives. That means you think you are seeing a “real” result, but it’s an anomaly of your test design.
Selection bias means that the way you have chosen your test groups has distorted the result. For instance, let’s say you run an A/B test for your new landing page on the first 1000 visitors to your site. However, you discover later that, due to a conference you are sponsoring, all those people happened to be Nursing Supervisors. That means that your result might be true of Nursing Supervisors but not your target audience, overall. TIP: To prevent selection bias, define your test group criteria carefully before running the test. Consider randomizing the test groups, if possible.
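Randomizing does not have to be complicated. One possible sketch, assuming you can key assignment off a visitor ID: hash the ID so each visitor's group is stable and independent of arrival order. Most testing tools do this for you, but it is worth knowing what "randomized" should mean.

```python
import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Assign a visitor to a test group by hashing their ID. The same
    visitor always sees the same variant, and arrival order (say, a burst
    of conference attendees) no longer decides group membership."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Made-up visitor IDs
for vid in ["u1001", "u1002", "u1003", "u1004"]:
    print(vid, assign_variant(vid))
```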
Information bias means that the way you collected your information differs between test groups, thereby distorting the result. This can take different forms. For instance, observer bias can occur if you are interviewing focus groups but the interviewer asks more involved questions of radiologists than internists. There is also recall bias, in which people remember things differently based on anything from the severity of a situation to wanting to look better in front of the interviewer. For instance, someone whose cloud storage system has crashed may remember details differently than someone whose system has always worked well. By the same token, people may report healthier lunches than they actually ate. TIP: To prevent information bias, use standard questions for all interviews. Where possible, use documents to verify information, as opposed to relying on memory.
Confirmation bias means that you are seeing what you want to see. Maybe you wrote the landing page copy, so you unconsciously want to see it do better than the previous version. This could lead you to ignore data favoring the original version, or to interpret borderline results in your favor. TIP: Have more than one person review the data.
Confounding
Confounding means that a third factor related to what you are studying is modifying your result. For instance, let's say your test shows that your new landing page converts very well with the CNOs on your list but not with the CEOs. You may think this has to do with CEO interest in your product – but it turns out, your CEOs are all color-blind and could not see the color changes you made. In this case, a third factor – color blindness – was related to your audience segments and skewed your result. TIP: To prevent confounding, stratify your data to see how other factors may be impacting your results. In this example, you would stratify by color blindness and by job title.
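If you can export your raw test records, stratifying is straightforward to do yourself. The sketch below uses plain Python and made-up records tagged with job title and the color-blindness factor from the example above; the field names are illustrative, not from any real dataset.

```python
from collections import defaultdict

# Made-up test records, each tagged with the factors you want to examine
visits = [
    {"title": "CNO", "color_blind": False, "converted": True},
    {"title": "CNO", "color_blind": False, "converted": False},
    {"title": "CEO", "color_blind": True,  "converted": False},
    {"title": "CEO", "color_blind": False, "converted": True},
]

def conversion_by_strata(records, *factors):
    """Group records by one or more factors and compute conversion rates,
    so a hidden third variable shows up instead of hiding in the average."""
    counts = defaultdict(lambda: [0, 0])  # stratum -> [conversions, visits]
    for r in records:
        key = tuple(r[f] for f in factors)
        counts[key][0] += r["converted"]
        counts[key][1] += 1
    return {key: conv / total for key, (conv, total) in counts.items()}

print(conversion_by_strata(visits, "title"))                 # job title alone
print(conversion_by_strata(visits, "title", "color_blind"))  # stratified
```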
Think about your test, not just what you are testing.
To interpret your marketing data accurately, you need to think beyond what you are testing to the way you are conducting the test.
In addition to the issues highlighted above, ask yourself whether you are running so many tests that you are bound to see a few false positives by chance alone. If you are using conversion optimization software, double-check that your testing tool is functioning and that you have set up your test correctly.
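One common, conservative guard against that multiple-testing problem is the Bonferroni correction: divide your p-value threshold by the number of tests you are running. A small illustration:

```python
def bonferroni_threshold(alpha=0.05, num_tests=1):
    """Tighten the per-test p-value threshold so the overall chance of a
    false positive across all tests stays near alpha."""
    return alpha / num_tests

# Running 10 variants against the control? Demand p < .005 from each test.
print(bonferroni_threshold(0.05, 10))  # 0.005

# Why it matters: at .05 per test, 20 independent tests give you roughly a
# two-in-three chance of at least one false positive by luck alone.
print(1 - (1 - 0.05) ** 20)  # about 0.64
```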
Whether you are analyzing trend data, setting up split tests or reviewing focus group interviews, start thinking like an epidemiologist. You will have more insight into your data – and you will make stronger decisions for your marketing.