Marketing data is great – it can help you make the right business decisions. But if you aren’t careful, it can also lead you astray. Listen to this episode of Marketing in Minutes to find out when marketing data is not helpful.
Don’t always trust the marketing data – and know when the data can be trusted.
In this episode of Marketing in Minutes, you’ll learn:
- When marketing data could be wrong
- How much data you need before you can trust it
- Why you cannot trust yourself if you want to learn from the data
All in under 10 minutes.
More Information on Using Marketing Data
Here is where you can find out more about collecting and using marketing data:
And for a complete low-budget marketing and traffic strategy, read our book, The Social Traffic Code:
You can read the full episode transcript below:
Data is very important for modern marketers.
But you have to be careful not to throw common sense out of the window just because data appears to tell you something.
Often the data tells a different story than what you see in your mind. Or it doesn’t tell you anything at all.
I’m Jonathan Gebauer and this is Marketing in Minutes.
Welcome to Marketing in Minutes – the podcast that gives you everything you need to know about one Marketing topic per episode.
I’m Jonathan Gebauer, and today – let’s talk about some examples in which data wasn’t helpful for marketing.
“You always have to look at the data – measure everything.”
That’s advice that I give every single online entrepreneur who is just starting out.
Data is great. Collecting and reviewing data on key metrics can tell you where your business is headed, show you what you are doing right and what you should improve, and help you decide how to approach the next task in your business journey.
But data is also a very dangerous companion for marketers.
In 2015, an entrepreneur I was working with told me: “Newsletter marketing can’t work for us. We’ve looked at it, and we have very low open rates and a zero percent click rate.”
He didn’t say: “We didn’t get it to work,” he said: “It can’t work for us.”
I didn’t doubt the data – but I doubted that the data really meant what that entrepreneur thought it meant.
And sure enough, when I looked at the data myself, it told me a different story:
They had a couple of hundred subscribers, acquired through various sources over the course of something like 6 to 12 months.
In all that time, they had sent 3 emails. All three were announcement-type emails – similar in wording to a traditional press release.
And: No email had a specific call-to-action.
Most subscribers had been on the list for at least a couple of months before they received their first email from that company.
By that time, most of them had forgotten that they signed up for anything.
And people who receive newsletters that they don’t remember signing up for are less likely to open and more likely to mark your email as spam.
But even those who did open the emails just got a very clinical communication – and almost no reason to click.
Needless to say, their open and click rates no longer surprised me.
This is a classic example of reading your data wrong – instead of deciding “email marketing doesn’t work for us,” the decision should have been: “We need to send better emails more often – and then see how it goes.”
But the truth was that they were uncomfortable with writing emails to a list – as many people are. And they looked at the data and found that it seemed to suggest what they wanted it to suggest: That it wouldn’t work for them anyway.
This is one of the biggest problems when looking at data you have gathered: You want to read something into it – and you have to be very careful not to do that.
In this case, the result they got went against common sense – email marketing does work, and it’s still one of the best performing marketing tactics. So, if it doesn’t work for you, there has to be a specific reason for that, and you shouldn’t write it off until you know exactly what that reason is.
And until you know that reason, you run test after test after test.
Which brings us to another problem with data – and that is how to run tests and when to decide whether a test was successful and you have sufficient data.
The most common tactic used for testing different marketing assumptions is A/B testing. A/B testing means you run two variants of a specific marketing asset past your audience and track which variant performs better.
For example, if you have a landing page, you may run two different headlines against each other and see which headline leads to more engagement or conversions. One part of the audience sees the first headline, the other part sees the second.
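The split between the two audience groups is usually done deterministically, so a returning visitor always sees the same headline. A minimal sketch of how that bucketing can work – the function name and experiment label here are illustrative, not something mentioned in the episode:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "headline-test") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing the user ID together with an experiment name means each
    visitor always gets the same variant, and across many visitors
    the split comes out roughly 50/50.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the assignment depends only on the user ID and the experiment name, you can recompute it anywhere (server, client, analytics pipeline) without storing which variant each visitor saw.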
Here is the problem: conversion rates for these types of tests are usually between 1 percent and 10 percent. That means if you run the test on 100 people, you can only expect between 1 and 10 conversions.
A/B testing is often used to test minor changes between two variants – which means the difference in conversion rates will also be small. And that, in turn, means you have to wait a long time before you see a clear result that allows you to make an informed choice between the two variants.
When you are starting out and are only getting about 50 to 100 clicks per day, you probably don’t have the time to wait for 1,000 to 10,000 clicks before you make the choice. So A/B testing becomes a very tricky tool that probably won’t help you a lot.
You can still use A/B testing to gather data – but you have to use bigger changes in the experiment to identify bigger changes in the conversion rates. You may be able to test two completely different landing pages against each other and after a couple of days, you may see that one definitely performs better than the other.
But you won’t be able to identify whether a red or a green button performs better.
If you try to identify the effect of granular changes in your marketing data through A/B testing, you need more traffic and more data.
Data is not as easy to read as many marketing pros want you to believe. There are many problems with data, especially if you are still starting out, don’t have a lot of traffic and can’t gather data quickly.
The most successful marketers use data, and they use data a lot, but they are not totally fixated on data. If the data goes against common sense, they look twice and will run additional tests before they believe the data they have.
And very often, data that doesn’t make sense has a simple explanation: For instance, if a landing page converts at a very low rate, and you expected it to convert at a higher rate, you should first check whether every button is working, and whether all your integrations are set up correctly before blaming the landing page itself.
That’s all I have for you today – I hope you enjoyed this episode of Marketing in Minutes. If you like this podcast, please share and subscribe, and if you don’t, let me know how I can improve.
If you want to read more about collecting and using marketing data, check out the show notes, which have a few helpful links to more information. You can find them at blog.thesocialms.com/mim-21.
And for a complete low-budget marketing strategy that will give you enough free traffic to run any A/B test you want, check out our book, The Social Traffic Code.
I’m Jonathan Gebauer and you’ve been listening to Marketing in Minutes.
That’s it for today – take care, bye!