Archive for the ‘Email Marketing Statistics’ Category
Posted by David McMurray on May 2nd, 2014
As I have mentioned in earlier blogs, 400 returned surveys are enough to adequately represent virtually any large population. Logically, you'd think that the required sample would increase proportionately with the population, but this is incorrect; fully explaining the math behind random sampling would fill an advanced statistics course. For many years in my Survey Essentials course, I used a "soup" analogy to explain this principle. It's a practical way to look at sampling.
Let's say you have one gallon of soup in the pot, and you want to sample it for taste, temperature and ingredients. Intuitively, you'd give the pot a good stir to ensure it's all mixed together and take a tablespoon as a sample. That one tablespoon constitutes your 'representative sample,' and based on the results, you can decide whether to declare your soup ready for consumption or whether further testing is necessary.
Now suppose you're making 50 gallons of soup in a huge pot. That's 50 times more soup than in the previous example. Ready to take the sample? Does the sample also need to be 50 times bigger? After giving the huge pot a stir to make sure it's all mixed together, what would you grab to take the sample: a tablespoon, a ¼-cup measure, or perhaps a gallon jug? Intuition might tell you that you'd better drink a gallon of soup to be sure it's ready, since 50 gallons is a big pot of soup. But this isn't the case; just a big tablespoon will do. Sure, for 50 gallons you may use a couple of tablespoons, but you certainly don't need a full gallon. The most important detail is that the sample is completely random, like making sure the soup is all stirred up.
Still don't believe me? You can take that college course, or you can use any number of online sample-size calculators to do some fact checking; just Google 'sample size calculator'. Whichever calculator you use, here are the populations and the required sample sizes:
Population | Required Sample Size
100 | 80
1,000 | 278
10,000 | 370
100,000 | 383
1,000,000 | 384
1,000,000,000 | 384
You'll see that the required sample size does not exceed 384, even for populations of 1 billion. We round up to 400 just for simplicity. This is based on a 95% confidence level and a confidence interval (margin of error) of 5%, both of which are industry standards. It also assumes a generally homogeneous population; if there are specialized strata, they must be taken into consideration.
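If you'd rather check the math yourself than trust an online calculator, the standard approach behind those tools is Cochran's sample-size formula with a finite-population correction. Here is a minimal Python sketch of it, using the 95% confidence level (z ≈ 1.96) and 5% margin of error mentioned above; the function name and the rounding convention are my own choices, and different calculators round slightly differently.

```python
def required_sample(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula with the finite-population correction.

    p=0.5 is the most conservative assumption (maximum variance).
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # ~384.16 for an unlimited population
    # Finite-population correction shrinks the requirement for small populations.
    return round(n0 / (1 + (n0 - 1) / population))

# Required samples for a range of population sizes
sizes = {pop: required_sample(pop) for pop in
         (100, 1_000, 10_000, 100_000, 1_000_000, 1_000_000_000)}
```

Running this reproduces the plateau described above: the requirement climbs quickly for small populations, then flattens out and never exceeds 384.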
So, let's settle the debate over required sample size once and for all: the required sample never exceeds 384 (rounded up to 400). Now we can focus on creating valid surveys, conducting appropriate data analysis, and following up on surveys effectively.
Posted by Bill Leming on February 24th, 2014
We all know that temptation is a powerful force in our personal lives. It’s also a powerful force in our professional marketing lives, particularly when one begins to look at distributions of the number of services per household, the number of individual sales per customer, or the number of sales dollars per customer. In Financial Services and many retail services sectors, the number of single service households is far greater than the number of two service households which, in turn, is far greater than the number of three service households, etc. And that’s where temptation rears its insidious head.
As marketers and as managers we focus on that great big, juicy opportunity of selling a second service to all those single service households. And why wouldn’t we? The number and percentage are typically much larger than any other segment so the opportunity is a huge, ripe plum just waiting to be eaten. By definition they are customers who somehow chose to do business with you, so while they might not be advocates, they’re still customers who must have additional service needs that marketers just haven’t yet satisfied. And then the “numbers” temptation…if we could just get ⅓ or even ¼ of those single service households to use a second service, look at the positive impact on our customer retention rates, our retail asset base, and our bottom line.
The problem is that the cost to sell these single-service customers a second service is generally pretty steep (and you can quantify exactly how steep in any number of ways), especially when compared to the cost of selling an eight-service household a ninth service.
And that's exactly where we should begin the up-selling effort: where it is most cost effective, which is not at the single-service level but at the eight-service household, or whatever the highest tier is within your organization. Almost no one has every service or product you offer, so begin from the top down. Follow this process and you'll eventually reach the single-service household, which is what everyone wanted at the outset.
By avoiding the temptation to begin with that juicy single-service plum, you'll spend your marketing dollars where the cost per new service sold is lowest first, then the next lowest, and so on, until your cumulative cost per new service/product sold is where you want it to be. In effect, you'll be able to go deeper into the customer file, ultimately down to the single-service customer level, precisely because your cost per new service sold at the higher services-per-customer tiers was so far below what you were willing to pay at the single-service level.
But temptation is what it is…tempting.
Posted by Rob Ropars on August 26th, 2011
We’ve all heard that if you’re in marketing, in particular email marketing, you should constantly be testing to maximize results. The most common test mentioned is the ubiquitous “A/B” split test, meaning a 50/50 list split to test one variable against another (graphics, copy, offer, layout, list, time of day, day of week, etc.).
But is an A/B test all you can or should do? If you have only a few thousand emails to work with, an A/B test may be all you can do while still ensuring statistically reliable results. If your list is smaller still, an A/B test might not make sense at all: with only a few hundred email addresses, a single split test will tell you nothing statistically beyond directional information. Instead, you may need to replicate the test over time, aggregate the results, and analyze your collective data over a longer period.
The first consideration is to quantify how many email addresses you need in each test cell to ensure you have a representative sample and, more importantly, to ensure the results are reliable. There is a lot of math and science behind this topic, and fortunately many math, science, and statistics sites offer free online sample-size calculators.
You must set up the test(s) correctly on the front end, with sufficient sample sizes and assumed response rates, to ensure the results on the back end are reliable, meaning they come with a confidence level you're comfortable with (we recommend 95% where possible). Again, free online significance calculators can assist. The key is to avoid the common mistake of merely eyeballing results and declaring winners and losers based on seemingly different response rates.
Before testing, you have to identify the goal, or the question you're trying to answer. We recommend that you actually write these down and then, as briefly and concisely as possible, describe the yardsticks you will use to determine your winner. As form follows function, the goals and objectives of the test, coupled with the means of measuring results, should drive the copy, graphics, and layout, ensuring the messages are properly structured and focused on the question you're trying to answer.
Let's say your goal is a higher click rate, and after an A/B test you find that "A" has a 2.70% CTR and "B" has 2.85%. It is a common mistake to simply subtract and conclude that "B" won by only 0.15%, which could lead you down the path of thinking it wasn't a significant result (i.e., a virtual "tie"). Or maybe you routinely just pick the higher percentage as the winner and run with it. Using a proper percent-change calculation, this is actually a 5.56% increase from "A" to "B."
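The distinction between the absolute difference and the relative lift is just arithmetic, and it's worth seeing spelled out. A quick sketch (the variable names are mine):

```python
a_ctr, b_ctr = 2.70, 2.85   # click-through rates, in percent

point_diff = b_ctr - a_ctr                        # 0.15 percentage points
relative_lift = (b_ctr - a_ctr) / a_ctr * 100     # relative increase of B over A
```

The relative lift works out to roughly 5.56%, a far more meaningful number than the 0.15-point gap that subtraction alone suggests.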
That may or may not be statistically significant, but as you can see it's a much larger increase than originally assumed. To determine whether the results are statistically significant, use one of the calculators: plug in each version's list size and its click rate (or open rate, conversion rate, etc., depending on the key metric you're analyzing), and it will instantly tell you whether the difference is large enough to be reliable at a 95% confidence level.
In this example, let's pretend I sent "A" and "B" to a random 2,000 people each. The calculations indicate that the difference is not large enough to be statistically reliable; in fact, the "B" cell's click rate would have to be at least 3.81% for the difference to be significant. But if you didn't analyze the results properly, you'd never know this.
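The online calculators typically run a two-proportion z-test under the hood, and you can replicate this example with a few lines of Python. This is a sketch under the assumptions in the example (2,000 sends per cell, so 2.70% ≈ 54 clicks and 2.85% ≈ 57 clicks); the function name is mine.

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Pooled two-proportion z-statistic for comparing two response rates."""
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (clicks_b / n_b - clicks_a / n_a) / se

# 2.70% vs 2.85% on 2,000 sends each (54 vs 57 clicks):
# |z| is well below 1.96, so not significant at a 95% confidence level
z_small = two_proportion_z(54, 2000, 57, 2000)

# 2.70% vs ~3.81% (54 vs 76 clicks): z just crosses the 1.96 threshold
z_threshold = two_proportion_z(54, 2000, 76, 2000)
```

Any |z| of at least 1.96 corresponds to significance at the 95% level, which is why the "B" cell needed roughly a 3.81% click rate before the difference became reliable.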
The other way to maximize your results is to avoid going straight to a full-scale A/B test. If the database for your email marketing campaign is large enough (again, calculate the minimum sample size), you can do a different kind of split test. First, split your list 10%/90% (ensuring it's random). Then split the 10% group in half, so you have two small test cells plus the remaining 90%.
Deploy your test versions to the two small cells, allow as much time as possible for activity to occur (twenty-four hours if possible), analyze the results, and then deploy the winner to the remaining 90%. That way you've done your best to maximize the campaign's results without going "all in" on a typical full-file A/B split.
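The mechanics of that 10%/90% split are simple to script. Here is a minimal Python sketch (the function name, fraction, and seed are my own illustrative choices); the essential step is the random shuffle before splitting, since a non-random split invalidates the test.

```python
import random

def split_for_test(addresses, test_fraction=0.10, seed=7):
    """Randomly split a mailing list into two equal test cells plus a holdout."""
    shuffled = list(addresses)
    random.Random(seed).shuffle(shuffled)        # randomization is essential
    n_test = int(len(shuffled) * test_fraction)
    cell_a = shuffled[: n_test // 2]             # receives test version "A"
    cell_b = shuffled[n_test // 2 : n_test]      # receives test version "B"
    holdout = shuffled[n_test:]                  # remaining 90% gets the winner
    return cell_a, cell_b, holdout

cell_a, cell_b, holdout = split_for_test(range(50_000))
```

With a 50,000-address list this yields two 2,500-address test cells and a 45,000-address holdout, which is comfortably above the minimum sample sizes discussed earlier.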
As with gambling, learn the rules, do the math, analyze the data and place your bets. Do it right, and the odds will swing in your favor.
Posted by Rob Ropars on June 26th, 2009
In 1594, Shakespeare wrote: “What’s in a name? That which we call a rose by any other name would smell as sweet…”. Although some words may be better than others to convey the meaning of a marketing term, resistance to change can keep lesser-qualified words in place. Take for example the word “open.”
It would seem to be an easy thing to understand. A door is open or closed (unless it's ajar; sorry, old joke), accounts are open or not, etc. When it comes to email marketing, though, "open" doesn't fully capture what we're trying to measure. Marketers (and those they report to) look to their Email Service Provider's reporting data to quantify the success of a campaign.
These include metrics such as delivery, opens, clicks, bounces, and unsubscribes. Savvy marketers know to review not only the immediate results but also performance over time, against similar prior campaigns, and alongside web analytics and ROI data. This provides a fuller measure of how an email performed over its life.
One statistic in the email realm has always tended to raise eyebrows: the open rate. An ESP typically records an open when the images in an HTML email are loaded. This sounds like a simple process, but there is a catch. It has become commonplace for email clients to block images by default, so your recipients must take an action to enable images before an open can be recorded. This affects not only how you should be designing campaigns, but how you interpret the results.
Industry figures vary, but the average "open" rate is often in the 20-25% range. I've spoken to several clients over the years who needed to reconfirm the meaning of their results after presenting campaign data internally. If your open rate was 20%, the natural assumption is that 80% of your list didn't open the email (i.e., the opposite). Nothing could be further from the truth; many of those recipients may have read the message with images blocked, so no open was ever recorded. But as they say, perception is reality.
Posted by admin on December 11th, 2008
Will it Play in Peoria?
In years past, prior to the launch of a major nationwide promotion, advertisers (usually packaged-goods firms) would often test the promotion in Peoria; in fact, Peoria is considered the test-market capital of the world. Email is the new Peoria, only cheaper and much more accessible. There is little debate that email marketing is one of the most cost-effective methods for getting your message out. Email marketing is not, however, one-size-fits-all. There are customers and prospects who don't have email accounts (albeit fewer and fewer these days). There are also, unfortunately, customers who simply refuse to subscribe to your email communications despite your great opt-in process.
Email can be your greatest ally in creating off-line campaigns that produce better returns than they might otherwise. If you have created a rich email database, you have a powerful sandbox in which to experiment with offers and creative designs that you can translate into off-line campaigns. Sure, the medium is different, but the fundamentals are similar, so email is a great starting point, and in this economy, it can eliminate false starts that can eat into your marketing budget.
So as you start thinking about your next marketing campaigns, start with what I will call email focus groups: small-run tests of the different aspects of your messaging and creative to, as they say, "see how it plays in Peoria."