Archive for the ‘Survey Tips’ Category
Posted by Nick Murphy on December 21st, 2015
Imagine seeing a piece of art that is so beautiful it captures your attention, heightens your senses and provokes a feeling. These emotions are not directly connected to survey design, but just like captivating art, if you create a survey that touches a nerve, you’ll find yourself in a position where your customers will want to answer your questions. But where do you begin?
1. Think about the story you want to write. Before you can even begin to put together a survey, research the motive behind creating it. Pin down exactly what you are looking to accomplish and write questions your customers can answer that will help you do so. Forget about simple yes-and-no answers. It’s important for you to understand why a customer feels the way they do. Ask them why, and don’t focus only on negative scores. Positive feedback is just as beneficial. According to Huffington Post, “If a customer gives you the highest rating, you need to know why, so you can replicate that same experience and outcome with other customers and clients.” (1)
2. Build your survey upon a fundamental aspect of growing your business. One of the most central questions you can ask is, “How likely is it that you would recommend this company to a friend or colleague?” Harland Clarke Digital recommends using the “Net Promoter Score” method. Extensive research has shown that, “Net Promoter Score®, or NPS®, acts as a leading indicator of growth. If your organization’s NPS is higher than those of your competitors, you will likely outperform the market.” Not only does this question address the participant’s interest in your company, it also indicates your value and their loyalty. (2)
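The NPS calculation itself is simple arithmetic over the standard 0–10 recommendation scale: the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6). A minimal sketch in Python, using made-up ratings for illustration:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 'likelihood to recommend' ratings:
    % promoters (scores 9-10) minus % detractors (scores 0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# made-up sample: 4 promoters, 2 detractors out of 8 respondents
print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # -> 25.0
```

Scores of 7–8 (“passives”) count toward the total but neither add to nor subtract from the score, which is why NPS can range from -100 to +100.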
3. Don’t overcomplicate your questions. Using technical jargon and run-on or complex sentences only makes the survey participant more agitated and less likely to complete your survey. According to Harvard University, “Words used in surveys should be easily understood by anyone taking the survey. Examples: “Do you support or oppose tort reform?” “Should people held on terror related crimes have the right of habeas corpus?” (3)
Understanding these main principles of survey design is just a stepping stone into the world of knowing and understanding your customers. Forms + Surveys from Harland Clarke Digital makes it easy for you to communicate with your customers and gather valuable insights that can help your organization grow.
To learn more, contact our HCD Support team at 630-303-5000 or simply e-mail your questions to email@example.com.
Posted by Margaret Henry, Ph.D. on April 7th, 2015
One of the most important issues when conducting survey research is to determine the number of returned survey responses necessary to produce valid and reliable results. To this end, we need to consider statistical significance.
To meet these requirements, the accepted percentage for both the confidence level and confidence interval must be determined. The confidence level describes the uncertainty of a sampling method. The generally accepted confidence level in survey research is .95, or 95 percent. Meeting this confidence level indicates that if the study were conducted 100 times, the results would fall within the same margins 95 percent of the time.
The confidence interval, also referred to as the margin of error, denotes the range of acceptable error for the data. The most readily accepted margin of error for survey research is five percent, whereby the percentages of data results are known to fall within this margin of error.
If we were to conduct a survey and receive a response sample large enough to meet both the accepted confidence level and confidence interval, we could assume the following:
- If we were to conduct this survey 100 more times, we would produce the same results 95 percent of the time, with the true percentage falling within a range of -5 to +5 percent of the identified percentage. This is the acceptable confidence level and interval for statistically considering the collected data both valid and reliable.
Once the confidence level and interval are determined, then the required sample size can be determined. The formula to calculate sample size requirements for statistical significance takes into account many factors, and the calculation is neither intuitive nor linear. Typically, the smaller the population, the larger the percentage of it required for the sample. For example, a population of 100 individuals would require a sample size of 79 responses. However, at a certain point, the sample size necessary to meet statistical significance in terms of representing the entire population reaches a maximum of 384 (many researchers round the number to 400).
A practical example: if you were to survey a population and receive the appropriate number of responses to meet the 95 percent level and five percent interval requirements, you could be confident that the data was both reliable and valid, and interpret the results in the following manner:
A statistically significant number of participants responded to the following question:
“How satisfied are you with Product X?”
If 80 percent responded that they were satisfied, then you can be assured that if you were to ask this same question to the required number of individuals in a given population 100 times, the results you would consistently achieve would be that satisfaction would be reported by between 75 percent and 85 percent, for 95 of the administrations.
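The flat ±5 percent range quoted above is the worst-case margin, which assumes a 50/50 split in responses. For an observed proportion like 80 percent, the standard normal-approximation margin of error comes out slightly tighter. A sketch, assuming roughly 384 responses (the plateau figure mentioned earlier):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    # normal-approximation margin of error for an observed proportion
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# 80% satisfied out of roughly 384 responses (illustrative numbers)
m = margin_of_error(0.80, 384)
print(f"interval: {0.80 - m:.1%} to {0.80 + m:.1%}")  # about 76.0% to 84.0%
```

The margin is largest at p = 0.5, which is why survey planners quote ±5 percent as a conservative bound.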
Posted by David McMurray on March 6th, 2015
Historically, most customer opinion/satisfaction surveys were conducted via paper. More recently, many institutions prefer to use email distribution that links to a web-based survey. Email distribution offers many advantages over paper, most notably:
- Lower costs, including printing, outgoing and return envelopes and postage, form scanning, hand data entry and written comment transcription
- Greater appeal to the many respondents who view email as more ecologically responsible
- Quicker survey responses compared to paper
At Harland Clarke Digital’s Research & Insights, we often include a web link on the printed paper survey, offering the respondent the opportunity to respond online rather than returning the paper survey, thus eliminating all the costs mentioned above. The web link can be a generic URL, but we recommend a PURL, a personalized link tied back to a specific respondent.
Looking at eight recent paper-based projects that also offered the option to respond to the survey online via a URL/PURL, the impact of the web-response option varied widely: anywhere from 5% to 34% of the respondents opted to use the URL/PURL.
So the question is: what are the benefits of offering a URL/PURL over a paper-only survey? The cost savings add up, potentially amounting to hundreds of dollars on a modest-sized project. An additional benefit of printing the URL/PURL is the speed with which the survey is completed, which means the data is available much sooner.
A URL/PURL printed on the survey may not dramatically improve the response rate, but it will not decrease it either. It will, however, lower the administration costs associated with processing the returned surveys. Based on the eight projects I examined, you can count on between 5% and 17% of respondents (34% appears to be exceptional) opting for the online response option. Consider how much money that could save you in the long run.
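To make the “hundreds of dollars” figure concrete, here is a hypothetical back-of-envelope calculation; the $1.50 per-return processing cost and the 5,000-piece mailing are illustrative assumptions, not Harland Clarke figures:

```python
# Hypothetical savings sketch: paper-processing cost avoided when a share
# of returns comes in online. Unit cost and volume are assumed values.
def online_savings(surveys_returned, online_rate, cost_per_paper_return=1.50):
    """Processing cost avoided for the returns that arrive online."""
    return surveys_returned * online_rate * cost_per_paper_return

# 5,000 returns at the observed 5%-17% online uptake
for rate in (0.05, 0.17):
    print(f"{rate:.0%} online uptake: ${online_savings(5000, rate):,.2f} saved")
```

Even at the low end of the observed uptake range, the avoided scanning, data entry and transcription costs land in the hundreds of dollars the post describes.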
Posted by Margaret Henry, Ph.D. on February 5th, 2015
As a continuation of the previous article, “Survey Question Design: The Good, Bad and Ugly,” this article addresses the pros and cons of utilizing open-ended questions in a survey. An open-ended question, also called an “infinite response question,” is an unstructured question for which predefined options or response choices are not offered. Rather, the respondent answers the question in his or her own words. This type of question is exploratory in nature and designed to collect narrative data. Open-ended questions usually begin with the words “what,” “why,” or “how.” An example of a typical open-ended question is, “What factors would lead you to recommend your bank to a family member or friend?”
The good aspects of including open-ended questions in a survey are:
- They capture the respondents’ responses in their own words
- They obtain more in-depth information since there are no limits to the possibilities of responses
- They can provide unanticipated findings
- They tend to provide greater insight
The bad aspects are:
- Open-ended questions tend to take up more room on a survey
- They most often require more time to answer
- The responses may not be relevant
- The data they provide is complex and highly time-consuming to analyze
The type of narrative response data collected from open-ended questions is called “qualitative data,” since the information cannot be measured. Rather, each response must be independently read to gather the information being provided by the respondent. Depending on the sample size, this endeavor may take a significant amount of time to complete. However, the investment is worthwhile, since the information provided is likely to offer unique and valuable information directly from the minds of your customers.
Another common approach is to transform the collected narrative data into a quantitative form that can be measured and compared. To do this, the analyst separates the narrative responses into defined groups.
Possible responses to the open-ended question example provided above might include:
- Reputation of the bank
- Great rates
- Excellent customer service
- Convenient bank locations
- Online banking capabilities
- Mobile banking capabilities
- Fraud protection
- Does not make recommendations to family members or friends
The analyst would read through all of the responses and sort each one into the appropriate group. Each response counts as one point, and each group eventually produces a count based on the number of responses assigned to it. Once this is accomplished, the responses from the open-ended question can be measured, compared and analyzed further.
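The grouping-and-counting step described above amounts to a simple tally. A sketch in Python; the group labels come from the example list above, and the coded responses themselves are hypothetical:

```python
from collections import Counter

# Hypothetical coding pass: each free-text answer has already been read
# and assigned one of the group labels from the list above.
coded_responses = [
    "Excellent customer service", "Great rates", "Excellent customer service",
    "Reputation of the bank", "Mobile banking capabilities", "Great rates",
    "Excellent customer service", "Convenient bank locations",
]

counts = Counter(coded_responses)          # one point per response, per group
for group, n in counts.most_common():      # largest groups first
    print(f"{group}: {n}")
```

Once tallied, the groups can be ranked, charted, or compared across survey waves like any other quantitative measure.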
However, this approach threatens to lose the richness of the information the open-ended question was designed to capture. Therefore, the data collected from open-ended questions is most effective when analyzed from both a qualitative and a quantitative perspective.
The next article will investigate another approach by which to capture in-depth, insightful, unanticipated and informative data using only open-ended questions — the Focus Group.
Posted by David McMurray on January 15th, 2015
In the last two months, Harland Clarke Digital’s Research & Insights worked with two financial institutions to implement a comprehensive body of surveys to measure multiple aspects of their account holders’ experiences. Typically, most financial institutions focus on just one or two surveys, but in these cases, each financial institution wanted to focus its efforts on ten areas, including teller transactions, online and mobile banking, and investment services, among others.
Initially, both clients envisioned an ongoing transactional-based approach for each survey. Ongoing transactional surveys are administered after account holders engage in a specific transaction, like opening a new account or using an ATM. From there, weekly data feeds allow Research & Insights personnel to measure account holders’ opinions and provide valuable feedback and recommendations to executives.
For some transactions, volumes were quite high; for others, they were very low, which made it difficult to provide adequate analysis. Upon consultation with Research & Insights, it was determined that some surveys would be better and simpler as single-wave surveys, rather than ongoing ones.
Ongoing transactional surveys are most effective when the focus is on the quality of the transaction and there is variability in the quality of delivery, such as every time an account holder calls the customer contact center or goes into the branch and speaks with a financial professional. This type of survey is beneficial because it provides actionable data on what is trending month-to-month, which helps monitor the quality of service delivered by multiple bank personnel.
Single-wave surveys are best when the focus is more on product features and functionality, rather than service and delivery. They are also effective when the product and service functionality is relatively static, such as the mobile banking experience, first and second mortgages and credit card services. Utilizing single-wave surveys simplifies the overall process and reduces the time and expense of an ongoing approach.
Other factors that should be taken into consideration when deciding between ongoing-transactional and single-wave surveys are:
- Sampling strategies and length of time to obtain adequate data. If transactions are not frequent, you have the option to wait and group them in one larger single wave or administer them in smaller, ongoing distributions.
- How quickly the survey should be distributed after the specific transaction. Teller transactions need to be sent quickly, while mortgage surveys could probably be sent within a month of the transaction.
- Costs. Ongoing surveys likely incur higher operational costs (downloading contact lists, smaller distributions, smaller data compilations and more numerous reports, etc.).
- Manpower. Ongoing surveys will take more administration hours.
- Survey methodology. Paper surveys benefit from larger distributions, while online and personal telephone interviews are easier to do in small batches.