Survey Question Design: The Good, Bad and Ugly

Posted on December 5th, 2014

Reaching out to customers for their opinions and feedback is vital for any successful organization. However, developing the perfect survey content can be daunting even for the most seasoned CRM or CEM team, because the survey must collect the appropriate responses in the appropriate manner to yield valid and reliable data. The most successful surveys combine closed-ended and open-ended questions, capturing quantitative as well as qualitative data.

“Good” closed-ended questions are those with specific, limited response sets. Examples include questions with Likert rating scales (e.g., 1 = Poor to 5 = Excellent), questions with “Yes/No” responses, and questions with a known, definite response set such as gender or age. These closed-ended questions are the most direct, straightforward and easily analyzed.

Now, let’s skip directly to the “ugly” closed-ended question response set. Poorly developed closed-ended questions typically involve a categorical response set in which not all possible answers are included. For example, “What do you like most about ABC organization?” with only the following responses: “Reputation; Location; Pricing.” This design may lead a participant to endorse an inaccurate response because his or her true answer is not listed, which can seriously distort the resulting data and analysis.

There still remains one level between the “good” and the “ugly”: the “bad.” The most typical mistake in creating a bad response set is usually made with good intentions. It most frequently occurs with a categorical response set and involves adding the response “Other.” This does not fully solve the problem of the “ugly” response set; it merely lets participants endorse “Other” rather than an incorrect answer. But what happens when you run a frequency analysis on such a question and find 30% of participants endorsing “Other”? How do you interpret the data when you have no idea what that “Other” response is?
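To make the problem concrete, here is a minimal sketch in Python of the kind of frequency tabulation described above. The response data is entirely hypothetical and illustrative, not from any actual survey:

```python
# Minimal sketch: tabulating a categorical survey question
# where a large share of participants chose "Other".
# All responses below are hypothetical.
from collections import Counter

responses = [
    "Reputation", "Other", "Pricing", "Other", "Location",
    "Other", "Reputation", "Pricing", "Other", "Other",
]

counts = Counter(responses)
total = len(responses)

for answer, n in counts.most_common():
    print(f"{answer}: {n / total:.0%}")
# "Other" dominates the tally, but tells us nothing about
# what those participants actually meant.
```

Half of the tally here is an uninterpretable “Other,” which is exactly the situation the question design should prevent.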

The way to morph both the bad and the ugly response sets into good ones is to add the opportunity for an open-ended response. This can be accomplished quite easily by replacing the “Other” response with “Other, please specify” and providing a space for open-ended input.
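Once that write-in text is captured, frequent “Other” answers can be recoded into categories of their own. A hypothetical sketch, again with purely illustrative data:

```python
# Hypothetical sketch: recoding "Other, please specify" answers.
# Each response is a (choice, write-in text) pair; the write-in
# is empty unless "Other" was selected. All data is illustrative.
from collections import Counter

responses = [
    ("Reputation", ""),
    ("Other", "Customer service"),
    ("Other", "Customer service"),
    ("Pricing", ""),
    ("Other", "Hours"),
]

# Promote the write-in text to the category when "Other" was chosen
recoded = [text if choice == "Other" else choice
           for choice, text in responses]

print(Counter(recoded))
```

A recurring write-in such as “Customer service” now surfaces as its own category, which can then be added to the fixed response set in the next survey revision.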

