Analyzing survey results: filtering with multiple response questions

Filtering is one of the most common and useful means of analyzing survey data. It focuses the analysis on a particular subgroup of survey respondents, such as first-time purchasers or families with children under twelve.

Analyzing survey results with filtering can become complicated when multiple response (i.e., check-all-that-apply) questions are included within the filter criteria. Consider the multiple response survey question illustrated below:

Now consider defining filter criteria with your survey analysis software, checking the McDonald’s and Wendy’s restaurant checkboxes as shown above.

The survey analyst needs to understand that the subgroup of respondents passing the filter criteria may have visited other, and potentially even all, of the other quick service restaurants. A more accurate definition of our example filter would be that the subgroup of respondents visited AT LEAST McDonald’s or Wendy’s.

Survey analysis software should equally support identifying respondents that indicated McDonald’s and Wendy’s as their ONLY visits, without the need to construct complicated filter criteria listing all of the individual restaurants.

Software tools well suited for the analysis of survey data will support more complicated filtering criteria such as “visited at least two Mexican food themed restaurants”, “visited at least one seafood themed restaurant and only seafood themed restaurants”, “visited only burger restaurants”, and “visited McDonald’s and Wendy’s but not Burger King”.
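As a sketch of these filter semantics, each respondent’s multiple response answer can be modeled as a set of selected options. The respondent data and restaurant names below are hypothetical examples, not mTAB’s implementation:

```python
# Hypothetical multiple-response data: respondent id -> set of restaurants visited
respondents = {
    1: {"McDonald's", "Wendy's", "Burger King"},
    2: {"McDonald's", "Wendy's"},
    3: {"Wendy's"},
    4: {"McDonald's", "Taco Bell"},
}

target = {"McDonald's", "Wendy's"}

# "Visited AT LEAST McDonald's or Wendy's": any overlap with the target set
at_least_any = [rid for rid, visits in respondents.items() if visits & target]

# "Visited McDonald's and Wendy's as their ONLY visits": exact set match
only_these = [rid for rid, visits in respondents.items() if visits == target]

# "Visited McDonald's and Wendy's but not Burger King": superset minus exclusion
both_not_bk = [
    rid for rid, visits in respondents.items()
    if visits >= target and "Burger King" not in visits
]

print(at_least_any)  # [1, 2, 3, 4]
print(only_these)    # [2]
print(both_not_bk)   # [2]
```

Set operations keep each criterion to a single expression, which is essentially what a multiple-response-aware filter builder does behind the scenes.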

PAI’s mTAB survey analysis software has built-in support for multiple response questions, which greatly simplifies the analysis of survey data containing this common question type.

When multiple response questions are selected for respondent filtering, mTAB automatically exposes the multiple response question filtering options that facilitate the example analyses listed above.  For a more detailed illustration of mTAB’s multiple response filtering, please visit the mTAB knowledge base article respondent filtering with multiple response questions.

Survey data analysis – Text Analytics for Net Promoter Surveys

As a consumer insights manager at a national retailer, you are responsible for understanding the wants and needs of your customers.  You’ve followed the sage advice of the Net Promoter Score (NPS) experts, and you’ve implemented an online NPS survey capturing “Would recommend..” that allows customers to leave open-ended comments.

In our prior posts we discussed how to join internal “structured” data such as retail location, transaction date/time, payment type, check amount, etc. to your survey results, segmenting the NPS scores to identify the customer segments requiring the most attention. But how do you make sense of 100,000+ open-ended customer comments?

You can segment your open-ended or “unstructured” customer comments using your “structured” data in the same manner as your NPS scores. For example, we could focus our review of customer comments on customers paying with a credit card within the Eastern region reporting an NPS score of 50 or less. Unfortunately, national retailers will likely find that they still have hundreds to thousands of comments to review even within very focused segments.

We can use text analytics to help make sense of survey comments without having to read every individual comment. Term frequency (TF), the percentage of comments in which a selected term occurs, helps us understand the important concepts underlying the comments, just as you would use the mean of a series of numbers to understand a numerical series. Term frequencies can be conveniently analyzed using a selectable tag cloud graphic, as we illustrated in a previous post.

When TF analysis is combined with segmenting the comments by our “structured” data, we can compare the term frequencies between various segments to gain an understanding of the relative importance of key concepts within each segment.
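The segment-level TF comparison described above can be sketched in a few lines of Python. The comment text and segment labels below are hypothetical examples:

```python
from collections import Counter

# Hypothetical comments for two structured-data segments
detractor_comments = ["slow checkout line", "checkout was slow", "rude staff"]
promoter_comments = ["friendly staff", "fast checkout", "friendly and helpful"]

def term_frequencies(comments):
    """Percentage of comments in which each term appears at least once."""
    counts = Counter()
    for comment in comments:
        counts.update(set(comment.lower().split()))  # count each term once per comment
    return {term: n / len(comments) for term, n in counts.items()}

detractor_tf = term_frequencies(detractor_comments)
promoter_tf = term_frequencies(promoter_comments)

# "slow" dominates detractor comments but never appears for promoters
print(detractor_tf["slow"])            # appears in 2 of 3 comments
print(promoter_tf.get("slow", 0.0))    # 0.0
```

Comparing the two dictionaries term by term surfaces the concepts that distinguish one segment from another, which is the tabular equivalent of placing two tag clouds side by side.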

We can further refine term frequency analysis by normalizing our TF terms using inverse document frequency (IDF). TF-IDF analysis goes a step beyond TF analysis by considering both the frequency of occurrence of a term within an individual comment (TF) and the term’s frequency of occurrence within either all comments or a segment of comments (IDF).

We can then use TF-IDF analysis as the starting point for determining the relative similarity of comments. This additional step allows us to call out a small set of comments, for example, the top 20 comments, that are the most representative of the larger group. These top 20 comments serve as an “executive summary” of the overall set of comments, summarizing the key topics and issues without the need to individually review the entire series of comments.
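As an illustrative sketch of this approach (not mTAB’s actual implementation), TF-IDF vectors can be scored by cosine similarity against the centroid of all comments; the comments closest to the centroid serve as the most representative ones. The helper names and sample comments are our own:

```python
import math
from collections import Counter

def tfidf_vectors(comments):
    """TF-IDF vector per comment: within-comment term frequency,
    down-weighted for terms common across all comments."""
    docs = [comment.lower().split() for comment in comments]
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {term: math.log(n / df[term]) for term in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * idf[t] for t, c in tf.items()})
    return vectors

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_representative(comments, top_n=3):
    """Rank comments by similarity to the centroid of all TF-IDF vectors."""
    vectors = tfidf_vectors(comments)
    centroid = Counter()
    for vec in vectors:
        centroid.update(vec)  # Counter.update sums the per-term weights
    ranked = sorted(zip(comments, vectors),
                    key=lambda cv: cosine(cv[1], centroid), reverse=True)
    return [comment for comment, _ in ranked[:top_n]]

comments = [
    "the checkout line was slow",
    "slow checkout and long line",
    "great produce selection",
    "checkout was very slow",
]
print(most_representative(comments, top_n=2))
```

With real data, `top_n=20` over a 100,000-comment segment yields the “executive summary” described above; production tools would add stop-word removal and stemming, which are omitted here for brevity.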

We would welcome the opportunity to help you to reveal the hidden insights within your NPS survey results.  Please follow us on Twitter to receive updates to our ongoing survey analysis series as well as more information pertaining to our mTAB survey analysis service.

Survey data analysis – using graphic visualization as a tool

We’d previously reflected on the opposing roles of visualization and presentation graphics; let’s now examine how visualization graphics can help us analyze survey data.

Visualization graphics can be used to quickly identify outliers within large quantities of data. Our brains are wired to recognize graphic differences in shape, magnitude, and direction more readily than we can recognize the equivalent differences within a table of numbers.

“Outliers” occur when the data visually rises above or below the average or the “noise” within the results. Outliers can serve as the source of the stories that an analyst constructs to offer understanding and explanation of the survey results. As an analyst you should be asking yourself “why?” when you observe an outlier.

The more data you include within your visualization, the greater your odds of observing outliers. There is nothing wrong with creating a visual “rat’s nest” of lines or bars as part of the visual analysis. Your objective is to skim the edges of the data, ignoring the bulk of data that represents the average; you are visually filtering out the noise to identify the observations that stand out from the fray.

The radar chart below is a good example of a visualization of a large number of data points (156) summarizing hundreds of thousands of survey responses. Using this radar chart format, we can identify interesting outliers at a glance, much more conveniently than we could by studying a table or even a bar chart representation of this same data.

Here we are observing the results of purchase decision importance survey questions across six different brands rated by the survey respondents.

At a glance we note that the red line, representing Brand E, displays considerably lower ratings than all other brands (i.e., “the noise”) for the respondents’ consideration of manufacturer’s reputation, prestige of the product, prior experience with the manufacturer, and technical innovations. Brand E may be a relatively new brand in the marketplace, as its purchasers did not consider reputation, prior experience, or prestige as important criteria within their decision process.

Alternatively, the blue line, representing Brand A, sits above the noise for the “fun to drive” attribute, but below the noise for the “seating capacity” and “cargo space” attributes. Brand A may represent a manufacturer of sporty products that emphasize fun over practicality.

Using this visualization, we have quickly identified outliers and have constructed hypotheses that we can then test and explore with our survey analysis drill down tools.

In future posts, we will illustrate how to summarize the information we’ve gleaned from our visualization into a presentation graphic display, allowing us to communicate the story of our data to our customers.

Enhancing Your Net Promoter Score Survey Analysis

Imagine that you are a retailer tracking your net promoter score (NPS).  Your NPS survey is administered by way of an invitation printed on your in-store POS receipt.

The point of tracking NPS is to identify how to make improvements in your service and products.  Changes in your process can then be validated by continual tracking of the NPS metric.

So how can we best identify opportunities for improvement from the NPS tracking graphic below?

We can simply observe that the NPS score has gone up or down relative to our last observation.  Assuming at least one open-ended question in addition to our net promoter survey question, we can review the open-ended or “unstructured” data from customers providing low (or high) NPS scores.

Text analytics tools, for example tools providing classification and sentiment analysis of unstructured survey responses, along with search term and tag cloud drill down tools, can significantly enhance the understanding of the unstructured survey responses.

Additional structured data, coupled with the appropriate tools for the analysis of structured and unstructured survey questions, will significantly enhance our analysis and thereby greatly increase the value of our net promoter program.

POS receipts typically include a transaction identifier that can be used to link a wealth of structured data to the survey results.  Listed below is a partial listing of possibilities:

  • Store Location
  • Transaction date/time
  • POS terminal and sales associate
  • Inventory of items purchased
  • Items purchased on sale or promotional items
  • Payment type
  • Total amount of purchase (total check)
  • Buyer profile (assuming you have a frequent buyer program)

Without adding questions to our NPS survey, we can now break down scores by region, district, zone and store, by day-part, by payment type, by check amount, by store department and potentially by frequent vs. infrequent shoppers.
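As a sketch of this kind of breakdown, assume each survey response has already been linked to its POS transaction (the field names and values below are hypothetical). NPS can then be computed per segment:

```python
from collections import defaultdict

def segment_nps(scores):
    """NPS for one segment: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical responses linked to POS data via the transaction identifier
responses = [
    {"score": 10, "region": "East", "payment": "credit"},
    {"score": 9,  "region": "East", "payment": "cash"},
    {"score": 3,  "region": "West", "payment": "credit"},
    {"score": 7,  "region": "West", "payment": "debit"},
]

# Group scores by any linked structured field -- here, region
by_region = defaultdict(list)
for r in responses:
    by_region[r["region"]].append(r["score"])

region_nps = {region: segment_nps(scores) for region, scores in by_region.items()}
print(region_nps)  # {'East': 100.0, 'West': -50.0}
```

Swapping `"region"` for `"payment"`, day-part, or store department gives each of the breakdowns listed above without adding a single survey question.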

If you would like to review your NPS program with industry experts, please sign up for a web meeting or simply visit our website to learn more about our survey databasing and analysis services.

Visualization techniques – graphic visualization vs. presentation

You’ve just completed fielding your latest survey research project, now you need to analyze the data and communicate the results.  As a complement to cross tabulation and statistical analysis of the survey data, you’ll likely incorporate graphics within your analysis and reporting process.

In fact, you should consider incorporating graphics in two different ways; using graphics as a visualization tool to assist with your analysis, and graphics as a presentation tool to clearly communicate the results of your analysis.

These two separate functions or “graphical roles” require different types of graphics as well as different methods of viewing or analyzing the graphics.

Graphic of my LinkedIn network – useful as a visualization tool, but not as a presentation tool.

Visualization graphics typically incorporate a relatively large number of data points, enabling the analyst to view respondent segments, trends or outliers in a manner that may not be obvious from the viewpoint of tabular reports.

Presentation, on the other hand, embodies the art of communication, and presentation graphics are therefore designed to quickly and persuasively depict a key point or conclusion from the analysis of the survey results.

Visualization graphics are typically “noisy”, containing lots of information and capturing several dimensions including multiple axes, quadrants, or variations of text or data point sizes.  Visualization graphics typically require careful study and consideration to identify their underlying meaning.

On the other hand, presentation graphics need to immediately convey the point that the graphic is making.  They are purposefully succinct and focused, and as such presentation graphics typically avoid multiple dimensions.  Good presentation graphics preclude the need for supporting explanatory text to convey their message.

The experienced analyst will be thinking of presentation graphics at every step of the data analysis process, including while utilizing visualization graphics as an analysis tool. The point of the analyst’s effort is ultimately to communicate the results of the analysis to busy decision makers in a manner they can readily comprehend.

Stay tuned to this blog as we explore new and interesting ways to use graphics to both analyze and present the results of survey data.

If you’d like to learn more about how to prepare survey data for analysis, please download our whitepaper  “10 essential prerequisites for survey data analysis”.

Understanding your survey’s Net Promoter Score calculation

Net Promoter Score (NPS) is a popular customer loyalty tracking metric that is frequently included within consumer surveys.

NPS is based upon a simple premise: growth of your business is directly related to your customers’ willingness to “promote” or recommend your products or services to others. Think of NPS as a “bottom line” tracking metric that summarizes your customers’ experiences and loyalty into one understandable and explainable measure.

The NPS calculation starts with survey data containing the question “On a scale of 0 to 10, what is the likelihood that you would recommend this (product or service) to a friend or relative?”

The score behind the NPS is calculated by subtracting the percentage of bottom 7 box responses (scores 0 through 6, those indicating they would be unlikely to recommend) from the percentage of top 2 box responses (scores 9 and 10, those who would be very likely to recommend), as depicted within the illustration below.
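This calculation can be sketched directly from the definition above (the sample ratings are hypothetical):

```python
def nps_score(ratings):
    """NPS from 0-10 'likelihood to recommend' ratings:
    top 2 box percentage minus bottom 7 box percentage."""
    promoters = sum(1 for r in ratings if r >= 9)   # top 2 box: 9-10
    detractors = sum(1 for r in ratings if r <= 6)  # bottom 7 box: 0-6
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical ratings: 4 promoters, 3 passives (7-8), 3 detractors
ratings = [10, 10, 9, 9, 8, 8, 7, 6, 4, 2]
print(nps_score(ratings))  # 40% - 30% = 10.0
```

Note that passives (scores 7 and 8) drop out of the subtraction but still count in the denominator, which is why a large passive group pulls the score toward zero.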

NPS scores can take on any value within the range of -100% to 100%, with 100% being the desired or objective NPS.

The point of collecting and tracking the NPS metric is to take action when the NPS data suggests room for improvement. An NPS score in isolation has limited value; perspective is required to bring meaning to the data. Perspective can be obtained by tracking the NPS score over time, by comparing it with the NPS scores of similar products or services, or by segmenting the survey data by respondent subgroups and comparing the scores of the subgroups.

Segmenting of the respondent data is made possible through secondary data that can be linked to the respondent (example: receipt transaction number linking to CRM data) or through additional structured and unstructured data presented to the respondent within the survey questionnaire.   It is extremely important to include the appropriate survey questions that will provide for a meaningful segmentation of the NPS results.

For example, you would want to know the NPS differences between heavy user and light user segments, especially if heavy users comprise the vast majority of your current sales volume.   If you don’t have a way of determining segmentation from your survey or secondary data sources, then you may be missing out on a key opportunity to gain additional insight from your NPS data.

There are many other factors that directly relate to the value of the NPS metric, such as sampling, sample size, and the percentage of respondents answering the NPS survey question.

PAI would welcome the opportunity to demonstrate how PAI’s mTAB™ service would benefit your understanding of the meaning and implications of your NPS metrics. Please visit the PAI website to schedule a no-obligation review of your current NPS program.
