An Intro to the American Customer Satisfaction Index

What do airlines, large banks and telecom corporations have in common? They are among the least-liked companies in America. How do we know? The American Customer Satisfaction Index (ACSI) tells us so. It’s the only uniform national measure of satisfaction with goods and services across a representative spectrum of industries and the public sector. The ACSI uses a patented methodology to identify the factors driving customer response and applies a formula to determine the cause-and-effect relationship between those factors and satisfaction, brand loyalty and the overall financial health of a company.

ACSI data allows companies to make informed decisions about current products and services and to project the effects of changes under consideration. It’s a tool for managers to improve satisfaction and build customer loyalty, and a means to evaluate competitors. ACSI scores also help investors evaluate a company’s present performance and future potential. Historically, stocks of companies with high ACSI scores have outperformed those of lower-scoring firms.

Developed by researchers at the University of Michigan and first published in 1994, the ACSI releases full results quarterly, with monthly updates. The survey rates satisfaction with 225 companies in 47 consumer industries and with more than 200 programs and services provided by federal agencies. Data about customer satisfaction are gathered through random telephone and email interviews with 250 customers for each company scored; in all, more than 70,000 interviews are conducted each year to generate ACSI results. Consumers respond to questions about a company by rating three factors on a 1-to-10 scale: overall satisfaction, fulfillment of expectations and comparison to an ideal product or service. Companies are chosen for scoring based on total sales and position within their industry. As company fortunes wax and wane, some are dropped from the survey and others added.

In addition to rating individual companies, the Index generates overall scores for 43 industries, 10 economic sectors plus a comprehensive national customer satisfaction score—now considered a significant metric for the health of the economy at large.

The scores from the American Customer Satisfaction Index are awaited by companies, economists, investors and government agencies alike. Some of the general conclusions gleaned from the results include:

  • Variations in customer satisfaction indicate the mood of consumers and accurately predict their readiness to buy products or services.
  • Since consumer spending makes up the majority of the national gross domestic product (GDP), spikes or dips in ACSI scores serve as an early warning of fluctuations in GDP.
  • Quality, not price, is the primary factor generating customer satisfaction in most industries scored by the ACSI.
  • High-profile mergers, acquisitions, large layoffs and other internal uncertainties degrade a company’s customer satisfaction score.
  • Service industries generally score lower on the ACSI than the manufacturing sector.

Around the world, many countries are implementing surveys based on the ACSI model. In the future, ACSI methodology may evolve from a one-nation metric to a global quantification. As national economies expand into worldwide markets, international data on consumer satisfaction and a company’s—or a country’s—relative success in fulfilling it will prove vital.

Survey data analysis: Drill beneath the Dashboard

Corporations rely on dashboards, which have become the de facto tool for monitoring all aspects of the business enterprise. This includes critical consumer-insights metrics such as Net Promoter Score and customer satisfaction, which are normally derived from survey programs. While dashboards are convenient and easy to read, they are not a replacement for a market research analyst’s “deep dive” understanding of the survey results.

Dashboards offer limited drill-down functionality, presenting a closed-end list of pre-wired data classifications, such as a region/district/zone roll-up or heavy/medium/light users of a product or service. Ultimately, the dashboard lets you view the data only in the ways the closed-end “box” permits.

In short, dashboards offer a very convenient, easy-to-use and graphically pleasing “30,000-foot” view of survey program results.

While dashboards allow senior management to observe at a glance that the train hasn’t derailed, they’re not capable of revealing new insights and greater understanding of the data.

As research professionals, we’re tasked with taking a harder look at the results of our survey programs. We’re responsible for providing the insight and discovery that cannot be obtained through dashboard tools. We need to know how to ask meaningful questions of our survey results, and to have the tools on hand to easily and conveniently obtain the answers to our questions.

Who is going to tell management “why” the train derailed after it’s reported by the dashboard?

The process of drilling into the data by asking meaningful questions, which leads to new and more refined questions, ultimately results in new insight and discovery.

It is our job, as research professionals, to continually remind senior management of our value proposition by providing the insight and understanding that is buried within our survey program results and that can only be obtained by “swimming with the data” via the drill-down process.

If you would like to learn more about the process of analyzing your survey results, please call on us to help you get started.

Survey data analysis – Text Analytics for Net Promoter Surveys

As a consumer insights manager at a national retailer, you are responsible for understanding the wants and needs of your customers. You’ve followed the sage advice of the Net Promoter Score (NPS) experts and implemented an online NPS survey that captures the “would recommend” question and allows customers to leave open-ended comments.

In our prior posts we discussed how to join internal “structured” data, such as retail location, transaction date/time, payment type and check amount, to your survey results, segmenting the NPS scores to identify the customer segments requiring the most attention. But how do you make sense of 100,000+ open-ended customer comments?

You can segment your open-ended, or “unstructured,” customer comments using your “structured” data in the same manner as your NPS scores. For example, we could focus our review on comments from customers in the Eastern region who paid with a credit card and reported an NPS score of 50 or less. Unfortunately, national retailers will likely find that they still have hundreds to thousands of comments to review even within very focused segments.
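As a minimal sketch of this kind of filtering in Python (the field names, values and comments below are invented for illustration, not drawn from any real survey):

```python
# Hypothetical responses: survey results already joined to structured
# fields (region, payment_type, nps_rating are invented names).
responses = [
    {"region": "Eastern", "payment_type": "credit", "nps_rating": 40,
     "comment": "Checkout lines were far too long."},
    {"region": "Eastern", "payment_type": "cash", "nps_rating": 80,
     "comment": "Friendly staff, quick service."},
    {"region": "Western", "payment_type": "credit", "nps_rating": 30,
     "comment": "Item I wanted was out of stock."},
]

# Focus the review: Eastern-region credit-card customers at 50 or less.
segment = [
    r["comment"] for r in responses
    if r["region"] == "Eastern"
    and r["payment_type"] == "credit"
    and r["nps_rating"] <= 50
]
print(segment)  # ['Checkout lines were far too long.']
```

The same pattern extends to any structured field you have linked to the responses; only the filter conditions change.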

We can use text analytics to help make sense of survey comments without having to read every individual comment. Term frequency (TF), the rate at which a selected term occurs, helps us understand the important concepts underlying the comments, much as the mean of a series of numbers summarizes a numerical series. Term frequencies can be conveniently analyzed using a selectable tag-cloud graphic, as illustrated in a previous post.
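A term-frequency count can be sketched with the standard library alone; the comments, tokenizer and stop-word list below are simplified placeholders for a real text-analytics pipeline:

```python
import re
from collections import Counter

comments = [
    "The checkout line was slow and the staff seemed rushed",
    "Slow checkout, but the staff were friendly",
    "Great prices and friendly staff",
]

STOP_WORDS = {"the", "and", "was", "but", "were", "seemed", "a"}

def tokenize(text):
    # Lowercase, keep word characters, drop stop words.
    return [w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOP_WORDS]

counts = Counter(w for c in comments for w in tokenize(c))

# Express each count as a share of all retained terms, analogous to
# summarizing a numeric series with its mean.
total = sum(counts.values())
term_freq = {term: n / total for term, n in counts.items()}

print(counts.most_common(3))  # "staff" leads, then "checkout" and "slow"
```

The `counts` mapping is exactly what a tag-cloud widget visualizes: font size proportional to frequency.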

When TF analysis is combined with our “structured” data segments, we can compare term frequencies between segments to understand the relative importance of key concepts within each segment.

We can further refine term frequency analysis by normalizing TF with inverse document frequency (IDF). TF-IDF analysis goes a step beyond TF analysis by considering both the frequency of a term within an individual comment (TF) and the term’s frequency of occurrence across all comments, or across a segment of comments (IDF).
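A spelled-out sketch of the TF-IDF arithmetic, standard library only (real work would more likely use a library such as scikit-learn; the comments here are invented):

```python
# TF-IDF sketch: weight = (term count / comment length) * log(N / df),
# where df is the number of comments containing the term.
import math
from collections import Counter

comments = [
    "slow checkout slow staff",
    "friendly staff great prices",
    "staff slow at checkout",
]
docs = [c.split() for c in comments]
n_docs = len(docs)

# Document frequency: in how many comments does each term appear?
df = Counter(term for d in docs for term in set(d))

def tfidf(doc):
    counts = Counter(doc)
    return {term: (n / len(doc)) * math.log(n_docs / df[term])
            for term, n in counts.items()}

weights = tfidf(docs[0])
# "staff" appears in every comment, so log(3/3) = 0 zeroes it out;
# "slow", frequent here but rarer overall, carries the most weight.
print(max(weights, key=weights.get))  # slow
```

This is the normalization described above: terms common to every comment contribute nothing, so the weights highlight what makes a comment distinctive.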

We can then use TF-IDF analysis as the starting point for determining the relative similarity of comments. This additional step allows us to call out a small set of comments, for example the top 20, that are the most representative of the larger group. These top 20 comments serve as an “executive summary” of the overall set, capturing the key topics and issues without the need to review the entire series of comments individually.
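One minimal way to sketch this similarity ranking: give each comment a TF-IDF vector, then rank comments by their average cosine similarity to the rest of the group (a top 2 here stands in for the top 20, and the comments are invented):

```python
# Representative-comment sketch: rank comments by the average cosine
# similarity of their TF-IDF vectors to every other comment.
import math
from collections import Counter

comments = [
    "checkout was slow",
    "slow checkout and long lines",
    "long lines at checkout",
    "love the bakery section",
]
docs = [c.split() for c in comments]
n = len(docs)
df = Counter(t for d in docs for t in set(d))
vocab = sorted(df)

def tfidf(doc):
    counts = Counter(doc)
    return [(counts[t] / len(doc)) * math.log(n / df[t]) for t in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vecs = [tfidf(d) for d in docs]

def avg_similarity(i):
    return sum(cosine(vecs[i], vecs[j]) for j in range(n) if j != i) / (n - 1)

ranked = sorted(range(n), key=avg_similarity, reverse=True)
print([comments[i] for i in ranked[:2]])
# The two checkout/lines comments surface; the off-topic bakery
# comment ranks last.
```

Averaging similarity against the other comments, rather than against a single centroid, keeps one-off outliers from ranking as “representative.”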

We would welcome the opportunity to help you reveal the hidden insights within your NPS survey results. Please follow us on Twitter (http://www.twitter.com/mTAB) to receive updates to our ongoing survey analysis series, as well as more information about our mTAB survey analysis service.

Enhancing Your Net Promoter Score Survey Analysis

Imagine that you are a retailer tracking your net promoter score (NPS).  Your NPS survey is administered by way of an invitation printed on your in-store POS receipt.

The point of tracking NPS is to identify how to make improvements in your service and products.  Changes in your process can then be validated by continual tracking of the NPS metric.

So how can we best identify opportunities for improvement from the NPS tracking graphic below?

We can simply observe that the NPS score has gone up or down relative to our last observation. Assuming the survey includes at least one open-ended question in addition to the net promoter question, we can also review the open-ended, or “unstructured,” responses from customers providing low (or high) NPS scores.

Text analytics tools, for example tools providing classification and sentiment analysis of unstructured survey responses, along with search term and tag cloud drill down tools, can significantly enhance the understanding of the unstructured survey responses.

Additional structured data, coupled with the appropriate tools for the analysis of structured and unstructured survey questions, will significantly enhance our analysis and thereby greatly increase the value of our net promoter program.

POS receipts typically include a transaction identifier that can be used to link a wealth of structured data to the survey results. Below is a partial list of possibilities:

  • Store Location
  • Transaction date/time
  • POS terminal and sales associate
  • Inventory of items purchased
  • Items purchased on sale or promotional items
  • Payment type
  • Total amount of purchase (total check)
  • Buyer profile (assuming you have a frequent-buyer program)

Without adding questions to our NPS survey, we can now break down scores by region, district, zone and store, by day-part, by payment type, by check amount, by store department and potentially by frequent vs. infrequent shoppers.
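A minimal sketch of one such breakdown in Python (the field names, values and ratings are invented; a production version would pull them from the POS/CRM join):

```python
# Break NPS down by a structured field (region here) that was linked
# to each response through the receipt's transaction identifier.
from collections import defaultdict

def nps(ratings):
    """Percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical responses already joined to POS data.
responses = [
    {"region": "East", "payment": "credit", "rating": 9},
    {"region": "East", "payment": "cash", "rating": 6},
    {"region": "West", "payment": "credit", "rating": 10},
    {"region": "West", "payment": "credit", "rating": 9},
    {"region": "East", "payment": "credit", "rating": 3},
]

by_region = defaultdict(list)
for r in responses:
    by_region[r["region"]].append(r["rating"])

for region, ratings in sorted(by_region.items()):
    print(region, round(nps(ratings), 1))
# East: 1 promoter, 2 detractors of 3 responses -> -33.3
# West: 2 promoters of 2 responses -> 100.0
```

Swapping the grouping key to payment type, day-part or store department gives the other breakdowns listed above without changing the survey itself.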

If you would like to review your NPS program with industry experts, please sign up for a web meeting or simply visit our website to learn more about our survey databasing and analysis services.

Understanding your survey’s Net Promoter Score calculation

Net Promoter Score (NPS) is a popular customer loyalty tracking metric that is frequently included within consumer surveys.

NPS is based on a simple premise: growth of your business is directly related to your customers’ willingness to “promote,” or recommend, your product or services to others. Think of NPS as a “bottom line” tracking metric that summarizes your customers’ experiences and loyalty in one understandable, explainable measure.

The NPS calculation starts with survey data containing the question “On a scale of 0 to 10, what is the likelihood that you would recommend this (product or service) to a friend or relative?”

The score behind the NPS is calculated by subtracting the percentage of bottom-7-box responses (scores of 0 through 6, indicating respondents unlikely to recommend) from the percentage of top-2-box responses (scores of 9 or 10, from respondents very likely to recommend), as depicted in the illustration below.

NPS scores can take any value in the range of -100% to 100%, with 100% being the desired, or objective, NPS.
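A minimal sketch of the calculation described above, using invented ratings:

```python
# NPS = % top-2-box responses (9-10) minus % bottom-7-box responses (0-6);
# middle responses (7-8) count toward the total but toward neither group.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if 9 <= r <= 10)   # top 2 boxes
    detractors = sum(1 for r in ratings if 0 <= r <= 6)   # bottom 7 boxes
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 7, 6, 5, 10, 2, 9]
print(net_promoter_score(ratings))  # 5 promoters, 3 detractors -> 20.0
```

Note how the two passive responses (8 and 7) dilute the score without shifting it in either direction.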

The point of collecting and tracking the NPS metric is to take action when the data suggest room for improvement. An NPS score in isolation has limited value; perspective is required to bring meaning to the data. Perspective can be obtained by tracking the NPS score over time, by comparing it with the NPS scores of similar products or services, or by segmenting the survey data by respondent subgroups and comparing the subgroups’ scores.

Segmenting the respondent data is made possible through secondary data that can be linked to the respondent (for example, a receipt transaction number linked to CRM data) or through additional structured and unstructured questions presented to the respondent within the survey questionnaire. It is extremely important to include survey questions that will provide for a meaningful segmentation of the NPS results.

For example, you would want to know the NPS differences between heavy user and light user segments, especially if heavy users comprise the vast majority of your current sales volume.   If you don’t have a way of determining segmentation from your survey or secondary data sources, then you may be missing out on a key opportunity to gain additional insight from your NPS data.

There are many other factors that directly relate to the value of the NPS metric, such as sampling method, sample size and the percentage of respondents answering the NPS survey question.

PAI would welcome the opportunity to demonstrate how PAI’s mTAB™ service can improve your understanding of the meaning and implications of your NPS metrics. Please visit the PAI website to schedule a no-obligation review of your current NPS program.