Technology’s Impact on Data Collection in Market Research

The widespread adoption of the internet, and now smartphones, has revolutionized nearly every industry. If you have been in the market research industry for the past few decades, you know firsthand how drastically these technological advances have changed the way researchers conduct business.

In the mid to late 80s, gathering survey data by phone and mail became the standard over surveying consumers door-to-door. Today, however, the declining popularity of landlines and traditional mail has led researchers to place more focus on online alternatives. With its ability to reach consumers at multiple touchpoints and deliver near-instant results, the internet has understandably become the dominant channel for data collection. While market researchers have a wide selection of options for collecting data, reliable research and effective analysis tools have become more vital than ever.

We know that market research technology will continue to evolve. Present and future trends point to social media and user-generated feedback, where we can analyze what consumers are saying rather than just observing them. The ability to adapt to these new trends will be an important factor in staying competitive and delivering the products and services that consumers want and need.

In spite of all the changes in market research over the past two decades, the objectives remain the same: glean insights from consumers, and respond in a manner that will increase sales and market share.

Dangers of Converting an Ordinal Scale to its Numerical Equivalent

When surveys are executed, respondents are often asked to answer according to an ordinal scale: they are asked to what extent they agree or disagree with a given statement on a scale of 1 to 5, where 1 = strongly agree, 2 = agree, 3 = neutral, 4 = disagree, and 5 = strongly disagree. This type of data, called ordinal data, is not as straightforward to analyze in your survey results analysis as it may first appear.

It is common practice to convert the answers into their numerical values and analyze the data as if the resulting numbers were simple numerical data. This conversion violates the rules for analyzing ordinal data, but in certain circumstances it is still appropriate; in others, the result of the analysis may be misleading. The difference depends on the distribution of the response data.

For example, in a survey based on an ordinal scale of 1 to 5 with 100 respondents, we might begin our survey results analysis by converting the scale points to their numerical equivalents. Before deciding to do this, however, we should examine whether the responses form a Normal (single-peaked, symmetric) distribution or not, for instance whether the responses are bi-modal, with no central tendency. Similarly, if there is an equal number of responses in each category, typical of a uniform distribution, there is again no central tendency.
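
As a rough illustration of that check, the short Python sketch below tabulates a hypothetical set of 100 responses, counts the peaks in the distribution, and shows how the mean can land near the neutral midpoint even when almost nobody chose “neutral.” The data and the simple peak-counting rule are illustrative assumptions, not part of any particular survey.

```python
# Illustrative sketch only: hypothetical Likert responses and a crude
# peak-counting rule for spotting a bi-modal (no central tendency) pattern.
from collections import Counter

responses = [1] * 30 + [2] * 10 + [3] * 5 + [4] * 15 + [5] * 40  # 100 hypothetical answers
counts = Counter(responses)

# Print a quick text histogram of the five categories.
for category in range(1, 6):
    n = counts[category]
    print(f"{category}: {'#' * n} ({n})")

# A category is a "peak" if its frequency beats both neighbours.
freqs = [counts[c] for c in range(1, 6)]
peaks = sum(
    1
    for i, f in enumerate(freqs)
    if f > (freqs[i - 1] if i > 0 else 0) and f > (freqs[i + 1] if i < 4 else 0)
)

mean = sum(responses) / len(responses)
print(f"peaks: {peaks}, mean: {mean:.2f}")
# Here the distribution is bi-modal (peaks == 2) and the mean (3.25) sits near
# the neutral midpoint even though only 5 of 100 respondents chose "neutral".
```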

It is possible, and possibly useful, to determine the mean in the case of a single-peaked symmetric distribution, but when there is no central tendency, as with bi-modal or uniform distributions, the mean is virtually meaningless. When the data are approximately Normal the risk of misanalysis is low, but if you want to avoid scale violations such as these, there are three possibilities to consider (sketched in code after the list below):

  1. Use the properties of the multinomial distribution to estimate the proportion of responses in each category and determine its standard error, or
  2. Convert the ordinal scale to a dichotomous variable and use logistic regression to assess the impact of other variables on the ordinal-scale variable, or
  3. Use rank correlation (Spearman or Kendall) to evaluate the association between ordinal-scale variables.
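
The sketch below is a minimal, illustrative pass at these three approaches on synthetic data. The variable names (satisfaction, loyalty), the “top-two box” cut-off, and the use of NumPy, SciPy and statsmodels are assumptions made for the example, not part of the original discussion.

```python
# Illustrative only: synthetic ordinal data and off-the-shelf routines for the
# three approaches listed above.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
satisfaction = rng.integers(1, 6, size=100)                            # hypothetical 1-5 item
loyalty = np.clip(satisfaction + rng.integers(-1, 2, size=100), 1, 5)  # second 1-5 item

# 1. Multinomial view: report the proportion (and its standard error) in each
#    category instead of a single mean.
n = len(satisfaction)
for category in range(1, 6):
    p = np.mean(satisfaction == category)
    se = np.sqrt(p * (1 - p) / n)
    print(f"category {category}: proportion {p:.2f} (s.e. {se:.3f})")

# 2. Dichotomize (here a "top-two box": strongly agree or agree) and fit a
#    logistic regression against another variable.
top2 = (satisfaction <= 2).astype(int)
logit = sm.Logit(top2, sm.add_constant(loyalty)).fit(disp=0)
print(logit.params)

# 3. Rank correlation between the two ordinal items.
rho, p_value = stats.spearmanr(satisfaction, loyalty)
tau, _ = stats.kendalltau(satisfaction, loyalty)
print(f"Spearman rho {rho:.2f} (p = {p_value:.3f}), Kendall tau {tau:.2f}")
```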

However, if you want to add together ordinal scale measures of related variables to give overall scores for a concept, then scale violations may be unavoidable.  Be aware, though, that if the response is anything other than approximately Normal, your survey results analysis may be misleading.

Limitations of Spreadsheets as a Tool for Analyzing Survey Data

Spreadsheets suffer from several inherent disadvantages when it comes to performing large-scale tasks such as analyzing survey data. The number of records a spreadsheet can handle at any given time is quite limited compared with what survey data typically requires, for example. Spotting trends is also a complex task for which a spreadsheet is simply not well suited.

Recent studies indicate that over 90 percent of all spreadsheets contain errors. That level of potential inaccuracy alone should preclude you from using spreadsheets as a primary tool for analyzing survey data, a task in which accuracy is imperative. Many of spreadsheets’ accuracy problems stem from the fact that they are often derived from older spreadsheets converted to perform a different function. While this tactic may save time in the short term, resources are lost when errors inevitably crop up down the line.

Types of Spreadsheet Errors

There are three primary types of spreadsheet errors that afflict users with frustration and, ultimately, incorrect or inadequate information:

  1. Stealth Errors: As their name implies, these errors are especially problematic because they are difficult to locate. Often, data can look reasonable and accurate at first glance, but upon closer inspection turns out to be wrong. The most hazardous aspect of stealth errors is that many are discovered years after they are relevant, or are never discovered at all.
  2. Outlier Errors: These occur when a spreadsheet appears operational, but the results produced are obviously inaccurate. While these are easily found, fixing the calculations or formulas responsible for the error is a tedious and time-consuming task.
  3. Friendly Errors: These are called “friendly” because the spreadsheet software discovers them for you; an error message is displayed identifying the offending formula or calculation.

While spreadsheets have secured their place in today’s marketplace, using them to analyze survey data often falls outside their scope of usefulness.

If you are interested in going beyond the spreadsheet and venturing into the realm of powerful yet flexible survey analysis software, then look no further!

Text Analytics: Summarizing the #TMRE Hashtag Traffic

The TMRE 2011 conference wrapped up late Wednesday afternoon. Several attendees were actively tweeting to the #TMRE hashtag, which many of us followed during the event.

Last evening I sat down with @dwiggen’s twapperkeeper archive to pull an RSS feed of the #TMRE hashtag for analysis. The feed as of last evening covered tweets to #TMRE from 11am Monday through 11pm Wednesday (event time). In total, 2710 tweets were captured: 513 on Monday, 1,340 on Tuesday and 817 on Wednesday.

As shown in the accompanying graphic, the most popular “tweeting times” were the 9am and 11am hours on Tuesday and Wednesday, in sync with the excitement surrounding the keynote sessions. Peak tweeting volume occurred at 11am on Tuesday.

Sure, there’s a tag cloud …
The tag cloud graphic below was created using Wordle after parsing the feed to remove sender names, hashtags, URLs, and stop words. Retweets were included in the analysis to lend additional weight to the tags. Just over 36% of total tweets were RTs.
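
For readers curious about the mechanics, here is a rough Python sketch of how a tweet archive could be cleaned before building a tag cloud. The stop-word list and sample tweets are hypothetical stand-ins; this is not the actual feed or parsing process used for the graphic below.

```python
# Illustrative sketch: strip @mentions, #hashtags, URLs and stop words, then
# count the remaining words as candidate tags for a tag cloud.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "is", "in", "for", "rt"}  # abbreviated

def clean(tweet: str) -> list[str]:
    tweet = re.sub(r"https?://\S+", " ", tweet)   # drop URLs
    tweet = re.sub(r"[@#]\w+", " ", tweet)        # drop @mentions and #hashtags
    words = re.findall(r"[a-z']+", tweet.lower())
    return [w for w in words if w not in STOP_WORDS]

tweets = [  # hypothetical examples standing in for the archived feed
    "RT @someone: Outsource process, but never outsource thinking #TMRE",
    "Great keynote on consumer insights this morning #TMRE http://example.com",
]

tag_counts = Counter(word for t in tweets for word in clean(t))
print(tag_counts.most_common(10))
```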

[Tag cloud image: most frequent terms in the #TMRE tweet stream]

“Most Prolific Tweeter” goes to …

Excluding event host TMRE, InsightsGal took the honors and was also retweeted most frequently in total, though notably VirtualMR was retweeted more often as a percentage of its own total tweets.

The two most frequently occurring RTs shared the theme of thoughtful interpretation of your survey research analysis:

  • Outsource process, but never outsource thinking. -Stan Sthanunathan of Coke
  • You can’t have breakthrough insights with people who all think the exact same way.

Taking the analysis a step further, we used an algorithm to select a small subset of tweets that is representative of the entire #TMRE tweet stream. The algorithm scores tweets based upon their weighted tag values after adjusting for tweet length (a rough sketch of this idea appears after the list below). The resulting “tweet brief” provides a quick flavor of the prominent themes within the overall #TMRE tweet stream:

  • rachel_bell44: If you don’t like change you’ll like irrelevance a whole lot less! – Heiko Schafer Henkel/Dial Corp
  • eswayne: #TMRE a lot of people ascribe Coca-Cola’s success to a TV spot, but the real power is the communities of people surrounding the brand.
  • MattIIRUSA: RT @Ali_Saland: Ask yourself, how have u made it simple 4 the brain 2 process? What have u done 2 make the consumer feel better by buying ur product
  • mg_hoban: TMRE=Take More Risks, Everyone! RT “@LoveStats: TMRE=t-tests manovas regressions experiments”
  • CandiceSeiger: RT @klonnie: #TMRE social media Anthony Barton@Intel small amount of research paranoia over SM/time to insights is far shorter with SM
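
For the curious, here is a speculative sketch of that kind of length-adjusted scoring. It reuses the clean() helper and tweets list from the tag-cloud sketch earlier in this post, and it illustrates the general idea rather than the actual algorithm we used.

```python
# Speculative illustration: score each tweet by the corpus-wide frequency of
# its cleaned words, normalized by word count (the "adjusting for tweet
# length" step), then keep the top scorers as the "tweet brief".
from collections import Counter

def build_brief(tweets: list[str], top_n: int = 5) -> list[str]:
    tag_counts = Counter(word for t in tweets for word in clean(t))

    def score(tweet: str) -> float:
        tags = clean(tweet)
        return sum(tag_counts[t] for t in tags) / len(tags) if tags else 0.0

    return sorted(tweets, key=score, reverse=True)[:top_n]

print(build_brief(tweets))  # uses the sample tweets defined in the earlier sketch
```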

I hope you enjoyed this summary. If you attended the conference this week, I expect you will recognize familiar concepts in the tag cloud and the representative tweet content.

If you’d like to see the data cut another way, I’d welcome your feedback, or contact me to discuss how we could apply this form of discovery to the results of your next survey project.

Data Analysis Tools Created with You in Mind

Today’s data analysis tools are more sophisticated and robust than ever before. But complexity is not necessarily better. If a tool is too complex, it loses efficiency until it is no longer cost effective to use. Think Space Shuttle. What is needed are user-friendly data analysis tools.

User-friendly data analysis tools are tools that you, the end user, can employ to accomplish analysis tasks quickly, easily and efficiently. They work intuitively. They do not require extensive training.

For many analysts, data visualization is the best method for understanding and communicating complex data relationships. The human mind identifies and recognizes patterns more intuitively when data are presented visually through charts and graphs than when they are presented as raw data points and lists.
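
As a simple, generic illustration of that point (not a feature of any particular product), the snippet below plots a hypothetical Likert item as a bar chart; the counts and labels are made up, and matplotlib is assumed to be available.

```python
# Illustrative only: the same five counts are far easier to compare as bars
# than as a printed list of numbers.
import matplotlib.pyplot as plt

labels = ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"]
counts = [30, 10, 5, 15, 40]  # hypothetical responses

plt.bar(labels, counts)
plt.ylabel("Number of respondents")
plt.title("Hypothetical survey item")
plt.xticks(rotation=30, ha="right")
plt.tight_layout()
plt.show()
```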

But simply presenting data visually is only half the challenge. Creating those charts and determining which ones are relevant is a time-consuming, manual task that is prone to input error.

Answering that challenge is what mTAB™ and its companion product mTABView™ have been doing for years.

Ready-to-use data

mTAB™’s sophisticated database compression technology quickly and easily turns raw data into ready-to-use data. This powerful feature means you can engage in hands-on analytics without extensive training and without relying on your vendor to do the heavy lifting of setting up and manipulating the data.

In today’s cost-sensitive business environment, many companies seek user-friendly data analysis tools that allow analysts to spend more time analyzing and less time parsing raw data, creating charts and preparing reports. mTAB™ is designed with the spreadsheet user in mind. Its user interface will be familiar to anyone who has ever used a spreadsheet, creating a synergy between the data, the interface and the analyst that results in higher productivity and deeper analysis.

Create, update, present

Once the initial analysis is done and it’s time to create a meaningful report, the task of manually generating multiple charts can be daunting to even the most experienced analyst. That’s where mTABView™ comes into play. It automates the creation and update process by linking the survey data directly with the charts: update the data, and the charts are updated automatically. There is nothing more user-friendly than automation, and the ultimate in ease of use is mTABView™’s one-click export to PowerPoint. With just one click, your tables, charts and graphs are loaded into fully customizable PowerPoint slides.

And that only scratches the surface!

PAI to co-sponsor Research Club TMRE 2011 gathering


The Research Club was started 6 years ago in London as a way to network like-minded research professionals in the fields of marketing research and strategic planning.

This November, we are lucky enough to become associated with ‘The Market Research Event 2011’ by IIR. We are very excited at the prospect of this association and look forward to a large turnout.

Research Club gatherings are very informal: no presentations or sales pitches, just the perfect way for you to meet and learn more about your market research industry peers while enjoying complimentary drinks and appetizers provided by the event sponsors.

Each Research Club event operates within the following simple principles:

  • Anyone associated with the marketing research industry is welcome.
  • No hard sell practices.
  • No speeches or formal presentations.
  • Our only objective is for you to have a good time and meet as many of your peers as possible.
  • Lastly, we ask that you spread the word about the Research Club!  Click here to view the comments of recent attendees.

Please join us as our guest at this informal gathering of your marketing research peers.

WHEN: November 8th 2011, 7:00pm – 10:00pm

WHERE: Garden Terrace, The Peabody Orlando

SPACE IS LIMITED – SO PLEASE REGISTER AS SOON AS POSSIBLE AT: www.theresearchclub.com/events/orlando/

To learn more about the Research Club, please visit www.theresearchclub.com