GM Unhappy with Facebook Ad ROI

The most anticipated tech stock in years debuted after weeks of speculation about where its IPO would finally price. Facebook’s decision to become a public company may have been the biggest news of the year to come out of the social media sector. Unfortunately for the advertising giant, disappointing news about one of the largest U.S. advertisers is making a big splash in the media world as well: automaker General Motors has decided to withdraw its $10 million spend on Facebook ads, citing subpar click-through metrics and marketing data analysis.

The burning questions all revolve around the single word ‘Why?’ While there has been lots of speculation as to why GM experienced lackluster results from Facebook ads, we wanted to share some of the key points that we feel merit a closer look.

1. Is it Facebook, or did the ads themselves contribute to the sub-par click-through performance?

Although the nature of GM’s ads likely played a part in the lack of success, Facebook doesn’t get off scot-free. According to recent studies, nearly 60% of Facebook users say they never click on ads or sponsored content. Eye-tracking research shows declining visitor interest in the advertisements shown in the Timeline feature rolled out a few months ago.

2. Is click-through the best measure of Facebook ad performance?

One important detail left out of the media reports is which other key performance indicators, if any, GM was using to measure the success of its Facebook ads. Click-through rates can be tricky when it comes to social media, which is a notably different advertising platform than search. Ad click-through rate needs to be considered alongside other metrics, depending on the marketing objectives. Other valuable metrics may include the number of Facebook fans or engagement rates.
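To see why the choice of metric matters, here is a minimal sketch (with purely hypothetical numbers) contrasting click-through rate with a broader engagement rate that also counts likes, comments, and shares:

```python
# Hypothetical numbers for illustration only -- not GM's actual figures.

def click_through_rate(clicks, impressions):
    """Clicks divided by ad impressions."""
    return clicks / impressions

def engagement_rate(clicks, likes, comments, shares, impressions):
    """All interactions divided by ad impressions."""
    return (clicks + likes + comments + shares) / impressions

impressions = 1_000_000
ctr = click_through_rate(clicks=500, impressions=impressions)
eng = engagement_rate(clicks=500, likes=3_000, comments=400,
                      shares=250, impressions=impressions)

print(f"CTR: {ctr:.3%}")         # 0.050%
print(f"Engagement: {eng:.3%}")  # 0.415%
```

An ad that looks like a failure on click-through alone can look very different once the social interactions around it are counted.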

3. Is GM more disappointed with the lack of new ad innovations for Facebook advertisers?

Facebook has been known to place more emphasis on the platform’s user experience than on advertising innovation. The social media giant has yet to prove that creating a solid advertising model is high on its list of priorities. In fact, Facebook currently does not even support advertising on smartphones or tablets, one of the fastest-growing segments for reaching consumers.

4. Did GM’s multi-agency approach to Facebook advertising play a factor in poor performance?

Facebook claims the lackluster ad performance could be due, in part, to GM’s multiple agency approach. While GM budgeted $10 million to Facebook ads, they spent $30 million on agency costs. Multiple agencies working within a single channel can lead to major inefficiencies and a disconnect in strategy. If we had more behind the scenes details on how GM’s Facebook strategy materialized, I’m relatively certain we’d discover some level of culpability here.

GM’s decision to cut ad expenditures doesn’t appear to signal a loss of confidence in Facebook itself. The auto giant plans to keep focusing on its free Facebook presence and continue engaging consumers. Perhaps Facebook should be charging for site privileges for companies that do not pay any advertising costs. Maybe we will see new Facebook user categories in the future: free or paid.

Market Research Data Barriers worth Combating

As discussed in previous posts on big data and data overload, an effective data management and analysis plan makes all the difference between actionable insights and information stockpiling. Data is vital, and all businesses depend on it to make the right decisions going forward. However, there comes a point when the data becomes too much and you are faced with data overload.

If you are in market research, you are constantly faced with the challenge of focusing on the right segments of data. It can be hard to sort through the 2.5 quintillion bytes of data created each day by Internet users alone. Therefore, it is important that you understand the barriers big data presents.

First, there are three main attributes to describe big data:

  • Volume: The amount of information on the Internet is immense, from people posting links to their favorite blogs to customers writing reviews. This data is freely available for you to turn into valuable information.
  • Velocity: The rate at which data enters the information highway is phenomenal. You have to be constantly updating trends and reevaluating assumptions.
  • Variety: Data comes from many sources and consists of both structured and unstructured data.

If you do not have an effective plan in place to manage and analyze data, it can quickly escalate into data overload. Some common barriers experienced in market research include:

  • Data paralysis: It is easy for a business to be overwhelmed by all the data at its disposal. Without an analytics program, the data becomes impossible to act on; it sits unchecked, more burden than gift.
  • Expense: Efficient software, hardware and human resources are needed to make the best use of data. For some businesses, there might not be enough resources to allow for the proper use of data.
  • Data privacy: You may be able to gather the data but be unsure how you can use it. Due to the sensitivity of certain types of data, your company may worry about litigation from the same consumers you are trying to please.
  • Real-time: Data is added to the Internet in fractions of a millisecond. It can be difficult for your business to keep up with the ever changing scope.

Despite the many challenges that come with data, the benefits of an effective data management and analysis plan pay off. You will gain visibility into your competitors, marketplace, and consumers, the visibility needed to position your business as a leader in innovation and consumer approval.

An Intro to the American Customer Satisfaction Index

What do airlines, large banks and telecom corporations have in common? They are among the least-liked companies in America.  How do we know? The American Customer Satisfaction Index (ACSI) tells us so. It’s the only uniform national measure of satisfaction with goods and services across a representative spectrum of industries and the public sector. The ACSI utilizes patented methodology to identify factors driving customer response and applies a formula to determine the cause-and-effect relationship between those factors and satisfaction, brand loyalty and overall financial health of a company.

ACSI data allows companies to reach informed decisions about current products and services and also make projections about changes under consideration. It’s a tool for managers to improve satisfaction and build customer loyalty and a means to evaluate competitors. ACSI scores also help investors evaluate the present and future potential of a company. Historically, stocks of companies with high ACSI scores outperform lower-scoring firms.

Developed by researchers at the University of Michigan and first published in 1994, the ACSI releases full results on a quarterly basis with monthly updates. The survey rates satisfaction with 225 companies in 47 consumer industries and more than 200 programs and services provided by federal agencies. Data about customer satisfaction is gathered from random telephone and email interviews, roughly 250 customers per company. To generate ACSI results, over 70,000 interviews are conducted each year. Consumers respond to questions about a company by rating three factors on a 1 to 10 scale: overall satisfaction, fulfillment of expectations, and comparison to an ideal product or service. Companies are chosen for scoring based on total sales and position within their industry. As company fortunes wax and wane, some are deleted from the survey and others added.
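For illustration only, and emphatically not the ACSI’s patented methodology: a naive index might simply average the three 1-to-10 ratings and rescale the result onto the 0-to-100 range the ACSI reports. A sketch:

```python
# Illustrative only -- NOT the ACSI's patented formula. A naive index
# that averages the three 1-10 ratings across respondents and rescales
# the result onto a 0-100 range.

def naive_satisfaction_index(ratings):
    """ratings: list of (satisfaction, expectations, vs_ideal) tuples, each 1-10."""
    factor_means = [sum(f) / len(f) for f in zip(*ratings)]
    overall = sum(factor_means) / len(factor_means)
    return (overall - 1) / 9 * 100  # map the 1..10 range onto 0..100

responses = [(9, 8, 8), (7, 7, 6), (10, 9, 9)]  # hypothetical respondents
print(round(naive_satisfaction_index(responses), 1))  # 79.0
```

The real index weights these factors through a cause-and-effect model; the point here is only how raw survey ratings become a single comparable score.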

In addition to rating individual companies, the Index generates overall scores for 43 industries, 10 economic sectors plus a comprehensive national customer satisfaction score—now considered a significant metric for the health of the economy at large.

The scores from the American Customer Satisfaction Index are awaited by companies, economists, investors and government agencies alike. Some of the general conclusions gleaned from the results include:

  • Variations in customer satisfaction indicate the mood of consumers and accurately predict their readiness to buy products or services.
  • Since consumer spending makes up the majority of the national gross domestic product (GDP), spikes or dips in ACSI scores serve as an early warning to fluctuations in GDP.
  • Quality, not price, is the primary factor generating customer satisfaction in most industries scored by the ACSI.
  • High-profile mergers, acquisitions, large layoffs and other internal uncertainties degrade a company’s customer satisfaction score.
  • Service industries are generally positioned for lower ACSI scores than the manufacturing sector.

Around the world, many countries are implementing surveys based on the ACSI model. In the future, ACSI methodology may evolve from a one-nation metric to a global quantification. As national economies expand into worldwide markets, international data on consumer satisfaction and a company’s—or a country’s— relative success in fulfilling it will prove vital.

The Challenge of Data Overload

The world seems so much bigger today than it ever was.  According to reports, more than 1.2 zettabytes of digital information was created in 2010. The availability and importance of data is increasing at staggering rates; yet, at the same time, it costs billions of dollars to control. Companies that have placed little investment into data analysis resources struggle with data overload–unable to take advantage of the information available.

Overcoming Data Overload

Most companies have no problem admitting to the paralyzing effects of having too much data. Business leaders face major challenges in the decision-making process when data overload exists. In an effort to minimize the caustic effects of data overload, it’s important to define which facts are critical to move the business forward.  Without set parameters around analyzing data, business leaders have no means of knowing what data is valuable and what data can be ignored.

The following suggestions can help make data analysis more manageable:

  • Determine your company’s information needs on a daily, weekly, or monthly basis.
  • Select the KPIs that matter most to your business.
  • Identify specific financial drivers  – such as customer satisfaction and loyalty.
  • Make information available in a visually appealing format.
  • Ensure that your analytic tools can leverage all available information.

While these are simple and effective steps to improving how your company utilizes data, they do not replace the need for high quality data analysis tools and professionals.

Determining Demand with Statistical Demand Analysis

Today we’re going to do some thinking and learn about statistical demand analysis.  Why is statistical demand analysis important? Because estimating product demand is essential to reduce the risks inherent with pricing, production and inventory decisions. A failure to accurately estimate demand can lead to over or under production, and if priced incorrectly can negatively affect profits.

How is demand determined?

We’ll talk about two types of demand: Consumer demand and Market or Industry demand.

  • Data on consumer demand is usually obtained through consumer surveys by asking how much they would purchase at a given price point (predictive) or how much they have purchased in the past (retrospective).
  • Market Demand is looked at through a series of price changes over time and analyzing the subsequent change in purchasing activity.  Finally, and the point we will focus on, using naturally observed data on price and quantity and subjecting them to statistical demand estimation.

Statistical Demand Analysis Using a Linear Regression Model

Recall that regression analysis is a statistical technique for estimating the relationship between variables. In this case, demand is modeled as a function of price, income, the price of related goods, consumer demographics, and so on. Demand, being the dependent variable, is expressed in relation to various parameters (and accompanying assumptions) explaining the relationship. This is shown by the following linear regression model:

Q = a + bP + cM + dPR + u

Q = quantity demanded (observed), the dependent variable
P, PR, M = price, price of related goods, income (observed), the independent variables
u = random error (not observed)
a, b, c, d = parameters (unknown, to be estimated)

Using regression analysis, the analyst first obtains a set of survey data (the regression data) for Q, P, PR and M. Next, a, b, c, and d are estimated. These estimations are difficult and rely on experience, industry knowledge and internal data. When done, you test the parameter estimates for significance and then use the estimated values to predict the future value of the dependent variable, i.e. demand.
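As a sketch of the estimation step (using synthetic data generated from known “true” parameters, not real market data), ordinary least squares can recover a, b, c and d from observed Q, P, M and PR:

```python
import numpy as np

# Synthetic regression data: the "true" model is known here, so we can
# check that least squares recovers it. Real work uses observed data.
rng = np.random.default_rng(0)
n = 200
P  = rng.uniform(5, 15, n)    # own price
M  = rng.uniform(30, 60, n)   # income
PR = rng.uniform(4, 12, n)    # price of related goods
u  = rng.normal(0, 2, n)      # unobserved random error

# True model: Q = 100 - 4P + 1.5M + 2PR + u
Q = 100 - 4 * P + 1.5 * M + 2 * PR + u

# Estimate a, b, c, d by ordinary least squares.
X = np.column_stack([np.ones(n), P, M, PR])  # column of ones for the intercept a
(a, b, c, d), *_ = np.linalg.lstsq(X, Q, rcond=None)
print(f"a={a:.2f} b={b:.2f} c={c:.2f} d={d:.2f}")
```

The estimates land close to the true values, and the fitted equation can then be used to predict demand at any proposed price.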

There are three types of regression data: time series, cross section and panel data. Time-series data consists of observations of the same unit over different time periods, while cross-section data consists of observations of different units at the same point in time. Panel data is a combination of both. Any of these can supply the data for Q, P, PR and M.

Specify the Demand Model

Here we’re talking about the unknown, estimated parameters. It is up to the analyst to determine whether she will rely on consumer behavior, price, income, price of related goods, size of market, tastes, price expectations, etc. Then she chooses a functional form, usually either the linear form or a power function.

Linear form looks like this:

Qx = a + bPx + cPR + dM + eN + u

The parameters in this linear function give the change in Qx for a 1-unit change in the associated variable:

b = ΔQx/ΔPx;  c = ΔQx/ΔPR;  d = ΔQx/ΔM;  e = ΔQx/ΔN

Determining demand is not an exact science and the accuracy of any prediction is based, in large part, on the experience, knowledge and industry expertise of the analyst.  This is true because of the assumptions the analyst must make in determining the unobserved and unknown variables present in the linear regression analysis.

Linear regression analysis for statistical demand estimation gives a more accurate picture of future demand than crude projections based on past sales.

We did not cover the power function, which is used frequently when the analyst believes demand is nonlinear. If anyone can tell me how the power function would look, feel free to comment. See you next time in Making Molehills out of Mountains University, Market Analysis 101.

Mind Your Measurement Scales in Market Research

Welcome to the Making Molehills out of Mountains University (MMoM U) Market Research Data Analysis 101 or MARDA 1 as we like to call it in the halls of academia.  Today we discuss the four different types of scales used in measuring behavior.  Open your books and let’s get started…

The four scales, in order of ascending power are:

  • Nominal
  • Ordinal
  • Interval and
  • Ratio

Nominal Scale

Nominal is derived from the Latin nominalis meaning “pertaining to names”.  But, seriously, who cares? That tells us nothing except how much academics love showing off.  The Nominal Scale is the lowest measurement and is used to categorize data without order.  For your market research data analysis exercise a typical nominal scale is derived from simple Yes/No questions.

How the nominal scale (and all these scales) is used statistically is for the next lecture.  For now, just know the behavior measured has no order and no distance between data points. It is simply “You like? Yes or no?”

Ordinal Scale

From the Latin ordinalis, meaning “showing order”… Enough of that.  An Ordinal Scale is simply a ranking.  Rate your preference from 1 to 5.  Careful!  There’s no distance measurement between each point.  A person may like sample A a lot, sample B a little, and C not at all and you would never know.  Here we have gross order only, learning that the subject likes A best, then B, then C.  Determining relative positional preference is a matter for the next scale.

Interval

Ah, the Interval Scale.  It’s the standard scale in market research data analysis.  Here is the 7 point scale from Dissatisfied to Satisfied, from Would Never Shop Again to Would Always Shop,  etc.  The key element in an Interval Scale is the assumption that data points are equidistant.  I realize savvy market analysts might say, “Hold on Professor. What about logarithmic metrics where the points are not equidistant?” To which I say, “Correct! but the distances are strictly defined depending on the metric used, so don’t get ahead of yourself. This is MARDA 101.”

For now, understand that with the Interval Scale, we can interpret the difference between orders of preference.  Now we can glean that Subject 1 Loves A, Somewhat Likes B and Sorta Kinda Doesn’t Like C.

Subject 2 Somewhat Likes A , Sorta Kinda Doesn’t Like B and Hates C.  Both subjects ranked the samples A, B, & C on an Ordinal Scale but for very different reasons as discovered by using the Interval Scale. Got it?  Good.

Moving on.

Ratio

The Ratio Scale is similar to the Interval Scale but is not often used in social research. Like Interval, it has equal units, but its defining characteristic is a true zero point. Ratio, at its simplest, is a measurement of length: a length of zero means the absence of length, and a negative length is impossible, hence the true zero point.
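As a small preview of the statistics lecture promised above, here is a sketch (with made-up data) of how scale power maps to permissible summary statistics: the mode is all a nominal scale supports, ordinal order adds the median, and interval distances make the mean meaningful:

```python
from statistics import mode, median, mean

# Each scale level unlocks another summary statistic (hypothetical data).
yes_no = ["yes", "no", "yes", "yes"]  # nominal: categories only
ranks  = [1, 3, 2, 1, 2]              # ordinal: order, but no distance
scores = [5, 7, 6, 4, 7]              # interval: equal distances between points

print(mode(yes_no))   # nominal supports the mode -> "yes"
print(median(ranks))  # ordinal adds the median   -> 2
print(mean(scores))   # interval adds the mean    -> 5.8
```

Running a mean on ordinal ranks is the classic mistake this distinction guards against: the number comes out, but it means nothing.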

To sum up, I leave you with the chart below, indicating which measures each scale supports.

              Difference   Direction of   Amount of    Absolute
                           Difference     Difference   Zero
  Nominal         X
  Ordinal         X             X
  Interval        X             X              X
  Ratio           X             X              X           X

The Difference Between Causation and Correlation Research

Correlational research can tell you who buys your products, but it may or may not tell you why. For example: Let’s say that you are trying to sell instant meals. If your research tells you that working mothers buy more instant meals, you cannot draw the conclusion that being a working mother causes people to purchase instant meals. The purchase of instant meals could be due to a third factor such as being too busy to cook, having extra money, or trying to control calorie intake.

More complex correlational research studies can help you narrow down the causation, but you still cannot draw a conclusion from them with absolute certainty. In fact, the more factors you add, the more confused you may become. Let’s say you discover that only single working mothers purchase instant meals and that married working mothers rely on their spouses to cook. Should you try to market instant meals to the spouses of working mothers, or could there be yet another factor involved that is still eluding you?

Knowing for sure

If you really want to know whether being a working mother causes people to purchase more instant meals, you need to conduct a controlled experiment. This means that you would need to have a large pool of subjects, randomly assign them to be working mothers, and monitor their purchasing habits. Unfortunately, we cannot ethically force people to be or not to be working mothers. This is a lifestyle choice, and an experiment where we let the subjects choose their own treatment would not be an experiment at all.

A good way to dig deeper in your research without dealing with ethical issues is to ask an open-ended question. It may take you more time to read through all of the responses, but with the right software you can make the process much faster. You can find out how often a certain keyword or phrase is used to determine why people purchase instant meals. If a common response is, “I am a working mother,” you can be reasonably certain that being a working mother causes people to purchase instant meals. While this is still technically a correlational research study, it can give you more useful information than a survey in which you only allow simple, discrete responses.
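The keyword-counting approach described above can be sketched in a few lines (the responses and phrases here are hypothetical):

```python
from collections import Counter

# Count how often key phrases appear in open-ended survey responses
# (hypothetical data for illustration).
responses = [
    "I am a working mother and have no time to cook",
    "Instant meals help me control calories",
    "I am a working mother, so dinner has to be fast",
    "They are cheap and quick",
]

phrases = ["working mother", "calorie", "cheap", "no time"]
counts = Counter()
for text in responses:
    for phrase in phrases:
        if phrase in text.lower():
            counts[phrase] += 1

print(counts.most_common())  # "working mother" leads with 2 mentions
```

Dedicated text-analysis software does this at scale with stemming and synonym handling, but the underlying idea is just frequency counting over open-ended text.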

Extend Market Research with Technology

For your business to flourish, it needs enough customers purchasing the product or service you offer. Analyzing your customer base and then directing your market research efforts at those specific customers is essential, as is having a clear view of how your expected customer base will improve your chances of success. By defining your target consumer, you will:

  • identify the potential customer base for your business
  • refine your business strategy and products to meet your customers’ needs
  • aim your marketing at your most promising prospects
  • tailor your marketing messages appropriately

With its capacity to reach consumers at various touch points and return nearly instantaneous results, it is no secret why market research has taken to the Internet so well. Even though market researchers have a broad range of options for gathering data, reliable research and efficient analysis tools have become more crucial than ever. It is no revelation that market research technology will continue to evolve. Current and emerging trends point toward social media and user-generated feedback, where you can evaluate what consumers are saying rather than only watching them.

Your ability to adapt to new market research technology trends will be a vital component in remaining competitive and providing the products and services consumers are looking for. Technologies such as telepresence, neuroscience, eye tracking, and implicit association analysis will also influence marketing research going forward. For instance, companies have used telepresence to reach isolated and hard-to-find respondents: voice-activated video with audio, concurrently shared laptop screens, and the ability to confer with multiple locations at once create a remote-participation environment.

A dilemma with the various types of market research technology is that they cannot answer the question of why one brand has a stronger presence than another. That kind of intrinsic question must still be answered by people. Neuroscience has lately been used to answer market research questions in ways that social media and traditional methods cannot. However, predictive analytics, eye tracking, association measures, and neuroscience-driven systems will never substitute for speaking with customers to understand what they truly feel and think when presented with brands and other marketing elements.

4 Main Types of Segmentation in Market Research Analysis

Segmentation is the process of dividing potential markets or consumers into specific groups.  Market research analysis using segmentation is a basic component of any marketing effort. It provides a basis upon which business decision makers maximize profitability by focusing their company’s efforts and resources on those market segments most favorable to their goals.

There are four main types of segmentation used in market research analysis: a priori, usage, attitudinal and need.

a priori (most commonly used)

a priori is defined as relating to knowledge that proceeds from theoretical deduction rather than from observation or experience. For purposes of market research analysis, this means making certain assumptions about different groups that are generally accepted as pertaining to those groups. For example, deducing that adults over 50 are not as tech savvy as twenty-somethings is a safe assumption, based on the reasoning that high-tech devices were not as widely available to the older generation as they are to the younger. However, be careful to check your assumptions, since they can change over time. In 30 years, that statement may no longer be true.

Usage Segmentation (also used frequently)

Usage segmentation is completed by either decile or Pareto analysis. The former splits customers into ten equal groups, while the latter divides them into the top 20% and the remaining 80%. Usage segmentation drills down more deeply than a priori segmentation because it indicates which a priori group contains the heaviest users of your product.
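The two splits can be sketched as follows (the customer spend figures are hypothetical):

```python
# Decile analysis cuts customers into ten (near-)equal groups; Pareto
# analysis separates the top 20% of users from the remaining 80%.
spend = sorted([120, 45, 300, 80, 95, 410, 60, 150, 220, 75], reverse=True)

def deciles(values):
    """Split a list (sorted heaviest-first) into ten near-equal groups."""
    k, m = divmod(len(values), 10)
    out, start = [], 0
    for i in range(10):
        size = k + (1 if i < m else 0)  # spread any remainder evenly
        out.append(values[start:start + size])
        start += size
    return out

def pareto_split(values):
    """Top 20% of users vs. the remaining 80% (values sorted heaviest-first)."""
    cut = max(1, round(len(values) * 0.2))
    return values[:cut], values[cut:]

top, rest = pareto_split(spend)
print("top 20% spend:", top)                              # [410, 300]
print("their share:", round(sum(top) / sum(spend), 2))    # 0.46
```

Even in this tiny example, the top two customers account for nearly half the spend, which is exactly the signal Pareto analysis is looking for.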

Attitudinal (Cluster analysis)

Using cluster analysis to create customer psychological profiles is difficult because it is limited by the input data used.  Demographic data is the least helpful, whereas preference data (scaling) is better suited toward this type of analysis.

However, once a usage segmentation is created, it’s exceptionally helpful to know the motivating factors behind the purchasing decisions of the heaviest users of your product.

Needs Based Segmentation

Needs based segmentation is the concept that the market can be divided based on customer need.  This type of analysis is used to develop products that sell rather than trying to sell products a business developed.

Needs based segmentation uses conjoint analysis to separate the groups according to functional performance. Using cluster analysis, its goal is to determine the driving forces behind the performance data.

Knowing which segmentation to use is often as critical as the analysis itself because it is driven by cost and the stated business goals of the decision makers.

Presenting Analytics to the C-Level Executive

For many analysts, presenting data analysis and reporting to C-level executives is challenging and often frustrating. The company invests a lot of time, money and effort in the process. The data is clean, the proper conclusions are drawn and the information is communicated and visually supported by charts, graphs and tables in a bang-up presentation. So why is it often ignored or given little weight in the decision-making process?

The answer is found in the disparity between the mindset of the analyst and that of the executive. The analyst looks at data, cleans it, looks at it again, thinks about the data, thinks about the problem, looks at the data again and comes to a conclusion. The decision is driven by the data, not by the analyst. It is very logical. Not so with C-level executives.

To say that C-level execs make decisions based solely on gut feeling or past experience is to do them a disservice. Although this often appears to be the case, most execs make decisions, whether they realize it or not, based on the axiom “what’s good for the company is good for me,” provided (this is important) the decision is one they can personally benefit from (an increase in power, acknowledged success, etc.) or one they can deflect if it proves detrimental. Data analysis and logic have little to do with it.

As an analyst you have a greater chance of C-level execs accepting your data analysis and reporting if you understand these 5 rules.

  1. Less is more.  State your conclusion first but don’t say how you came to that conclusion.
  2. Stay on point. Don’t provide information that doesn’t directly support your conclusion.
  3. Numbers are boring. Execs will only “hear” a number that has a dollar sign in front of it, so translate your key metrics into dollars and leave the rest out.
  4. Speak in plain English.  No jargon. If you must use a technical term explain it briefly and succinctly (without using other terms to describe that one!)
  5. Give’em what they want, leave’em wanting more. Present charts and graphs only if the exec asks for them (have them in a packet if presenting to a group).

Although the data is clear, let the exec ask the questions that lead them to the conclusion you already know. It empowers them, allowing them to decide what’s best for the company rather than having the data force their hand. Being heard will empower you.