CX.AI – Experience the new generation of CX Insights


Does it Really Cost Five Times More to Acquire than to Retain a Customer?


Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: January 25, 2022 * 5 min read

Since the late 1980s, several sources have claimed that retaining a customer is 5X cheaper than acquiring one. Later, a Harvard Business Review article, "The Profitable Art of Service Recovery," restated the myth with a simple calculation based on a set of assumptions. Since then, management consultants have been quoting this thought experiment as if it were scientific proof. The theory behind it is so intuitively appealing that it has stood unchallenged for more than 30 years. To my knowledge, it has never been scientifically validated.
It Even Poses the Wrong Question

What are the costs of retaining a customer? Isn’t having an acceptable product/service and acceptable support typically what retains them? 

Most customers are inert. They change only if there is a strong reason.

How much of your core service can really be attributed to retention?

The same goes for customer acquisition. A share of customers come through word of mouth or find you on their own by researching hard enough.

How much of new customer business is really the outcome of an investment in customer acquisition (marketing and sales)?

This comparison is not just unfair. It is not even relevant.

Why? A theory is only relevant when it informs a decision. Are you really considering closing customer support or marketing and sales entirely?

No, you aren't. The decision is whether to invest more or less in retention or acquisition.

This means: comparing customer acquisition costs with retention costs answers the wrong question.


It's not about costs, it's about ROI

Is it a win when a bank gains two retail customers but loses one affluent client?

It's a huge loss.

Of course, it depends on what kind of customers you acquire or retain and which products you sell to new vs. existing clients.

So when doing the math, it's only worth the work if you look at both sides of the coin.

The right question to pose, then, is:

"Does Retaining an Additional Customer Have Five Times the ROI of Acquiring One?"


Do Not Trade Off CX Against Acquisition

"How well does a knife cut meat?" Answer: it depends. Blunt knives do not cut at all. "What's the impact of advertising?" Answer: it depends. Bad ads have no impact at all.

What’s the impact of customer retention initiatives?

At CX.AI, we have measured the ROI of CX several times for clients and compared it with the ROI of customer acquisition.

When modeling on real customer data, you can quantify the impact of certain retention or acquisition actions.

With this, you can calculate the ROI of those actions. Here is what we learned:

Customer loyalty initiatives vary widely in their ROI. A typical range is anything between 0 and 10X payback on the investment.

Customer acquisition ROI is much better understood. As a rule of thumb, 5X is what can be expected. But in practice, results easily vary from 0 to 10X as well.

As you can imagine, the actual picture heavily depends on the domain and industry. Businesses in high-growth markets have much lower customer acquisition costs than businesses in saturated markets.

The same is true for customer loyalty. According to research from the world's largest marketing institute, Ehrenberg-Bass, market leaders always have higher loyalty. If not, they do not serve the same market.
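To make the comparison concrete, here is a minimal sketch of the ROI logic in Python. The numbers are purely hypothetical; in practice, the incremental-profit figures would come from a causal model fitted on your own customer data.

```python
# Illustrative only: hypothetical numbers, not client data.
def roi(incremental_profit: float, investment: float) -> float:
    """Payback as a multiple of the money invested."""
    return incremental_profit / investment

# Hypothetical retention initiative: a $200k program modeled to protect
# $700k of contribution margin that would otherwise churn.
retention_roi = roi(incremental_profit=700_000, investment=200_000)      # 3.5x

# Hypothetical acquisition campaign: $300k spend modeled to win new
# customers worth $1.5m in contribution margin.
acquisition_roi = roi(incremental_profit=1_500_000, investment=300_000)  # 5.0x

print(f"Retention ROI: {retention_roi:.1f}x | Acquisition ROI: {acquisition_roi:.1f}x")
```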


Does CX Make Any Sense Then?

If driving loyalty is not necessarily more impactful than acquisition, and if increasing market share always comes with higher loyalty, is managing your customer experience really important?

The simple answer is: YES.

It doesn’t need a comparison with marketing and sales.

What it takes is an estimate of the ROI (plus the risk profile) of potential CX initiatives. (This is what CX.AI has now integrated as a feature.)

Senior leadership should then ask for the ROI of marketing and sales initiatives in the same way it asks for CX ROI, in order to pick the highest-ROI strategies.

The simple decision logic is to pick the initiatives with the highest ROI-to-risk ratio and not to simply assume that CX or marketing has an impact. It only has an effect if done with mastery.

I recently hosted a webinar on this topic. Happy to share the link to the recording (frank@cx-ai.com).

If you want to dive deeper, our CX Analytics Masters Course takes five hours to guide you through the how-tos and tricks of the trade.

# # #

What is your take? 

What is missing from this article?

Let me know and I will improve 😉

Thanks so much,

Frank (frank@cx-ai.com)

"CX Analytics Masters" Course


P.S. Would you like to get the complete & interactive FREE CX Measurement Guidance for your business in 2021?

Simply subscribe to the free "CX ANALYTICS MASTERS" course below and enjoy the above-mentioned training guidance in its Class #1.

“Solves key challenges in CX analytics”


My TOP 3 CX Learnings from 2021


Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: January 9, 2022 * 5 min read

I run the world's #1 CX Analytics course, but I still learn new things about CX every year. Our amazing customers are the ones who challenge us with real-world problems that need an answer. Their curiosity, persistence, and the faith they put in us let us find solutions and keep me learning.
Here I am sharing my favorite three eureka moments of the year with you.


LEARNING #1: Reporting raw feedback to the frontline results in wrong learnings

Every second piece of SONOS customer feedback mentioned the great sound as the reason for loyalty. The feedback was transparently distributed throughout the organization.

Reading it, everyone in the organization learned the obvious: focusing on sound quality was key, and that was where the investments needed to go.

This was a fact. Wasn’t it?

Unfortunately, facts do not equal truth.

Sure, most customers mention the great sound. But this is simply what pops up in your mind when you, as a customer, get asked an NPS question. In any domain, customers are biased toward mentioning the generic, defining property of the product.

Restaurant customers mention "great taste." Do you think McDonald's is the market leader because of its taste?

For washing machines, customers mention "washes well," while actually nearly all devices wash well.

Customers praise "great service" for service businesses, while airlines with worse service, like Delta or Ryanair, show the largest growth.

It takes a driver analysis – best done with so-called "causal machine learning" – to understand which customer topics truly matter.

Many enterprises have already run some driver analysis. But providing customer feedback "as is" to the frontline will make this analysis redundant.

Because the frontline will read the customer feedback and extract its own – wrong – lessons.

If it is wrong for the insights department to look at the frequency of mentions, it is equally wrong to look at individual cases, simply because all the reader does is implicitly count topics.

I discuss the problem and suggest a solution in a separate article on fixing the inner loop.


LEARNING #2: The No. 1 pain point of CX insights professionals is getting leadership buy-in

When consuming content on industry platforms or at conferences, you could get the impression that having the best tools is the key to becoming a successful insights professional.

While this might be true, it is not the most significant pain point of client-side researchers.

In May 2021, we did a large industry study interviewing CX insights professionals. We asked many questions and started and ended with open-ended questions on which issue moved them most.

The result was overwhelming. It’s the challenge to get leadership buy-in, move the organization, and get other departments to act on insights.

This means Insights does not have a tool problem. It has a self-marketing problem.

Part of it is tailoring the service to become more relevant to the internal audience. Do stakeholders want and need an insights report? Or would they rather get an evidence-based recommendation for the next action?


LEARNING #3: Great insights are mostly not converted into great decisions.

SONOS found in its CX survey that the new Voice Assistant was still not very widely used. But driver analysis found that those who used it had become huge SONOS fans.

The insight was clear: increase awareness and adoption of the voice assistant feature.

But the decision behind it is a multimillion-dollar advertising investment. The insight is worthless if you cannot predict whether the investment will pay off.

Instead, those decisions are left to genius and expert judgment.

SONOS used the causal machine learning model to predict the impact on NPS and the impact of NPS on churn and sales. The brand could see that the investment should instead be made in another topic.

As insights professionals, we are by nature too self-focused.

We believe great insights have value on their own, and the right decision is just a consequence.

But if we do not apply the same rigor with which the insight was born to the whole decision-making process, all insights are wasted.

(By the way, this is the reason why we at CX.AI implemented the ADIM functionality – read more here.)

# # #

What are your top 3 learnings of the year? 

Love to hear from you! Reach out over email or LinkedIn.

Cheers, 

Frank (LinkedIn | Email)

"CX Standpoint" Newsletter


Each month I share a well-researched standpoint around CX, Insights and Analytics in my newsletter.

+4000 insights professionals read this bi-weekly for a reason.

I’d love you to join.

“It’s short, sweet, and practical.”


The 5 Deadliest Mistakes in Business Decision Making


Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: December 15, 2021 * 7 min read

We all know that decision-makers struggle with many cognitive biases. For some reason, we believe only "the other guy" has a problem with this – what we believe ourselves feels so true. It turns out that if you know what goes wrong in your thinking, you can circumvent its downsides.

The 5 deadliest mistakes are:

#1 Using your gut and common sense when dealing with small or large likelihoods

The first mistake stems from the human tendency to misjudge very small and very large probabilities.

This is based on the work of Daniel Kahneman (who won the Nobel Prize in Economics for it). Together with Tversky, he documented our incapacity to handle small and large probabilities. Their work shows that we are loss-averse (paying more to avoid an unlikely loss – insurance) and risk-seeking (paying for an unlikely high gain – the lottery effect) at the same time.

Let's take an example: what happens when you are in price negotiations?

While negotiating, there is always a risk of not getting a deal. The typical decision of negotiators now is to lower the price to raise the likelihood of winning the deal. The phenomenon behind this is the (irrational) loss-aversion of the negotiator. 

Actually, our brain is quite bad with numbers. Do you know how much more dangerous it is to drive a car than to fly in an airplane?

100,000 TIMES!

Why, then, are 42% of people afraid of flying but not of driving a car?

Be aware of the loss-aversion and risk-seeking tendencies of humans. Don't leave the decision to your gut – it is biased. Develop a decision calculus instead.


#2 Believing that true risk can be measured from past data

The second mistake is a misconception of risk. Risk management is seen as a procedure of taking past data and calculating likelihoods from it.

When done without software, we over- or underestimate those likelihoods.

But the true mistake lies in the belief that past data CAN measure future risks.

Half of the stock market's movement over the past 50 years happened on just 10 days. The financial crisis of 2007 was so obvious – just after it happened.

Look at the famous Kodak or Nokia cases. Things happen that you cannot even think of.

This is the risk. A true risk is something unknown – not expected.

It is easy to protect against threats that happened in the past. Because of this, they are not risks anymore.

Unexpected shocks can instead be managed by becoming anti-fragile and robust against unknown challenges. Any living creature is built that way. If you lose an eye, an ear, a finger, or a lung, you can still survive.

If you need to run 10 miles every day, you get stronger and more robust.

This is what businesses need to do to manage risk: become robust and strong enough to withstand any weather.

#3 Considering selected facts and case studies as proof

The third mistake is all about case studies. How can you convince decision-makers? Sure, use case studies.

To prove a theory, it makes sense to provide evidence, give facts, and show case studies.

The problem: anyone can cherry-pick the facts and case studies that fit the theory. If you believe in facts alone, you will likely become a victim of snake-oil storytellers.

You see it in the current vaccination debate – both sides, pro- and anti-vaxxers, show examples of people who died because of the virus or because of the vaccine. If you see victims lying in a hospital or doctors fighting for lives, you often don't need a second "case study."

It is dangerous to use singular cases to prove a hypothesis.

Instead, it takes validation on a larger, representatively drawn sample – and machine learning to find out what actually holds.


#4 Being unaware that your worldview is biased

The fourth mistake could be called the "Truman Show syndrome." It is the tendency of humans to overestimate their own opinion. Science refers to it as confirmation bias.

People search for examples and specific data. In this search, our unconscious brain brings information to our attention that is "relevant" to us (the cocktail party effect).

Relevant is everything that is in our favor or supports our own theories. This validates your existing beliefs and makes them even stronger the more you inform yourself – without you being aware of the effect.

As a result, each of us is basically a "Truman" in our own show. The real world is different.

Humility about your own opinion is useful, because in the end you can manage your future more successfully if you know the truth – not just if you feel great about your opinion.

#5 Inferring Causality from Correlation

The fifth mistake is the famous correlation. Humans grow up using correlation as their method of learning. In many cases, this works perfectly well.

If you hit a nail with a hammer, there is one cause, which you can even control yourself, and you see the impact right away. Here, correlation mirrors causality.

In such cases – a limited number of causes and results that follow shortly after the cause – correlation is a perfect methodology for finding out what works and what doesn't: trial and error!

Unfortunately, the business world is different. There are plenty of important causes and context variables. Even worse, business decisions can take a long time to show their impact.

Learning about cause and effect in these circumstances takes data about drivers, context, mediators, and outcomes. And it takes causal modeling analysis.


Your pathway to better decision making

No matter what you do in marketing and sales, if your assumptions and insights are biased, all your work – strategies, tactics, and implementation – can be wasted.

Wise business leaders know the cognitive biases, and this is what they do:

#1 – Trust a decision calculus, not common sense, when dealing with low (and high) likelihoods

#2 – Know that the true risks are threats you are not aware of (the Black Swan effect)

#3 – Avoid case studies and selective facts; instead, analyze representatively sampled sets of facts

#4 – Review your information-seeking process and actively seek out theories that challenge your own

#5 – Avoid concluding from correlations and instead aim to perform causal modeling – most practically, use causal machine learning

There is an emerging technology readily available and already intensively tested. It provides a solution to those challenges: Causal Machine Learning and Causal AI.

It requires a causal mindset to make use of it. It requires you to understand that everything decision-makers are looking for is causal insights—the invisible link between actions and outcomes.

What are your thoughts on this? 

Do you want to engage in an exchange? Reach out, and let’s meet on a virtual coffee chat: book your spot here.

 

Cheers, 

Frank (connect here)

"CX Analytics Masters" Course

b2

P.S. Would you like to get the complete & interactive FREE CX Measurement Guidance for your business in 2021?

Simply subscribe on the free “CX ANALYTICS MASTERS” course below and enjoy the above-mentioned training guidance in its Class # 1.

“Solves key challenges in CX analytics”


How CMOs Should Lead Data Science


Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: November 30, 2021 * 9 min read

Data science is seen as the new magician in enterprises. Business leaders think they just need to shout into the basement where "hordes" of data scientists sit, and soon after, these will spit out the magic formula by reading the crystal ball of AI. Silo thinking like this is neither useful nor needed. Having learned the proper framework, a business leader can win any discussion with data scientists using the holistic intelligence that data science does not have.
The Problem With Data Science

There are unlimited ways to deal with data. There are even endless ways to set up a neural network, so it's easy to get hung up on complexity. Actually, most data scientists get stuck in "local minima" – data science slang for a sub-optimal solution.

Lost in complexity, it's easy to lose sight of business outcomes.

Only if business leaders know what they really need can they manage data science wisely.

These are the three challenges both sides need to become clear about:

  1. Aligning on the difference between data and insights. Facts and data are too often used as synonyms for truth.

  2. Understanding how to gain insights that work. The universe of modeling techniques is infinite. It needs a clear North Star to get insights that drive business outcomes.

  3. Setting requirements for data science methods: Once you know how to build an analysis approach that drives results, it should be clear which criteria the actual modeling technique needs to comply with.

Here we go.

Introducing the Stairway of Truth concept to understand the difference between data, facts, and truth


Stairway of Truth

The stairway of truth starts where most people believe it ends: facts.

     1. Facts

If you see in the news that a plane has crashed, you learn one thing: it's dangerous to fly. 42% of people are anxious about flying, while 2% have a clinical disorder.

This is a fact but not the truth. Flying is safer than driving a car by a factor of 100,000.

When US bombers came back in World War Two, the army analyzed where the bombers had been hit and reinforced the armor at those spots.

They acted on facts, but the initiative was useless because the analysis did not uncover the truth.

It's impossible to understand why bombers do not come back without analyzing those that don't come back.

In the same way, it's impossible to understand why customers churn if you only analyze churners. It could be that churners and remaining customers complain about the same things.

     2. Sample

What it takes instead is a representative selection of facts. Facts are just particular snapshots of the truth, like single pixels out of a picture. Each may be true, but alone it is meaningless.

In 1936, one of the most extensive polls ever made it to the news. 2.4 million Americans had been surveyed, sampled largely from telephone directories. The prediction was overwhelming: Roosevelt would be the apparent loser with only about 40% of the votes.

In the end, Roosevelt won with nearly 60%. How could polling fail so miserably?

The sample was not representative. At that time, telephone owners had greater financial means, and this correlated with a lower likelihood of voting for the Democrats.

A sample of pixels can paint a picture. But if the pixels are drawn from just one side of the picture, you are likely to read a different "truth."

      3. Correlation

The journey to truth does not end at a well-sampled “picture”. Why? Ask yourself, what do business leaders really want to learn?

Which is more interesting: "What is your precise market share?" or "How can you increase market share?"

The first question asks for an aggregated picture from facts.

The second asks for an invisible insight that must be inferred from facts. It is the question of what causes outcomes.

“Age correlates with buying lottery tickets” – From this correlation, many lottery businesses still conclude today that older people are more receptive to playing the lottery.

The intuitive method of learning about causes is correlation. It is what humans do day in, day out. It works well in environments where effects follow shortly after the cause and where, at the same time, only one cause is changing.

It often works well in engineering, craftsmanship, and administration.

It works miserably for anything complex. Marketing is complex, sales is complex, HR is complex.

"Complex" means that many things influence results. Even worse, the effects come with heavy time lags.

Back to the lottery. The truth is that younger people are more likely to start buying lottery tickets. Why, then, do older people play more often? Purchasing a lottery ticket is a habit. Habits form over time (= age). This is amplified by the experience of winning, which again is a function of time.

     4. Modeling

To fight spurious correlation, science developed multivariate modeling. The simplest form is a multivariate regression.

The idea: if many things influence at the same time, you need to look at all possible drivers at the same time to single out the individual contribution.
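As a minimal illustration (not the author's code), the sketch below simulates two correlated drivers of which only one is truly causal. A pairwise correlation flags both, while a multivariate regression singles out the individual contributions. All variable names and numbers are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000
ad_spend = rng.normal(size=n)
# Promo activity tends to happen together with ad spend, but has no effect here.
promo = 0.8 * ad_spend + 0.6 * rng.normal(size=n)
sales = 2.0 * ad_spend + rng.normal(size=n)

# Pairwise correlation makes the non-causal driver look relevant.
print(np.corrcoef(promo, sales)[0, 1])            # clearly positive

# A multivariate regression looks at both drivers at once and attributes
# the effect correctly (roughly 2.0 for ad_spend, ~0.0 for promo).
model = LinearRegression().fit(np.column_stack([ad_spend, promo]), sales)
print(model.coef_)
```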

The limitation of conventional multivariate statistical methods is that they rely on rigid assumptions, such as:

  • No interactions: all drivers work independently of each other

  • Linearity: "the more, the better"

  • All drivers have the same distribution

Sure, many advancements have been developed, but you always needed to know the specifics upfront: what kind of nonlinearity to expect, what interacts with what, and how.

No surprise that this turned out to be highly impractical. Businesses get challenges and need to solve them within weeks, not years.

     5. Causal Modeling

It turns out that the majority of business questions concern the causes of success.

When you want to drive business impact, you need to search for causal truth. Science, academia, statistics, and data science shy away from "causality" like a cat from water.

Because you can never conclusively prove causality, they feel safer neglecting it. They can ignore it because they are not measured by business impact.

All conventional modeling shares a further fundamental flaw: the belief in input-output logic. At best, this measures direct, not indirect, causal impact.

Causal modeling uses a network of effects, not just input versus output. Further, it provides methods to test the causal direction.

     6. Causal AI

Causal AI now combines causal modeling with machine learning. This has huge consequences for the power of the insights. It eliminates the limitations that modeling has always had.


Causal Insights Triangle

Equipped with machine learning, causal modeling becomes much more manageable and thus more practical.

The causal insights triangle gives you the framework for how to build your model.

Let's go through each component and illustrate it with marketing mix modeling (MMM).

First, define what measures the desired outcomes. In MMM this would be the sales figures per day or week.

Then collect on a blank sheet what drives and influences those outcomes. This list can be bucketed into three parts.

First, there are the drivers. In MMM, this would be the spending per marketing channel on a particular day or week. Drivers are variables that are independent of the other variables in the set.

Second are the mediators. In MMM, this would be brand awareness or share of the consideration set in a given week or month.

Mediators are variables that drive outcomes but are also influenced by drivers.

Third are the context variables. In MMM, this would be the power of the creative to drive impact, the type of creative, the region at hand, its demographic profile, etc.

Context variables are moderating factors that you may not be able to influence but that affect how the drivers work.

The good thing is that you don’t need to know how those variables influence others. You can even use any type of data, as long as it has numbers.

With the selection of data and the categorization into the 4 buckets, you have infused your prior knowledge about causal directions into the model.

The rest is up for causal machine learning to find out.
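As a minimal sketch of what such a categorization could look like in practice (column names are hypothetical, and this is not CX.AI's actual interface), the buckets might be written down like this before handing the data to the modeling step:

```python
# Illustrative model specification for an MMM; names are made up.
mmm_model_spec = {
    "outcome":   ["weekly_sales"],                        # what you want to explain
    "drivers":   ["tv_spend", "search_spend",             # independent inputs
                  "social_spend", "ooh_spend"],
    "mediators": ["brand_awareness",                      # influenced by drivers,
                  "consideration_share"],                 # and driving the outcome
    "context":   ["creative_quality_score", "region",     # moderate how drivers work,
                  "season", "competitor_promo"],          # usually not controllable
}

# This categorization is the prior causal knowledge you infuse into the model;
# the functional form (nonlinearities, interactions) is left for the
# causal machine learning step to discover.
```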

Causal AI Wheel

The concept of the Causal AI Wheel illustrates why it's not enough to use conventional causal modeling techniques.

Three quick examples illustrate the need:

Unknown nonlinearity: A pharma company found that giving product samples to physicians drives sales. But with causal machine learning, we found that providing too many samples will REDUCE sales. After the fact it is obvious: of course, too many samples substitute for prescriptions.

Unknown interactions: In CPG, purchase intention for new products correlates zero with future success. But with causal machine learning, we found that five other success factors need to be in place at the SAME time.

Unknown confounders: Many companies see that the NPS rating correlates zero with future fiscal impact or churn. At an insurance brand, this was because more critical customer segments (which give per se lower ratings) buy even more once they are loyal. This underlying effect can be accounted for by integrating segment or demographic information into the modeling.

Here, in a nutshell, is how machine learning differs from conventional modeling:

Hold a book up in a room. The floor's two dimensions symbolize your two drivers, and the height of each point of the book stands for the outcome. The tilt of the book along its two axes represents the parameters of your model. The fitting process tries to position the book's plane in space so that it approximates the data points, which hang like stars in the room's three-dimensional space.

Machine learning does the same, but it uses a flexible book, or hyperplane. It's like a tissue or a book made of modeling clay. It can be bent to match the data points better.

The fewer data points you have, the more rigid and less flexible it gets to avoid overfitting.

This flexibility solves a lot of problems.
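A minimal sketch of that difference, using standard scikit-learn estimators rather than the USM/NEUSREL implementation: the same simulated data is fitted once with a rigid linear plane and once with a flexible learner.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 2))
# Saturating, interacting response: more spend is NOT always better.
y = np.tanh(X[:, 0]) * X[:, 1] + rng.normal(scale=0.2, size=500)

linear = LinearRegression().fit(X, y)              # rigid plane
flexible = GradientBoostingRegressor().fit(X, y)   # bendable surface

print("linear R^2:  ", round(linear.score(X, y), 2))
print("flexible R^2:", round(flexible.score(X, y), 2))
# The flexible model captures the unknown nonlinearity and interaction; with
# few data points, stronger regularization keeps it from overfitting.
```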

Legacy techniques instead are restricted by

  • Linearity assumptions “the more, the better”

  • Independence assumption “any TV ad always has the same impact”

  • A testing mindset: "we assume all assumptions are true, just fit parameters, and hope for good fit metrics" instead of finding new hypotheses in the data

With machine learning now, we can explore previously unknown nonlinearities and previously unknown interactions.

We can even now combine ANY quantitative data into one model. This capability eliminates confounder risks and will improve the likelihood that findings are indeed causal.


This is How CMOs Win Any Discussion with Data Science

As a marketing leader, you don’t need to know the methodological solution. But you need to ask the right questions and be careful not to get wrapped into plausible-sounding storytelling.

Here are the questions you can ask:

  • Someone presents facts as proof of something → ask: "How do we know these facts are representative of the truth and not just exceptions?"

  • Someone presents descriptive statistics about a representative sample of facts → ask: "How do we know that our conclusion will really influence the desired outcomes?"

  • Someone compares winners vs. losers or presents another kind of correlation analysis → ask: "How do you know this is not a spurious correlation?"

  • Someone presents a driver analysis outcome → ask: "How can we make the model more realistic, especially by considering indirect effects such as brand, changing attitudes, and other long-term effects?"

  • Someone presents a driver or SEM model → ask: "How can we avoid confounder risk? (If external context influences drivers AND outcomes, it will skew the results.)"

  • Someone presents driver, SEM, or Bayesian net results → ask: "How can we make sure results are not skewed by things we do not know of, such as nonlinearities or interactions?"

  • Someone responds "that's not possible" → reply: "I read this article from Frank at Success Drivers. He wrote it is possible. Shall we ask him?"

As a marketing leader, you have the responsibility. Data scientists are just consultants. When there is no impact, they do not care much. You do.

It is like a naïve patient who runs the risk of getting unnecessary treatments: it's not the doctor's life that is at stake.

In a complex world, it’s not enough to check results based on plausibility. It’s easy to build a plausible story from random data. Plausibility is simply not a good indicator of truth.

Instead, challenge data science and challenge marketing science to “think causal”. Challenge them to use Machine Learning to help you learn from data instead of just testing made-up hypotheses.

Here is a good read for you if the topic interests you further. We send you the hard copy book free of charge if you are an enterprise client.

Stay curious and …

… reach out to me with questions

Frank 

"CX Analytics Masters" Course

b2

P.S. Would you like to get the complete & interactive FREE CX Measurement Guidance for your business in 2021?

Simply subscribe on the free “CX ANALYTICS MASTERS” course below and enjoy the above-mentioned training guidance in its Class # 1.

“Solves key challenges in CX analytics”


Compete on Insights


Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: November 15, 2021 * 12 min read

The success of strategic initiatives relies on three things: the analysis, a solution based on those insights, and implementation of the strategy. It can fail at any point in the cascade. But the insights are the Achilles’ heel.

There is a twofold irony: First, all resulting investments are wasted if the insights are wrong. Second, you can see if an implementation fails and if a strategy lacks consistency and rigor. But you can NOT “see” whether an insight is valid. Any insight can be turned into a plausible story.

Most CEOs and CMOs are not aware of this. It is my observation that this irony is the main reason for stagnation and the bottleneck for growth.

Let me give you some examples of typical failures I see in enterprises.


FAIL #1 – Believing What Customers Say

👉 EXAMPLE: CUSTOMER EXPERIENCE INSIGHTS

Customer experience research has a long tradition, and the latest trend is simplification using the NPS framework: one rating and an open-ended question are supposed to measure the level of loyalty and the reasons why.

Really? 

It turns out that the most often mentioned topics in open-ended, unstructured feedback in most cases are not the most important ones.

Actually, frequency does not correlate with importance AT ALL.

But this violates the fundamental assumptions of 91% of all CX measurement programs.

The consequence is that companies prioritize topics that they should not.

Even worse, such feedback is forwarded to the customer-facing employees. By reading this, those employees learn that the most frequent topics are the most important ones.

Wrong knowledge percolates throughout the company.

How to fix this? Below is more.

FAIL #2 – Focusing on Outcomes

👉 EXAMPLE: CREATIVE INSIGHTS

Advertising research has two parts. Part one is how to spend ad budgets. Part two is how to optimize the creative.

Independent meta-studies from the ARF have shown that at least 70% of ad impact can be attributed to creative quality.

Nearly all efforts of advertisers to assess creatives rely on some kind of measuring technique.

There is the classic copy test that asks for responses and purchase intent after exposure. And there are highly elaborate neuroscientific and biofeedback procedures. All of this can make a lot of sense and can be useful.

But what the industry is not appreciating is that FACT does not equal TRUTH. 

Measurement just produces facts. What brands, however, need to know is WHAT TO DO to achieve a particular outcome. This if-then link is a question about causality between actions and results. It can NOT be measured. 

It can only be inferred. The science behind this is called “causal analysis”.

The hunt for success strategies is handed to storytellers and creative “geniuses” instead of proper analysis. 

How to fix this? Below is more.


FAIL #3 – Over-Simplification:

👉 EXAMPLE: NEW PRODUCT INSIGHTS

95% of CPG (FMCG) product launches do not survive the first year. This is despite brands investing billions in customer research. How can this be?

Yes, it isn’t easy. A product needs to appeal to customers before and after buying it. It needs distribution, shopper marketing, a strong brand to be recognized, and a fair price to get a yes.

We looked at all CPG grocery product launches of a year with data for all those components. Each component hardly correlates with success. The correlation of purchase intent with survival is nearly ZERO. Same with brand, product use scoring and even pricing.

Proper causal machine learning revealed that the long-tailed distribution of launch success could only be explained by one phenomenon: All success components do not add up to success; they multiply, they depend on each other.

One failure in one discipline, and you are out.

With the available data today, future blockbuster products can be predicted with 80 to 90 percent hit rate. 

Not only that. Causal machine learning can give hints on how to architect those blockbusters.

How exactly? Below is more.

FAIL #4 – Gut Over Rigor:

👉FAILS IN PRICING

Pricing today either uses gut or rigor. But it takes both to make a difference.

Most common is an explicit question to the customer (namely the Van Westendorp scale or Gabor-Granger). It delivers a plausible indicator of an optimal price range. But plausibility is not a good validation of truth.

It neither provides a price-demand curve nor considers margins. As such, it is useless. It speaks to the gut of the decision-maker but fails to deliver rigor and validity.

The opposite is true for conjoint measurement. Based on multiple complex choice tasks, an algorithm can derive the utility of each product feature. Based on a market simulation, it then produces a price-demand curve.

This approach falls flat for very different reasons. Mostly, the modeling assumptions turn out to be unrealistic. Often consumers would choose a "wait & see" option that conjoint models typically leave out – and thereby far overestimate market demand.

The other downside: conjoint is so complex and costly that it is only applied to handpicked products.

Every sizable car manufacturer makes billions in revenue with parts that have never been empirically priced. The same is true for most consumer brands with many SKUs.

How to fix this? Below is more on a solution called Price.AI.

FAIL #5 – Linear Mindset Instead Of Causal Thinking

👉 EXAMPLE: SALES, MARKETING, MEDIA MIX MODELING

Large brands spend huge marketing budgets and so want to know where to invest.

This question focuses mostly on the short-term impact of advertising, but the bulk of the impact is due to brand building and can only be found in the long run.

Ads of strong brands show larger short-term impact, no matter where or how you advertise. Ads of weak brands can show no short- or medium-term impact but a huge long-term impact by building brand equity.

In truth, long- and short-term effects must be modeled in one go to avoid misattribution.

On top of this, the world is even more complicated. Complexity is not managed by just hiring the market-leading MMM vendor. It is managed by involving the vendors with the best technology.

An example? A drug brand had product sampling as one promotion channel. Every legacy model (even those that capture nonlinearity) had found "the more, the better." Only the proper method was flexible enough to see that – of course – too many samples substitute for prescriptions.

Channels also amplify each other; some only complement each other. Mathematicians call this "interactions." As those interactions are unknown and invisible, it takes flexible learning machines to unearth them.

How to fix this? Below is more.

 

👉 EXAMPLE: Price promotion effect

It is a long-known trap. Still, most companies fall for it.

Price promotions clearly show sales uplifts. That’s a fact.

This sales uplift ALWAYS is a composition of:

  1. Customers who bought because of this promotion
  2. Customers who would have purchased it anyway but later -at a higher price
  3. Customers who would have bought it earlier but waited because they knew the promotion would come (the Black Friday effect)

If you do not quantify all three, you do NOT understand whether or not the price promotion made sense.

How to fix this? Below is more.

 

👉 EXAMPLE: BRAND POSITIONING INSIGHTS

My last example of the negative impact of a linear mindset, as opposed to causal thinking, is T-Mobile USA.

In 2013, the brand had a huge relaunch in the USA, attacking its huge competitors AT&T and Verizon. It worked, but T-Mobile did not know why.

Each feature they introduced could even be easily copied.

A revolutionary methodology found something hidden: the features were not the direct causal reason; they were the perfect supporting evidence for T-Mobile's Robin Hood story. This story (being the "Un-carrier") was what attracted people.

Fast forward: other features were implemented over time to nurture this winning positioning.

The impact has become world-famous. Today T-Mobile is on par with AT&T and Verizon, with exceptional profitability and a +600% market valuation, while AT&T declined.

How to extract your winning market factors? Below is more.


The Solution: A Causal Insight Mindset

The solution to all those examples is not simply "better tech." It takes problem awareness to see what "better" means.

With this, the ultimate challenge in enterprises is cultivating an ongoing discussion on causality and how to read the truth from data.

Everyone believes he can read the truth by looking at data. Our gut fools us most of the time. We do nothing about it.

A company that cultivates a mindset of humbleness and awareness about the art it takes to read the truth from data will be able to single out the best tech.

It takes leadership and education to make such a culture happen.

The education piece is obvious. Every manager needs to learn the 101 of gaining the truth from data. Such a training piece builds on simple insights:

Every business decision builds on this CAUSAL assumption: Action X will lead to Outcome Y. 

As such, we MUST apply “causal analysis”. This is either controlled experiments or causal modeling.

Period.

Causal Modeling In Action

Let’s review approaches to learn about causal impacts

  • Comparing facts (e.g., males earn 20% more than females) – Is gender truly driving the income difference? Maybe, maybe not. This is a binary correlation analysis and has the same drawback as standard correlation analysis: spurious correlations.

  • Correlation – neglects all other factors that could be a reason for the outcome

  • Regression – now considers other factors but fails to model indirect, nonlinear, and interaction effects

  • SEM/PLS – now also considers indirect effects but still fails to model nonlinear and interaction effects. On top of this, it fails to provide exploration features, something elementary for business applications.

  • Bayesian nets – now explore causal directions too, but fail to model nonlinear and interaction effects.

  • USM (causal machine learning) – now the most complete framework for business applications (available as the software NEUSREL)

Here is how USM and Causal Machine Learning can help your business to compete on insights.

Causal Machine Learning IN ACTION

👉EXAMPLE: CUSTOMER EXPERIENCE INSIGHTS

Every company has it – customer feedback such as an NPS rating or stars on Amazon. Most then ask an open-ended question why. That's all you need.

First, make sure to categorize the feedback into the topics mentioned, as granularly as possible. NLP deep learning systems can help to scale this.

Now causal machine learning can unfold its magic. The categorized feedback comes in as binary variables. Text AI also produces sentiment information that measures the overall tone of the language. Context information can serve as additional predictors.

Causal machine learning can take care of so-called intermediary variables too. Besides the sentiment, a category like "great service" is such an intermediary variable, as it is driven by more specific ones like "friendliness."

The model can then find out that friendliness is the key behind "great service." A conventional driver analysis would have totally missed the importance of friendliness because the categories are not independent.

On average, Causal Machine Learning doubles the explanatory power of conventional driver analysis. This means it reduces the risks of wrong decisions by 50%.
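As a minimal sketch of the idea (not the CX.AI pipeline; topic names and ratings below are made up), categorized feedback enters a driver model as binary topic indicators predicting the rating:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical categorized feedback: 1 = topic mentioned, 0 = not mentioned.
feedback = pd.DataFrame({
    "mentions_great_service": [1, 0, 1, 1, 0, 1, 0, 0],
    "mentions_friendliness":  [1, 0, 0, 1, 0, 1, 0, 0],
    "mentions_price":         [0, 1, 0, 0, 1, 0, 1, 0],
    "nps":                    [9, 3, 7, 10, 2, 9, 4, 6],
})

X = feedback.drop(columns="nps")
driver_model = LinearRegression().fit(X, feedback["nps"])

# Rough importance of each topic for the NPS rating; a causal machine learning
# model would additionally handle intermediary variables such as
# "great service" being driven by "friendliness".
print(dict(zip(X.columns, driver_model.coef_.round(2))))
```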

CX.AI is a solution that leverages causal machine learning and provides CX and fiscal impact predictions as well as an ROI decision-making framework.

 

👉 EXAMPLE: CREATIVE INSIGHTS

To understand how a commercial will succeed, it is not enough to measure how well it performs (this is the focus of copytesting today). Instead, you also need to measure what it does.

In a large syndicated study, we annotated (categorized) over 600 spots across 6 product categories to describe what the TV spots actually do.

Do they use a spokesperson? Do they use a problem-solution framework? Do they use a song that corresponds to the core message? We coded the technical properties of each spot.

Then we coded the emotional message each spot conveys. Each spot can be categorized into one of dozens of topics like "it tastes good," "it can be trusted," "good for the family," etc.

This data is then merged with copy-testing data. With this data, causal machine learning can now understand which tactics and which emotional messages work in your category.

We called the approach Causal.AI. It cannot invent the actual creative concept. But it gives clear guardrails about which strategies will work and which won't.

 

👉 EXAMPLE: NEW PRODUCT INSIGHTS

When launching a new product, much can go wrong. Distribution, brand, packaging, promotion, first product experience, pricing – all of this needs to be good enough. It's a success chain in which the weakest link determines winning or losing.

Each step on its own, as well as all of them together, is an application for causal machine learning.

Before this, you typically want to test a product concept and learn WHY it is not yet winning over the crowd.

Test the concept with implicit response measures and then get feedback on the classic eight dimensions of product adoption. It will tell you what consumers think about the product but not (yet) why they don't buy it.

It takes a causal machine learning model to measure how important those dimensions are.

We ran the process for a new speaker concept. We learned that the most crucial thing for marketers to look at was communicating why it was different from the competition (the uniqueness dimension).

Each product has its topics. It could be ease of use, appeal, utility, certainty, trust, or compatibility with the consumers’ lives.

Applying USM (causal machine learning) is essential to translate data into predictive insights that work.

 

👉EXAMPLE PRICING

Price.AI is a methodology that borrows methods from psychology to measure unconscious attitudes at lightning speed.

It bypasses the conscious mind by measuring reaction times on whether the shown price is fair, risky, or attractive, or on "want to buy," and so forth. An AI is then trained to predict the willingness to buy.

This AI helps to capture the attribute's nonlinear link to purchase and lowers the required sample size.

In the end, the method delivers an accurate price demand function. It can be retrieved in an automated process with as low as 50 respondents. As such, pricing becomes not only precise but also scalable.
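A minimal sketch of the general idea, not the actual Price.AI method: fit the probability of a positive "want to buy" reaction as a function of the shown price and read off a price-demand curve. All data below is simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated respondents: each sees one price and reacts "want to buy" or not.
shown_price = np.array([4.9, 5.9, 6.9, 7.9, 8.9, 9.9] * 10)
wants_to_buy = (np.random.default_rng(2).uniform(size=shown_price.size)
                < 1 / (1 + np.exp(shown_price - 7.5))).astype(int)

demand_model = LogisticRegression().fit(shown_price.reshape(-1, 1), wants_to_buy)

price_grid = np.linspace(4, 11, 8).reshape(-1, 1)
buy_prob = demand_model.predict_proba(price_grid)[:, 1]   # the price-demand curve
revenue = price_grid.ravel() * buy_prob
print(price_grid.ravel()[revenue.argmax()])                # revenue-maximizing price
```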

 

👉 EXAMPLE: SALES, MARKETING, MEDIA MIX MODELING

An MMM model based on causal machine learning solves all the problems mentioned above.

It automatically models channel interactions and nonlinear effects, especially those nobody is aware of.

Most importantly, it considers the indirect effect. The brand-building effect is an indirect causal effect. Any MMM model should include indicators of brand strength.

It also considers the biggest context and confounding factor: the creative quality. There is no ad impact if the ad is bad, no matter how much money you pour into the channel.

You don't have data for that? If you do copy testing, you do. Nowadays, you can even buy such information or train a deep-learning AI to predict it.

These are 5 questions you should challenge your MMM vendor with:

https://www.success-drivers.com/what-can-marketing-mix-modelling-provide/

 

👉 EXAMPLE: Price promotion effect

Understanding the impact of price promotion is a natural outcome of a holistic sales model. 

Causal machine learning enables holistic models with ease while adding predictive power at the same time.

Conceptually, sales must be modeled as an outcome of the price at the time (the price effect), the price of the past (the early-purchase effect), the price of the future (the promotion-anticipation effect), and all other circumstances.

If the price of the future or the past predicts today's sales, we have proof that purchases merely shifted in time due to pricing.
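A minimal sketch of that conceptual model (hypothetical weekly data, ordinary regression instead of a full causal machine learning setup): the current price plus lagged and lead prices predict sales, and any weight on the lagged or lead price signals shifted rather than incremental purchases.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Made-up weekly data: price 8 marks promotion weeks.
weekly = pd.DataFrame({"price": [10, 10, 8, 10, 10, 8, 10, 10, 10, 8, 10, 10],
                       "sales": [100, 95, 180, 60, 98, 175, 65, 99, 96, 185, 58, 97]})

weekly["price_lag"]  = weekly["price"].shift(1)    # price of the past
weekly["price_lead"] = weekly["price"].shift(-1)   # price of the future
weekly = weekly.dropna()

X = weekly[["price", "price_lag", "price_lead"]]
model = LinearRegression().fit(X, weekly["sales"])
print(dict(zip(X.columns, model.coef_.round(1))))
# If price_lag / price_lead carry real explanatory weight, part of the promo
# uplift is demand shifted in time rather than incremental sales.
```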

 

👉 EXAMPLE: BRAND POSITIONING INSIGHTS

Brand positioning is a vast field. Depending on the approach, you may actually measure different things. 

No matter what you are measuring, these data can be grouped into final outcomes (e.g., purchase intention), intermediate outcomes (e.g., consideration, awareness, liking, etc.), drivers (image items, features, feature perceptions, etc.), and context (demographics, product usage, psychography, etc.).

The causal directions between variables are known for 95% of the paths based on marketing science. Causal direction tests can test the rest. This structure guides the model building.

Causal Machine Learning then does the legwork. 

The whole details of the T-Mobile case can be found here.

Winning With Better Insights

No matter what you do in marketing and sales, if your assumptions and insights about the customers are biased ….

….all your work, strategies, tactics, implementation work, and ad spending will be wasted.

This is why there is nothing more important than getting insights right from the start.

The most common misconceptions and false beliefs are these 5 fails:

FAIL #1 – BELIEVING WHAT CUSTOMERS SAY

FAIL #2 – FOCUSING ON OUTCOMES

FAIL #3 – OVER-SIMPLIFICATION 

FAIL #4 – GUT OVER RIGOR 

FAIL #5 – LINEAR MINDSET INSTEAD OF CAUSAL THINKING 

There is an emerging technology readily available and already intensively tested. It provides a solution to those challenges: Causal Machine Learning, Causal AI, or USM.

It requires a causal mindset. It requires you to understand that everything relevant that decision-makers are looking for is causal insights. 

What are your thoughts on this? 

Do you want to engage in an exchange? Reach out, and let’s meet on a virtual coffee chat: book your spot here.

Cheers, 

Frank (connect here)

"CX Analytics Masters" Course

b2

P.S. Would you like to get the complete & interactive FREE CX Measurement Guidance for your business in 2021?

Simply subscribe on the free “CX ANALYTICS MASTERS” course below and enjoy the above-mentioned training guidance in its Class # 1.

“Solves key challenges in CX analytics”


Escape from La La Land


Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: October 29, 2021 * 9 min read

It's all around us: facts. They make us feel safe, like Truman felt safe in his fake world. Yet the real world is different from what we think, because biased samples of facts lead to wrong knowledge. In this article, I'd like to illustrate that nearly everything we learn in business and in private life is severely biased: everything we hear from customers, employees, and the market. The good news is: we can do something about it.

It is one of the fundamentals every insights professional knows: biased samples can lead to biased results. Nothing new.

Nothing new? When Hiram Bingham discovered Machu Picchu in 1911, it was not new either. The locals knew the spot well, but they underestimated its role and importance.

The same goes for "filter bias." It is not only all around us; it has a – sometimes – devastating effect on what we believe about the world.

Take social media. The terms "filter bubble" and "echo chamber" are widely known for the impact that a filter can have. Social feed algorithms learn which content leads to engagement. They then show you only the content that is engaging for you.

Engaging content is most likely in line with your opinion, and it is most likely negative and alarming. The result is a polarization of unbalanced views and a fearful worldview.

In this LinkedIn article, I describe the background in more detail and propose how feed algorithms can be optimized to stop filter bubbles.

But for now, I'd like to shed light on the impact on businesses of underestimating filter bias.


Churner feedback

What would happen if you asked non-churners a similar question, like "What needs to be improved?" What if they mention the same topics as churners? Would you still believe that what churners most often mention is the true motivator for churn?

Churner feedback is biased by design. It builds on the assumption that churners know and articulate the true reason for their churn.

This assumption is more or less broken.

It can only be validated with an unbiased sample of churners and non-churners.

Customer experience feedback

Most companies do it: asking an NPS or satisfaction question and then asking WHY. Companies then assume that what customers say is unbiased and can be taken as-is.

They assume customers are willing and able to reliably articulate why they are loyal.

This assumption is broken too.

Human brains are notoriously lazy. If there is no strong incentive, customers’ brains spit out instant associations instead of well-thought-out replies. 

This is why restaurant customers most often mention "great taste," insurance customers "great service," and speaker customers "great sound."

It’s often an instant reaction without deeper rational processing. 

The process and context of interviewing itself provide a bias that can turn results upside down. Here is an article that goes deeper on this and how to unbias results.

Inner Loop of CX feedback

The “feed” of customer feedback is biased like a Facebook feed. 

Most companies are sending this feed to the frontline. It should enable the frontline to learn what customers think.

But it does not do the job. Instead, the frontline will learn something else. It will learn what customers spit out by instinct when asked. But not necessarily what will make them happy or more loyal.

CX feedback needs a causal analysis (or some kind of driver analysis) to judge its importance reliably. This article discusses how to solve the issue.


Public Ratings

Public ratings on Amazon, Google Maps, Google Play, Trustpilot, Capterra, and much more are a great and free source of feedback for many businesses and local branches.

Yes, sure, it is biased in multiple ways:

  1. Extremes: It takes some frustration or excitement to take the time and give unsolicited feedback. Certainly, the concept has a bias towards the extremes.

  2. Fake reviews: Competitors have an interest in bombarding your reputation, and some businesses manage their reviews themselves in their favor.

  3. Duplicates: Providers like Bazaarvoice collect feedback from different e-commerce sites and repost it automatically (if positive) to other e-commerce sites. This leads to heavy duplication and bias in topic frequencies.

This means the sampling of rating feedback is largely biased.

What is not biased by these three filters is the relationship between rating and explanation. This still enables you to understand the impact of topics using causal or driver analysis.

Social listening

Social listing uses public conversions to measure what people talk about to indicate what is going on. 

But Social data is even more biased. 

80% of social conversions are of non-human origin. Those conversations that are human, are biased by the feed algorithm. This algorithm determines the reach of posting. Some opinions may be predicted to be less engaging and thus will get fewer eyeballs.

Compared to rating feedback, social feedback is harder to debias because the reference rating is missing.

One approach is to use an existing brand tracker and machine learning algorithms to find the unknown link between social conversations (aggregated per time and region) and the brand tracking results. This website gives more details.
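As a rough illustration of that idea, the sketch below learns the link between weekly aggregated social topic volumes and a brand-tracker KPI. The data, the column names, and the choice of gradient boosting are all assumptions made for the example, not the actual CX.AI implementation.

```python
# Sketch: learn the unknown link between aggregated social conversations
# (topic volumes per week) and brand-tracking results. Synthetic data only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
weeks = 120
social = pd.DataFrame({
    "mentions_service": rng.poisson(200, weeks),
    "mentions_price":   rng.poisson(150, weeks),
    "mentions_outage":  rng.poisson(30, weeks),
})
# Hypothetical brand-tracker score, here mostly driven by outage chatter
tracker = 60 - 0.3 * social["mentions_outage"] + rng.normal(0, 2, weeks)

model = GradientBoostingRegressor().fit(social, tracker)
imp = permutation_importance(model, social, tracker, n_repeats=20, random_state=0)
for col, importance in zip(social.columns, imp.importances_mean):
    print(f"{col}: importance {importance:.3f}")
```

The fitted link, rather than the raw volume of mentions, tells you which social topics actually move the tracked brand KPI.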

Leadership

Are you managing a team? Certainly, you ask for feedback regularly. Did you ever realize how biased this is?

I made this observation myself in my previous life as a Marketing Director. My team reported to me what great things they did and what mistakes the other departments were making. 

The information flow to upper management is like a Facebook feed. It is optimized for engagement. It’s certainly not optimized for you to learn the truth.

A biased feed of facts will inevitably result in wrong opinions.

Managers without a direct line to the front line (or other elaborate ways of truth discovery) will entirely lose grounding. 

I know many people who would support this with anecdotal evidence. I am sure you do too.

The phenomenon is also one reason why companies like McDonald's have upper management work on the front line regularly.

Beyond this, it takes an Employee Experience feedback program that applies the same analytical rigor and uncovers the true impact of topics in unstructured feedback.


This Is Your Exit From La La Land

The most important step has already been done: you read this article. Being aware of the phenomenon will give you healthy doubts and an awareness of the biases all around us.

Practical ways to tame these bias effects typically involve modeling work: biased feedback serves as the predictors, and an objective outcome serves as the benchmark. With machine learning, we can find the link between input and output, a link that is unknown precisely because the bias is unknown.
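A tiny sketch of this logic, with invented column names and logistic regression standing in for the machine-learning model: the biased, stated feedback acts as the predictor, and an objective outcome (here, whether the customer actually stayed) acts as the benchmark.

```python
# Sketch: use biased stated feedback as predictors and an objective outcome
# (did the customer actually stay?) as the benchmark. Illustrative data only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "mentions_service": [1, 0, 1, 1, 0, 0, 1, 0],   # biased, stated feedback
    "mentions_price":   [0, 1, 0, 1, 1, 0, 0, 1],
    "retained":         [1, 0, 1, 1, 0, 1, 1, 0],   # objective outcome, not a survey answer
})

X = df[["mentions_service", "mentions_price"]]
model = LogisticRegression().fit(X, df["retained"])
print(dict(zip(X.columns, model.coef_[0].round(2))))
```

The coefficients tell you how much each stated topic actually relates to the objective outcome, regardless of how loudly customers voice it.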

If you want to dive into more cutting-edge CX thinking, the CX Analytics Masters Course is for you. It's free for enterprise insights professionals. If you are looking to discuss some of the advanced techniques mentioned above with an expert, reach out at www.cx-ai.com.

Now I have a question: Was this article helpful? Please DM me directly with any comments or questions.

Frank


How To Avoid Unstable CX Scores?

How To Avoid Unstable CX Scores?

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: October 19, 2021 * 14 min read

With the help of a CX score, you can learn the percentage of customers who are pleased with your services or products and those who are not. It also helps you get valuable insights into the areas where you can exceed customer expectations and enhance customer retention rates.

A customer experience score gives you the average satisfaction of your customers. For instance, in an automated survey, customers rate a specific experience, like a service call or a product purchase, on a scale from "very satisfied" to "not satisfied at all." CX scores can fluctuate for several reasons. Let's discuss those reasons first.

Get your FREE hardcopy of the “CX Insights Manifesto”

FREE for all client-side Insights professionals.
We ship your hardcopy to USA, CA, UK, GER, FR, IT, and ESP.

What Makes CX Scores Fluctuate?

These are the main reasons why CX scores fluctuate.

  • Market dynamics – CX scores move because your customer experience moves, whether through the competition or through what is happening in your own organization. There is a natural dynamic in the market, so the CX score does not stay stable.

  • Sample size – The noise on top of this real movement can be huge, and the first reason is the sample size. Many people think the CX score becomes twice as stable if the sample size is doubled. That is not the case: to cut the standard error in half, you need four times the sample size, because the error only shrinks with the square root of the sample.

Sample size is NOT the answer to everything. Instead, we need to ask:

  • What can we do with the sample size?

  • How can we extract more information from the sample we already have?

  • Ratio score – Another part of the noise comes from the way we compute the score, especially NPS, which is a ratio score and not a mean. It takes the share of promoters and subtracts the share of detractors. Ratio scores, just like TOP-2-box scores, are very fragile at low sample sizes, so they have a high variance. Why is that?

Imagine you have a sample size of a hundred, and the typical share of promoters is just 5%. You would expect five out of a hundred respondents to fall into the promoter group. If two are missing for some reason, your promoter share suddenly drops by 40%.

If the same happens with the detractors, many more people end up in the neutral zone, and this small change produces a large swing. Because ratio scores always fluctuate more than means, the NPS is an unstable score.

  • Weighting – Another factor that makes the CX score even worse is weighting. Many companies take their affluent or high-value customers and overweight them by a factor of, say, 10. Combined with a low sample size, this is another reason for fluctuation.

In short, weighting multiplies the ratio effect and amplifies the whole measurement problem of NPS. The small simulation below illustrates how strongly sample size and the ratio calculation drive this noise.
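A small simulation, assuming an illustrative rating distribution, shows both effects: the noise of a simulated NPS shrinks only with the square root of the sample size (so 4X sample for half the error), and the ratio-based NPS fluctuates far more than the plain mean rating.

```python
# Simulation sketch: how strongly does NPS fluctuate at different sample sizes?
# The response distribution below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
ratings = np.arange(11)                                   # 0..10 rating scale
probs = np.array([.02, .02, .02, .03, .03, .05, .08, .15, .25, .20, .15])

def nps(sample):
    return 100 * ((sample >= 9).mean() - (sample <= 6).mean())

for n in (50, 100, 400, 1600):
    nps_scores = [nps(rng.choice(ratings, size=n, p=probs)) for _ in range(2000)]
    mean_scores = [rng.choice(ratings, size=n, p=probs).mean() for _ in range(2000)]
    print(f"n={n:5d}   NPS std = {np.std(nps_scores):5.1f}   mean-rating std = {np.std(mean_scores):.2f}")
```

Quadrupling the sample size roughly halves the spread of both scores, and at every sample size the NPS fluctuates much more than the mean rating.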

What Are The Simple Tactics To Mitigate The Effect?

Let's discuss some simple tactics to mitigate the effect.

  • Use fuzzy logic – Instead of computing the NPS the standard way, use fuzzy logic. It is not complicated; once you understand it, it's simple.

In the chart below, you can see how the different NPS ratings are scored.

The blue points indicate how the different ratings are treated. If someone rates six or lower (detractor), they are treated as minus one hundred percent, because the NPS is simply the average of these values across all customers.

Promoters, on the other hand, are treated as plus one hundred percent, because if everyone were a promoter, the average would be one hundred percent. Sevens and eights are treated as zero. That is basically how the standard NPS calculation treats responses: a binary thing, bad or good. With fuzzy logic you can say, for instance, that an eight is not the same as a seven; it is a positive neutral.

In the chart above, a seven is a negative neutral, and a nine is positive but not one hundred percent, maybe 70 or so. You may ask how to know which values to take. You can assume some values and check whether, on average, the same NPS emerges across many different periods or splits. If you try it out, you will find mappings that on average give pretty much the same NPS, but a different NPS for a specific moment, because they acknowledge that a seven is NOT an eight; your customers did not mean eight even though the standard calculation treats it the same way.

So, fuzzy logic makes more sense of what your customers are saying, but you still get an NPS score (see the sketch after this list).

  • Boost the sample – You can boost the sample for the extreme weights, i.e., the high-value customers: reach out to them more often or draw a larger sample, so you get more feedback from them. For instance, if a customer is worth double, you can reach out twice as often per person.

  • Moving average – You can compute a moving average with the past value. You may object that averaging the new value with the old one gives a wrong value, but neither the old nor the new value is the true value; both are heavily affected by noise at low sample sizes. You can filter out part of this noise simply by averaging them.

  • Weighted average – It's a simple technique: instead of the past value, you can also average the noisy score with some benchmark. If the general trend is upwards, there is a good chance that the specific trend is also upwards.

  • Simulation exercise – Anything you do should be simulated, because you might object:

    "I cannot simply average the old with the new value."

    How do you know the result will be better and not worse? You only know if you run the simulation. Remember, your score is NOT the truth. It fluctuates heavily through biases, i.e., through sample bias, through weighting, through measurement, etc.
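Here is a minimal sketch of the fuzzy-logic scoring and the moving average described above. The fuzzy values assigned to each rating are illustrative assumptions that you would calibrate on your own data, as is the 50/50 blending weight.

```python
# Sketch: fuzzy-logic NPS and a moving average with the previous period.
# The fuzzy mapping and the blending weight are illustrative assumptions.
import numpy as np

classic = {r: (-100 if r <= 6 else 0 if r <= 8 else 100) for r in range(11)}
fuzzy   = {0: -100, 1: -100, 2: -100, 3: -95, 4: -90, 5: -80, 6: -60,
           7: -20, 8: 20, 9: 70, 10: 100}

def score(ratings, mapping):
    return float(np.mean([mapping[r] for r in ratings]))

this_wave = [9, 7, 10, 8, 6, 9, 7, 10, 8, 9]

print("classic NPS:", score(this_wave, classic))
print("fuzzy NPS:  ", score(this_wave, fuzzy))

# Moving average: smooth the reported score with the previous period's score
last_wave_score, weight_new = 18.0, 0.5
print("smoothed NPS:", weight_new * score(this_wave, fuzzy) + (1 - weight_new) * last_wave_score)
```

Both versions still read like an NPS, but the fuzzy mapping uses the full information in the ratings, and the moving average dampens the wave-to-wave noise.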

Join the World's #1 "CX Analytics Masters" Course

What Is A Calibration Model?

Let's discuss how to calibrate your KPIs using modeling. Typically, machine learning is advised as the modeling technique because it is more flexible, has higher predictive power, and makes fewer assumptions. The idea is that you take your score and try to predict it. For instance, you predict the NPS score of affluent customers in Germany; another split might be retail customers in Switzerland. Whatever splits you have, you predict the score for each of them.

Predictors – You try to predict which score you can expect if you know its predictors. Some useful predictors are:

  • Score last term – The first predictor is the score of the last term. Scores are strongly autocorrelated: if loyalty is high in the USA this period, it will most likely be high next period. There might be a change, but the score of the last term is the single most informative predictor.

  • Score before the last term – You can also include the score before the last term, which helps detect whether the most recent value was an outlier.

  • Score change of other segments – You can include the change of the score in other segments. The score levels in other segments may be widely different, some high, some low, but that does not matter; if there is a typical change in other segments, it can be predictive for the score you want to predict. Typically, the company serves the same product to all segments and all regions.
  • Score change of other regions – If there is a new product or a new service initiative, it impacts many segments and many regions at once, so their changes are typically correlated. The score then changes not only across segments but also across regions.

  • Sample Size – Low sample size means a high variation or high change from past to future.

  • Mean, not score – As a predictor, use the mean of the underlying rating instead of the NPS score of the last term. You would not want to report the mean, because it is hard for decision-makers to interpret, but it is a more stable number, and machine learning can translate it into the score you do report. That's the beauty of machine learning: we don't need to know how all the predictors interrelate. There are often strong interactions among the predictors where conventional statistics do not work at all and typically deliver only half of the predictive power, so we need machine learning here.

  • TWIN splits – Don't average over all other splits; use the most correlated TWIN splits. Instead of looking at the change of all other segments, find out which splits typically correlate with the split at hand. Use the correlation matrix to find those TWINS; they tend to be the most predictive of each other.

  • Other indicators – There might be other indicators for the splits, like sales numbers, churn, etc. These are real-world numbers that indicate what is happening for your customers. Whatever numbers you have can go into the modeling, and machine learning will find out whether they are useful or not.

 

CAUTION:

Do not use other items of the same survey as predictors.

In an NPS survey you may have items like service, product, pricing, etc. You could take the segment's average score for, say, service quality as a predictor for the final score. It is a good predictor, but it will still fool you. Why?

Imagine you have a low sample size, with fifty respondents in one split. The main reason for the fluctuation is sampling: who happens to fall into the NPS pockets and how they are weighted. That same sample bias, along with the weighting bias, also applies to items like quality or service. So if some unusual people in the sample screw up your score, they will screw up these items as well. In other words, you would only predict the shared noise, not the truth, and the truth is what we want to know.

So, same-survey items will help predict the observed score, but they will not help predict the underlying truth, because they carry the same bias. That is why you need a simulation on top of the calibration modeling to find out how close you get to the truth. A compact sketch of such a calibration model follows below.
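Below is a compressed sketch of such a calibration model on synthetic data. The feature names, the synthetic noise levels, and the choice of gradient boosting are assumptions for illustration; note that, in line with the caution above, only external predictors go in, not other items from the same survey.

```python
# Calibration-model sketch: predict a split's score from external predictors
# (last-term score, last-term mean rating, sample size, twin-split change).
# Synthetic data and feature names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
n_splits = 500
truth = rng.normal(20, 10, n_splits)                          # unobserved "true" NPS per split
X = pd.DataFrame({
    "score_last_term":   truth + rng.normal(0, 8, n_splits),  # noisy past measurement
    "mean_last_term":    7.5 + truth / 50 + rng.normal(0, 0.2, n_splits),
    "sample_size":       rng.integers(20, 300, n_splits),
    "twin_split_change": rng.normal(0, 3, n_splits),
})
observed_now = truth + rng.normal(0, 8, n_splits)             # noisy current measurement

# Out-of-fold predictions act as the calibrated score
calibrated = cross_val_predict(GradientBoostingRegressor(), X, observed_now, cv=5)

print("raw error       :", round(np.abs(observed_now - truth).mean(), 2))
print("calibrated error:", round(np.abs(calibrated - truth).mean(), 2))
```

Out-of-fold predictions are used as the calibrated score so the model cannot simply memorize each split's own noise; on this synthetic data the calibrated score lands noticeably closer to the truth than the raw measurement.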


SPEED-TRAINING: Reinvent Your CX Analytics and Win C-Suite by Storm

Crystal Clear CX-Insights with 4X Impact of Actions

60-minute Condensed Wisdom from The World's #1 CX Analytics Course

*Book your slot now, last few seats left*

Why Calibration Needs Proof Through Simulation

Whether simple or ML-based, all calibration needs proof through a sampling simulation, and here is the method you can use.

Take a split that is large enough; if you don't have large splits, take the whole sample. An example of a large split is retail customers in the USA. With a sample of 5,000 or 10,000, the score is pretty close to the truth. Out of these thousands, you can now draw subsamples of, say, 10, 20, 30, 50, or 100 respondents and calibrate them.

You then try to improve the subsample score through calibration and compare it with the score of the full split, which you know is pretty close to the truth.

Consider an example below.

What you see on the left is the score change with and without calibration. The NPS score for a hundred or fewer respondents varies strongly; typically it can swing between -20 and +20 at such low sample sizes. With calibration it gets much more stable.

On the right, you can see the simulation result: the standard error, i.e., the deviation from the truth, plotted against the sample size. The error jumps around when the sample size is below a hundred, and even at a hundred there is a typical fluctuation of around five NPS points. It depends on your weighting and scoring scheme; it can be surprisingly high.

The orange line indicates the status quo, and the blue line the calibrated score. You can see that with 20 or 30 respondents, the calibrated score can be as stable as an uncalibrated score based on 150. This means you can now report splits with as few as 25 respondents.

The calibrated line is consistently better than the raw line, because a measurement is just a measurement: an estimate, not the truth. We simply calibrate it towards the truth, and the sampling exercise tells us whether we are on a good track.
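A bare-bones version of that sampling proof could look like the sketch below. The rating distribution is synthetic, and the "calibration" is a naive blend with the previous wave's score, purely to show the mechanics of comparing raw versus calibrated subsample scores against the near-truth of the full split.

```python
# Sketch of the simulation proof: draw small subsamples from one large split,
# compare raw vs. "calibrated" scores against the near-truth of the full split.
# Data and the naive blending rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
ratings = np.arange(11)
probs = np.array([.02, .02, .02, .03, .03, .05, .08, .15, .25, .20, .15])
full_split = rng.choice(ratings, size=10_000, p=probs)              # near-truth split

def nps(sample):
    return 100 * ((sample >= 9).mean() - (sample <= 6).mean())

near_truth = nps(full_split)
last_wave = nps(rng.choice(full_split, size=2_000, replace=False))  # previous wave, large-ish

for n in (10, 25, 50, 100, 200):
    raw_err, cal_err = [], []
    for _ in range(1000):
        sub = rng.choice(full_split, size=n, replace=False)
        raw = nps(sub)
        calibrated = 0.5 * raw + 0.5 * last_wave    # naive stand-in for a calibration model
        raw_err.append(abs(raw - near_truth))
        cal_err.append(abs(calibrated - near_truth))
    print(f"n={n:4d}   raw error {np.mean(raw_err):5.1f}   calibrated error {np.mean(cal_err):5.1f}")
```

The same harness works for any calibration rule: swap in your model where the naive blend sits and check whether the error against the near-truth really shrinks.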

In a Nutshell

So far, we discussed that scores built from limited sample sizes fluctuate strongly around the truth. Part of the problem is the way we calculate the scores, i.e., ratio calculations and weighting. Some easy fixes can make the score more stable, for instance fuzzy logic or averaging the current with the last score.

But the most efficient and the most precise way is to use machine learning. It is the most powerful way to bring every score closer to the truth.


How To Avoid Telephone Interviewer Bias In CX Surveys?

How To Avoid Telephone Interviewer Bias In CX Surveys?

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: October 19, 2021 * 5 min read

It is becoming difficult to reach the target audience using telephone interviewing. Although the systems can accommodate open-ended responses, capturing them well requires skilled interviewers.

CATI stands for Computer Assisted Telephone Interviewing. Just as computers have replaced the clipboard and questionnaire in face-to-face fieldwork, CATI has replaced traditional telephone interviews. It is best suited to structured interviews carried out in large numbers, where all possible answers have been worked out in advance.

Get your FREE hardcopy of the “CX Insights Manifesto”

FREE for all client-side Insights professionals.
We ship your hardcopy to USA, CA, UK, GER, FR, IT, and ESP.

We will discuss the reasons for and the impact of CATI bias. First, let's understand the two types of bias:

  • A real person – Having a real person on the line is helpful because people talk more with a real person. You get more feedback, but it is moderated: less extreme and less explicit.

  • Transcription – The second type of bias is the transcription bias because:

  • What your customer says is not transcribed one-to-one. This is how 99% of telephone interviews work today.

  • In most call centers, the interviewer categorizes the answer manually in real time. That is problematic because they work under time pressure, which significantly increases errors. To categorize in real time, you cannot have more than about ten categories, and with ten categories you cannot capture anything specific. There is also no mechanism to ensure that every interview is coded in the same way.

  • If the interview is manually transcribed, it will be a simplified representation of the true feedback.

  • It is nearly impossible to keep the coding consistent across many interviewers, even with good training. If, say, 20% of people don't like your service, you need to know what that means. Is it waiting time? Is it friendliness? Is it the way they are approached? Is it too many cold emails? Whatever it is, you need to know what bothers them in order to act and get better.

  • Result – Phone interviews as typically run are heavily biased and simplified. That is why, when we take this kind of feedback into the analytics, we see entirely different results: the data has been simplified away. Predictive analytics on rich open-ended feedback gives accurate results, but with this simplified data the predictive power is much lower. In principle, the telephone is a great channel, because what could be better than a person speaking with your customer? It is good for the customer and good for you, because the interviewer can act as an active listener and learn exactly what the customer means.

Join the World's #1 "CX Analytics Masters" Course

How To Avoid The Biases?

You can use the following techniques to avoid the CATI interview bias. 

  • Use the strength of CATI interviews: with a human on the line, you don't need fully automated active listening, which is not always optimal anyway. The interviewer can affirm and clarify what the customer means by saying:

“I get you.” 

Or if you don’t comprehend anything, you can ask:

“What do you mean?”

These active-listening phrases barely influence what customers say, but they do prompt them to talk more.

  • Standardize your active listening approach by using fixed prompts to avoid bias. Ask questions that let the customer elaborate, such as "What do you mean?". There is a small set of standard questions you can use, and they are easy to train.

It is a MUST to replace manual transcription with:

  • AI-powered automated transcription, available as API-based cloud services. You can transcribe whatever language your customers speak, and it is not expensive to integrate.

  • AI-powered categorization, also available as API-based cloud services. You can categorize in real time what your customers are saying; the same algorithms used to categorize survey feedback are real-time capable. A little automation here greatly improves your feedback quality, as sketched below.
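A minimal sketch of such a pipeline, assuming the open-source Whisper model for transcription and a generic zero-shot classifier for categorization; the commercial cloud APIs mentioned above have their own interfaces, and the file name and category list here are invented for illustration.

```python
# Sketch: replace manual note-taking with automated transcription + categorization.
# Open-source stand-ins (Whisper, a zero-shot classifier) are used here;
# commercial cloud APIs work similarly. File name and categories are illustrative.
import whisper                      # pip install openai-whisper
from transformers import pipeline   # pip install transformers

# 1. Transcribe the recorded answer verbatim (many languages supported)
model = whisper.load_model("base")
text = model.transcribe("customer_answer.wav")["text"]

# 2. Categorize the verbatim into CX topics in (near) real time
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
topics = ["waiting time", "friendliness", "pricing", "cold emails", "product quality"]
result = classifier(text, candidate_labels=topics, multi_label=True)

for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

This keeps the human interviewer focused on active listening while the machine produces a verbatim transcript and a consistent, detailed coding of every call.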


SPEED-TRAINING: Reinvent Your CX Analytics and Win C-Suite by Storm

Crystal Clear CX-Insights with 4X Impact of Actions

60-minute Condensed Wisdom from The World's #1 CX Analytics Course

*Book your slot now, last few seats left*

In a Nutshell

So far, we discussed that telephone interviews matter because their feedback can be very rich; there is nothing better than a human speaking with your customers. Collecting open-ends in this form is very valuable, but the interviewer bias is substantial. So, prefer combining human active listening with machine-made transcription and coding.


How To Avoid Sparse Text Feedback In CX Survey?

How To Avoid Sparse Text Feedback In CX Survey?

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: October 19, 2021 * 9 min read

Businesses need to make complete sense of the underlying meaning in user feedback surveys. AI can help detect common patterns in CX survey comments and indicate emerging trends in what customers are trying to express.

First, let's look at how to deal with sparse text feedback. There are several reasons for sparse text feedback that you CANNOT control; they are part of your target group.

Get your FREE hardcopy of the “CX Insights Manifesto”

FREE for all client-side Insights professionals.
We ship your hardcopy to USA, CA, UK, GER, FR, IT, and ESP.

What Are The Reasons For Sparse Text Feedback?

The reasons for sparse text feedback are as follows:

  • Low brand loyalty – If you have low brand loyalty, people do not talk much. On the other hand, if you have a niche product with strong brand loyalty, they will talk more because they are much more involved. So, there’s a natural bias based on your brand. 

  • Low involvement of a person – The same applies to the category. People talk more if it’s a high involvement category and talk less if it is a low involvement category.

  • Recency – Recency also plays a role. If something just happened, people are more involved and have more to say about it.
  • Frequency of other requests – Someone who has a lot to do and gets a feedback request every day, on whatever website they visit, is less likely to participate and responds less.
  • Age and other reasons – There are demographic factors like age, sex, etc. Some people don't like to write because typing on a phone or computer is cumbersome for them. All of these factors have an effect that you can hardly control, because it is part of the DNA of your customers.

There are other reasons that you CAN control:

  • Change the order – The first thing you can control is where you ask the open-end. If the open-end comes at the end of the questionnaire, after lots of close-ended items, people feel they have already said everything. In short, if the open-end comes AFTER other items as an add-on, change the order.

  • Change the channel – People simply talk and write less in online questionnaires. On a telephone channel, for instance, they feel more social pressure, because it is impolite to say nothing. So, consider changing the channel if sparse feedback is a problem.

  • Priming with an example – If you ask for feedback in an empty open-ended field, people tend to assume that any input, however short, is good enough. What you can do is give an example, or even pre-fill the field with one, to prime richer answers.

Join the World's #1 "CX Analytics Masters" Course

How To Leverage Audio And Video Feedback?

Audio and video feedback are richer than text feedback. There are many studies on this, but as a rule of thumb:

“They give at least 2X (double) more feedback than text feedback.”

It depends on the context, and the field is evolving quickly, because customers get more and more used to these kinds of feedback and become more tech-savvy. A few years ago, a study showed that the share of customers giving such feedback, then around 15%, doubled within a year as more people participated.

The acceptance of giving feedback via mobile is rapidly increasing, and it will soon be the standard. On a phone, everything is a push of a button away, unlike on desktops, where you never know whether the webcam and microphone are available or muted. Those barriers typically DON'T exist on a mobile phone, so the likelihood of getting feedback is high.

Do you know why audio and video feedback work? They work because they imply social pressure: when you record something, you feel that someone will listen to it. The process is as follows:

  • You ask for audio or video feedback; whether to make it mandatory is something you need to test. The response rate will be lower, but how much lower needs to be tested.

  • The recording can go in real-time to auto-transcription service. You record the audio, and it goes in real-time to a service that gives you back the text. 

  • The text can be automatically translated to the core language (by use of cloud services) in which you want to use that text.

  • The video can also be run through a cloud service that reads emotions and demographics, like the customer's age, from the footage.

  • You can use cloud services to read emotional components from the text.

  • You can use customized cloud services to categorize the text and the topics.

What Is Active Listening?

Active listening is an evolving technique (and technology) you should use for your open-ends. But what is active listening? It is an adaptive, real-time, individual response to feedback that encourages the customer to elaborate. You get more feedback because it is much more than a pre-formulated question.

But why does active listening work? It works because of the following reasons.

  • It is a positive affirmation, so the customer feels heard as a respondent.
  • The customer feels as if there is someone who values his input. So, it primes his expectations.
  • There is a kind of social pressure to write more and give a fuller response.

There are two different approaches, or implementation examples, of active listening in the market:

  • In-field probing – There is some probing within the open-end field.
  • Chatbot-type – This technique uses a chatbot where people answer, and it adaptively responds to them.

SPEED-TRAINING: Reinvent Your CX Analytics and Win C-Suite by Storm

Crystal Clear CX-Insights with 4X Impact of Actions

60-minute Condensed Wisdom from The World's #1 CX Analytics Course

*Book your slot now, last few seats left*

Consider an example of in-field probing below:

This example shows that when people write only a short text, the active-listening prompt pops up and asks what exactly was good. A meter (as shown below) also indicates how detailed the text already is and primes the respondent on how well they are doing.

So, this is one approach to active listening, and it depends a little on the complexity of your inputs. It can go wrong if what has been said is categorized incorrectly.

There is another, chatbot-type technique that is a little more foolproof. There is an open-end, someone writes something and submits it, and then a chatbot pops up and says:

"Hey, I'm a bot and I didn't quite understand what you wrote." The person can then clarify, because the bot has the option to be authentic and open. It can also ask whether it has understood the information correctly.

Even when the chatbot categorizes well what has been said, respondents feel it is not one hundred percent accurate and write more specifically about what they meant. This way, the dialogue probes for more feedback, and such a conversation roughly doubles the number of topics mentioned.
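To make the probing logic tangible, here is a toy sketch of the decision rule behind in-field probing. The thresholds and prompt texts are invented for illustration; a production version would also use the topic categorization to phrase the follow-up question.

```python
# Toy sketch of in-field probing: decide whether and how to probe, based on
# how much the respondent has written. Thresholds and wording are illustrative.
from typing import Optional

def probe(answer: str) -> Optional[str]:
    """Return a follow-up prompt if the answer is too sparse, else None."""
    words = answer.split()
    if not words:
        return "What is the main reason behind your rating?"
    if len(words) < 8:
        return "Thanks! Could you describe in a bit more detail what you mean?"
    if "good" in answer.lower() and len(words) < 15:
        return "Great to hear - what exactly was good?"
    return None  # the answer is rich enough, no probe needed

print(probe("good service"))
print(probe("The onboarding call was quick and the agent explained every fee upfront."))
```

The first answer triggers a probe, the second does not; that simple asymmetry is what pulls extra detail out of the sparse responders without annoying the talkative ones.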

In a Nutshell

So far, we discussed that open-ends often result in sparse responses, so you have to do something about it. Make sure you apply the standard rules to get your customers talking. Further, collect audio or video feedback that can be transcribed into text and categorized into topics.

Use active listening; it also applies to audio and video feedback. Keep in mind that rich unstructured feedback is the most customer-centric way of collecting feedback, and that is why it is so important to collect a lot of it.


How To Avoid And Control Selection Bias?

How To Avoid And Control Selection Bias?

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: October 16, 2021 * 9 min read

Selection bias occurs when you are not able to select samples without bias. Those who are aware of this fallibility can account for the biases; otherwise, these subconscious biases distort statistical analyses and outcomes.

Let's first understand what selection bias is. Selection bias arises when the customers who take part in the survey are fundamentally different from those who don't. We want to hear from all customers who have experience with our brand; if only a particular subset answers, the results get distorted.

Further, the motivation to participate can be linked to what matters to the customers. That is the dangerous part: when participation is linked to how essential the service or product is to the person answering.

Typically, loyal customers have a higher tendency to participate in a survey than customers who are not loyal to your brand. That is logical: they are doing you a favor by answering the questions. So you can think of survey participation itself as a signal of customer loyalty.

Get your FREE hardcopy of the “CX Insights Manifesto”

FREE for all client-side Insights professionals.
We ship your hardcopy to USA, CA, UK, GER, FR, IT, and ESP.

When Is Selection Bias Harmful?
  • Selection bias may whitewash your descriptives: feedback and ratings skew positive, simply because people can choose whether to participate. This is not a big problem as long as the selection bias stays constant. Selection bias can also distort driver analysis, but only under a specific condition; in general, it does not harm driver analysis outcomes.

  • Selection bias is harmful when the reason to participate is logically linked to both of the following entities:

    • The outcome metrics (NPS rating)
    • One or more selected topics

Let us consider an example. Talkative people love to talk and express themselves, so they tend to take part in surveys. You can imagine that they are also more likely to appreciate personal service, as it is important to them. If more talkative people participate, they mention personal service more often and rate your brand higher. As a result, you detect a predictive relationship between the topic "personal service" and NPS that is partly an artifact of who answered.

What Are The Strategies To Reduce Selection Bias?

There are two strategies to reduce selection bias:

  • Increase reply rates – The higher the reply rate, the higher the percentage of ordinary customers included. You want a higher reply rate not just to get more data but to get a more representative sample. Here are ways to achieve a high reply rate.

    • For email outreach – You can raise the reply rate with more touches, i.e., by sending reminder emails.

    • For popup/website – Your website should be eye-catching and attractive to gain the attention of the customers and get high reply rates.

    • Set incentives – Incentives improve reply rates. But be careful: if the incentive is linked to your service, you attract a certain subgroup of your customers. Use incentives that are NOT related to the drivers or outcomes.

    • Phone or in-person survey – You can change the survey mode, for instance to phone or in-person surveys. Social pressure keeps people from staying silent; that may sound a bit negative, but it simply means people try to be polite. When a person is speaking with them, they are less inclined to say NO.

  • Measure bias with modeling – The second strategy is modeling. You can measure the bias with modeling and clean the results from it; we will show how this works in a minute.

Join the World's #1 "CX Analytics Masters" Course

Why Is Comparing Detractors with Promoters Misleading?

The promoters are your loyal customers who are most likely to suggest your services or products to others. A person most likely tries a service when it is suggested by a friend or an acquaintance instead of being suggested through promotions or advertisements. So, you need to keep your promoters happy once they are identified. 

On the contrary, detractors are the customers who are dissatisfied with your services or products and are most likely to give negative feedback. So, you need to improve their experience to avoid a domino effect of bad referrals.

We need an approach to find out which topics are important. The first thing that comes to mind is comparing promoters with detractors, because we humans learn by correlating concepts and ideas. However, several problems arise when we look for differences this way:

  • Problem #1 – Wrong signals: When comparing promoters with detractors, we assume there is a causal reason behind every difference we find, but there is a large risk of wrong signals. It is a correlation exercise, not a causation exercise: two concepts correlating does not necessarily mean that one causes the other.
  • Problem #2 – Lack of differentiation: There is always a lack of differentiation because every topic correlates with every other. Topics may truly differ in impact, but the high correlation prevents us from seeing it. For instance, when we compare promoters with detractors, we may find that almost every positive topic appears in the detractor group as well as in the promoter group.

  • Problem #3 – Wrong directions: Because of the large correlations, everything seems equally important, so it becomes hard to find the key drivers. Consequently, we may get outright wrong indications. For instance, it is not always the case that promoters give positive feedback on a topic and detractors negative feedback.

What Is a Proper Reply Rate?

You may ask what a proper reply rate is. The honest answer: the higher, the better. Reply rates differ widely across industries and customer groups, e.g., 10%, 30%, and so on.

So, it depends on the context. Ask scientists, and they will tell you that you need a reply rate of at least 70% for results to be unbiased and representative. In reality, that is nearly impossible. The only way to get there is to knock on doors and call everyone you want to reach. That is not only expensive; some customers will not appreciate it either. It is typically unrealistic. In short, you need to live with the bias, and you should always consider cleaning the data with analytics.


SPEED-TRAINING: Reinvent Your CX Analytics and Win C-Suite by Storm

Crystal Clear CX-Insights with 4X Impact of Actions

60-minute Condensed Wisdom from The World's #1 CX Analytics Course

*Book your slot now, last few seats left*

How To Control The Bias With Modeling?

Let’s discuss how to control the bias with modeling using two different approaches.

  • Hard Approach:

In this approach, you force a subset of clients to answer, i.e., by sending survey emails, calling them, and following up again and again. It is a one-time exercise: you define a subset and make sure you get a reply rate of over 70% within this subset.

Whether or not the customer belongs to the subset is stored in a binary instrumental variable to compare the subsets’ results.

It can be a one-time effort to check whether the customer belongs to the subset or not.

  • Soft Approach:

You can also control the bias with a soft approach. Here you do not force people to answer; instead, you vary the intensity of the outreach. Typically you send out two emails to your customers, so some people receive just one email and some receive two, three, and so on. This way, you create variation in the pressure to respond.

You can deliberately follow up with a subset of customers using more emails.

You store the number of emails sent in an "instrumental" variable. If it is not email, the number of phone calls can be stored instead.

In short, the soft approach is meant to be ongoing, and the hard approach is a one-time exercise where you want to compare and measure the bias. 

So, the data needs to go into modeling where you have drivers and outcomes. In modeling, you aim to find a formula or a model that can predict outcomes with drivers. For instance, NPS rating is the outcome, and the drivers are the items if you have a close-ended questionnaire or topics if you have an open-ended questionnaire. 

Along with the instrumental variable, you can include context variables like demographics, segments, etc. The modeling will find out whether the instrumental variable is needed to predict the NPS rating. Spurious correlations between the responses of different customer groups are avoided because the model isolates the distinct contribution of the instrumental variable in predicting the rating.

Further, machine learning comes into play because you can expect nonlinearities and interactions between the variables, so flexible machine learning gives the highest predictive power here.

You can use a second model to predict the items/topics (drivers) from the instrumental and context variables. Why? Because you want to know whether the selection influences the drivers as well as the outcomes. The bias distorts the importance of topics only if it affects BOTH the drivers and the outcomes. If it influences ONLY the outcomes, it simply adds noise; the same holds for the driver analysis if it influences only the topics.

With these two models, you can determine whether the instrumental variable carries information that predicts both the drivers and the outcomes. This way, you understand the bias and which topics it applies to. Sometimes you will find that the bias is small, and once you know that, you don't need to worry anymore. If you use the soft approach on an ongoing basis, the instrumental variable simply becomes part of the model, so the impact you find for the topics/items will be the true impact: the bias from the selection is attributed to the selection variable.

So, integrating the instrumental variable into a continuously running model cleans the model, and the driver results will be true. When you do it as a one-time exercise, you need to run both models to determine whether the selection influenced both sides. If you do it continuously, including the instrument in the working model is enough. You cannot know in advance which type of bias you will face, but you can ensure that the model is cleaned.

In conclusion, if an instrumental variable influences BOTH the drivers (items/topics) and the outcomes (NPS rating), the results are biased; the sketch below illustrates this check.
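The sketch below illustrates the mechanics on synthetic data: the number of reminder emails acts as the instrumental variable, one model predicts the outcome and another predicts a driver, and the instrument's importance in both models indicates whether the selection touches both sides. The data-generating process and the choice of gradient boosting are assumptions made for the example.

```python
# Sketch: check whether an instrumental variable (number of reminder emails)
# carries information about BOTH the drivers and the outcome.
# Synthetic data; gradient boosting is one possible model choice.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
n = 2000
talkative = rng.normal(0, 1, n)                      # unobserved trait driving selection
# Less engaged (less talkative) customers need more reminder emails before answering
emails_sent = np.clip(np.round(2 - talkative + rng.normal(0, 0.7, n)), 1, 4).astype(int)
mentions_personal_service = (talkative + rng.normal(0, 1, n) > 0.5).astype(int)
nps = 7 + 0.8 * talkative + rng.normal(0, 1, n)

X_outcome = pd.DataFrame({"mentions_personal_service": mentions_personal_service,
                          "emails_sent": emails_sent})
outcome_model = GradientBoostingRegressor().fit(X_outcome, nps)

X_driver = pd.DataFrame({"emails_sent": emails_sent})
driver_model = GradientBoostingRegressor().fit(X_driver, mentions_personal_service)

for name, model, X, y in [("outcome", outcome_model, X_outcome, nps),
                          ("driver ", driver_model, X_driver, mentions_personal_service)]:
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    score = imp.importances_mean[list(X.columns).index("emails_sent")]
    print(f"{name} model - importance of emails_sent: {score:.3f}")
```

If the instrument shows meaningful importance in both models, the selection bias touches drivers and outcomes alike and must be controlled; if it matters in only one, it is merely adding noise.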

In a Nutshell

So far, we discussed that selection bias is when the reason for participation is linked to the answers. It is only harmful when biasing outcomes AND drivers. You can control selection bias with:

  • Better response rates
  • Modeling

Modeling can measure biases and clean results from them. It uses the following two approaches:

  • Hard Approach
  • Soft Approach
